Sunday, August 9, 2020

DNA digital data storage

 Abstract on DNA digital data storage  

Digital data has changed the way information is used and accessed. Every day a large amount of data is produced, and this requires high-density storage devices that can retain values for a long time. Deoxyribonucleic acid (DNA) can potentially be used for this purpose, as the principle is not very different from the conventional method used in a computer. DNA can serve as a robust, high-density storage medium even under unfavourable conditions. Theoretically, one can encode 2 bits per nucleotide, allowing single-stranded DNA (ssDNA) to store up to 455 exabytes per gram. In this paper, the method described can be used to store text data in DNA by compressing it, storing multiple copies, and providing security for the data.

Introduction

The demand for data storage devices is increasing as more and more data is generated every day. The total amount of information in digital format in the year 2012 was about 2.7 zettabytes. Presently, devices such as optical discs, portable hard drives, and flash drives are used to store data. But silicon and the other non-biodegradable materials used in data storage pollute the environment. They are also available only in limited quantities and would therefore be exhausted one day. The storage density of current digital devices is about 10 kb per square mm. Hence, newer technology is needed for data storage and archival. As data keeps growing, current storage technology will not be enough to hold it in the future, and even potentially important information can be lost due to lack of storage space.

How does DNA digital data storage work?

The digital data is encoded as a DNA sequence, the corresponding sequence is synthesized as artificial DNA, and the information is recovered by sequencing the artificial DNA strand and decoding it. This is the basic path for storing digital data in DNA and retrieving it.

Encoding data into the DNA sequence:

Computers work on a binary system of 1s and 0s. In the very first step, the digital data is mapped onto DNA. DNA has 4 nitrogenous bases: Adenine (A), Cytosine (C), Guanine (G) and Thymine (T). To store data in DNA, the binary 0s and 1s are converted into the A, T, G and C bases of DNA.

The binary codes used for storing information are 00 for A, 01 for G, 10 for C and 11 for T. The information in binary form is converted into a sequence of A, T, G and C, which gives a long digital DNA sequence.
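To make the mapping concrete, here is a minimal Python sketch (not taken from the paper) that converts text into a DNA-like base sequence using the 00/01/10/11 table above and converts it back. It deliberately ignores compression, redundancy and security, which the paper adds on top.

```python
# 2 bits per nucleotide, following the mapping described above.
BIT_TO_BASE = {"00": "A", "01": "G", "10": "C", "11": "T"}
BASE_TO_BIT = {base: bits for bits, base in BIT_TO_BASE.items()}

def text_to_dna(text: str) -> str:
    # text -> bytes -> bit string -> base sequence
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BIT_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna: str) -> str:
    # base sequence -> bit string -> bytes -> text
    bits = "".join(BASE_TO_BIT[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

print(text_to_dna("Hi"))                       # GACAGCCG
assert dna_to_text(text_to_dna("Hi")) == "Hi"  # round trip works
```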

References:

https://www.researchgate.net/publication/303318968_Digital_Data_Storage_on_DNA

https://geneticeducation.co.in/dna-digital-data-storage/

https://en.wikipedia.org/wiki/DNA_digital_data_storage

Sunday, May 10, 2020

Spatial computing

Abstract on Spatial computing



Spatial computing is human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces. It is an essential component for making our machines fuller partners in our work and play. This thesis presents a series of experiments in the discipline and an analysis of its fundamental properties.

Spatial Computing is the practice of using physical space to send input to and receive output from a computer.


What is spatial computing?


Spatial computing is digital technology that interacts with us in the places we live, work and play. A spatial computer, like Magic Leap 1, knows where it is in space. It uses a variety of sensors and cameras to build an understanding of both its environment and its user. This enables immersive, mixed-reality experiences that seamlessly blend the digital and the real world.

Spatial Computing Devices


VR Headsets

Hardware development in the last few years has accelerated to the point where we have a large spectrum of wearable Virtual Reality products, ranging from the high-end HTC Vive, Oculus Rift, and Samsung Gear VR all the way to economical smartphone-powered headsets sold at most major retailers.

AR Glasses

Glasses and goggles equipped to project data and two-dimensional imagery are more of a specialty product than VR headsets. They’re generally designed for workplace and industrial applications. Products like the Google Glass and Vuzix Blade are good examples, but with Microsoft’s HoloLens and Magic Leap’s Lightwear poised to access mainstream consumers, we will soon see AR wearables gaining ground on their larger, heavier VR counterparts. 

Hybrid Gear

A hybrid wearable is one that shifts effectively between VR and AR – ideally encompassing MR as well. A truly versatile holistic pair of XR glasses is still in the R&D phase with several tech giants. We expect to see huge players like Apple, Samsung, Google, and Microsoft acquiring start-ups, cherry-picking the best cutting edge XR hardware and software until they can produce affordable consumer-ready glasses that realize the hybrid dream. It’s not decades away, but it’s not here yet.




https://www.stambol.com/2019/03/12/spatial-computing-in-less-than-140-characters-and-more/
https://medium.com/@victoragulhon/what-is-spatial-computing-777fae84a499
https://developer.magicleap.com/en-us/learn/guides/design-spatial-computing

latest seminar topics on Computer security for Computer Science

Computer security, cybersecurity or information technology security (IT security) is the protection of computer systems and networks from the theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide. The field is becoming more important due to increased reliance on computer systems, the Internet and wireless network standards such as Bluetooth and Wi-Fi, and due to the growth of "smart" devices, including smartphones, televisions, and the various devices that constitute the "Internet of things". Owing to its complexity, both in terms of politics and technology, cybersecurity is also one of the major challenges in the contemporary world.

Internet security
Automotive security
Cybersex trafficking
Cyberwarfare
Computer security
Mobile security
Network security

Advanced persistent threat
Computer crime
Vulnerabilities
Eavesdropping
Malware
Spyware
Ransomware
Trojans
Viruses
Worms
Rootkits
Bootkits
Keyloggers
Screen scrapers
Exploits
Backdoors
Logic bombs
Payloads
Denial of service
Web shells
Web application security
Phishing

Computer access control
Application security
Antivirus software
Secure coding
Secure by default
Secure by design
Secure operating systems
Authentication
Multi-factor authentication
Authorization
Data-centric security
Encryption
Firewall
Intrusion detection system
Mobile secure gateway
Runtime application self-protection (RASP)

latest seminar topics on computer science: Edge detection, Virtual private cloud, Data science, Machine learning

Edge detection


Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in one-dimensional signals is known as step detection and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.
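To make the idea concrete, here is a small, hedged sketch of one classical approach, the Sobel operator, applied to a synthetic image. It assumes NumPy and SciPy are available and is only an illustration, not a reference implementation of any particular detector.

```python
# Gradient-based edge detection (Sobel) on a synthetic image.
import numpy as np
from scipy import ndimage

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                    # a bright square on a dark background

sx = ndimage.sobel(img, axis=0)            # brightness gradient along rows
sy = ndimage.sobel(img, axis=1)            # brightness gradient along columns
magnitude = np.hypot(sx, sy)               # overall gradient magnitude

edges = magnitude > magnitude.max() * 0.5  # keep only sharp brightness changes
print(edges.sum(), "edge pixels found")    # non-zero only along the square's border
```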


Virtual private cloud


A virtual private cloud (VPC) is an on-demand configurable pool of shared computing resources allocated within a public cloud environment, providing a certain level of isolation between the different organizations (denoted as users hereafter) using the resources. The isolation between one VPC user and all other users of the same cloud (other VPC users as well as other public cloud users) is normally achieved through allocation of a private IP subnet and a virtual communication construct (such as a VLAN or a set of encrypted communication channels) per user. In a VPC, this isolation mechanism is accompanied by a VPN function (again, allocated per VPC user) that secures, by means of authentication and encryption, the organization's remote access to its VPC resources. With these isolation levels in place, an organization using this service is in effect working on a 'virtually private' cloud (that is, as if the cloud infrastructure were not shared with other users), hence the name VPC. VPC is most commonly used in the context of cloud infrastructure as a service. In this context, the infrastructure provider, providing the underlying public cloud infrastructure, and the provider realizing the VPC service over this infrastructure, may be different vendors.


Data science


Data science is an inter-disciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. Data science is related to data mining, deep learning and big data. Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It uses techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, and information science. Turing Award winner Jim Gray imagined data science as a "fourth paradigm" of science (empirical, theoretical, computational and now data-driven) and asserted that "everything about science is changing because of the impact of information technology" and the data deluge.

Machine learning


Machine learning (ML) is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.
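As a toy illustration of "learning from training data without being explicitly programmed", here is a minimal sketch assuming scikit-learn is installed; the data set and feature meanings are invented for the example and are not from any source cited here.

```python
# Fit a simple model to labelled training data, then predict an unseen case.
from sklearn.linear_model import LogisticRegression

# hypothetical training data: [hours_studied, hours_slept] -> passed exam (1) or not (0)
X_train = [[1, 4], [2, 5], [8, 7], [9, 8], [3, 6], [10, 6]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)        # the algorithm "learns" from the examples
print(model.predict([[7, 7]]))     # predicts a label for data it has never seen
```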


VLAN hopping


VLAN hopping is a computer security exploit, a method of attacking networked resources on a virtual LAN (VLAN). The basic concept behind all VLAN hopping attacks is for an attacking host on a VLAN to gain access to traffic on other VLANs that would normally not be accessible. There are two primary methods of VLAN hopping: switch spoofing and double tagging. Both attack vectors can be mitigated with proper switch port configuration.
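For illustration only, the sketch below uses Scapy (a Python packet library) to show what a double-tagged (802.1Q-in-802.1Q) frame looks like; the VLAN IDs, addresses and interface name are hypothetical, and such traffic should only ever be generated on a lab network you are authorized to test. Proper switch-port configuration (disabling automatic trunk negotiation on access ports and not using the attacker-reachable VLAN as a trunk's native VLAN) prevents the inner tag from being honoured.

```python
# Build (but do not send) a double-tagged frame to visualise the attack structure.
from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

frame = (
    Ether(dst="ff:ff:ff:ff:ff:ff")
    / Dot1Q(vlan=1)            # outer tag: the attacker's (native) VLAN
    / Dot1Q(vlan=20)           # inner tag: the target VLAN
    / IP(dst="10.0.20.5")      # hypothetical host on VLAN 20
    / ICMP()
)
frame.show()                   # inspect the stacked 802.1Q headers
# sendp(frame, iface="eth0")   # would transmit on a real interface (lab use only)
```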

Data mining 


Data mining is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

Virtual LAN


A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer (OSI layer 2). LAN is the abbreviation for local area network and in this context virtual refers to a physical object recreated and altered by additional logic. VLANs work by applying tags to network frames and handling these tags in networking systems – creating the appearance and functionality of network traffic that is physically on a single network but acts as if it is split between separate networks. In this way, VLANs can keep network applications separate despite being connected to the same physical network, and without requiring multiple sets of cabling and networking devices to be deployed.

Sass (stylesheet language)


Sass (short for syntactically awesome style sheets) is a style sheet language initially designed by Hampton Catlin and developed by Natalie Weizenbaum. After its initial versions, Weizenbaum and Chris Eppstein continued to extend Sass with SassScript, a scripting language used in Sass files. Sass is a preprocessor scripting language that is interpreted or compiled into Cascading Style Sheets (CSS); SassScript is the scripting language itself. Sass consists of two syntaxes. The original syntax, called "the indented syntax," uses a syntax similar to Haml. It uses indentation to separate code blocks and newline characters to separate rules. The newer syntax, "SCSS" (Sassy CSS), uses block formatting like that of CSS. It uses braces to denote code blocks and semicolons to separate rules within a block. The indented syntax and SCSS files are traditionally given the extensions .sass and .scss, respectively.

Virtual private network


A virtual private network (VPN) extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. Applications running on an end system (PC, smartphone etc.) across a VPN may therefore benefit from the functionality, security, and management of the private network. Encryption is a common, though not an inherent, part of a VPN connection.


ERP  (Enterprise Resource Planning )


Enterprise resource planning (ERP) is business process management software that allows an organization to use a system of integrated applications to manage the business and automate many back-office functions related to technology, services and human resources. ERP software integrates all facets of an operation, including product planning, development, manufacturing, sales and marketing. An important goal of ERP is to facilitate the flow of information so business decisions can be data-driven. ERP software suites are built to collect and organize data from various levels of an organization to provide management with insight into key performance indicators (KPIs) in real time.

Claytronics


Claytronics is a system designed to implement the concept of programmable matter, that is, material which can be manipulated electronically in three dimensions in the same way that two-dimensional images can be manipulated through computer graphics. Such materials would be composed of "catoms" (claytronic atoms), which would, in analogy with actual atoms, be the smallest indivisible units of the programmable matter. As of 2006, researchers had already created a prototype catom 44 millimeters in diameter. The goal is to eventually produce catoms that are one or two millimeters in diameter, small enough to produce convincing replicas.





Google cloud computing (GCP)

Abstract on Google cloud computing:

Google Cloud Platform (GCP) is Google’s public cloud offering comparable to Amazon Web Services and Microsoft Azure. The difference is that GCP is built upon Google's massive, cutting-edge infrastructure that handles the traffic and workload of all Google users. There is a wide range of services available in GCP ranging from Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) to completely managed Software-as-a-Service (SaaS). We will discuss the available infrastructure components and how they provide a powerful and flexible foundation on which to build your applications.


Google Cloud Platform (GCP) is one of the leaders among cloud APIs. Although it was established only five years ago, GCP has expanded notably thanks to the suite of public cloud services it has built on a huge, solid infrastructure. GCP allows developers to use these services by accessing the GCP RESTful API, which is described through HTML pages on its website. However, the documentation of the GCP API is written in natural language (English prose) and therefore shows several drawbacks, such as Informal Heterogeneous Documentation, Imprecise Types, Implicit Attribute Metadata, Hidden Links, Redundancy and Lack of Visual Support.

What is Cloud Computing:


Cloud computing is a highly scalable and cost-effective infrastructure for running HPC, enterprise and Web applications. However, the growing demand for Cloud infrastructure has drastically increased the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high operational cost, which reduces the profit margin of Cloud providers, but also leads to high carbon emissions, which is not environmentally friendly. Hence, energy-efficient solutions are required to minimize the impact of Cloud computing on the environment. In order to design such solutions, a deep analysis of Cloud infrastructure is required with respect to its power efficiency. For more, see the separate seminar topic on cloud computing.

What is google cloud computing:

Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail and YouTube. Alongside a set of management tools, it provides a series of modular cloud services including computing, data storage, data analytics and machine learning. Registration requires a credit card or bank account details.

Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless computing environments.


Google Cloud reference architecture

https://cloud.google.com/migrate/compute-engine/docs/4.5/concepts/architecture/gcp-reference-architecture


What are Google Cloud Platform (GCP) Services?


Google offers a wide range of Services. Following are the major Google Cloud Services:

  • Compute
  • Networking
  • Storage and Databases
  • Big Data
  • Machine Learning
  • Identity & Security
  • Management and Developer Tools
  • Cloud AI
  • IoT
  • API Platform
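As a concrete taste of one of these services, the hedged sketch below writes and reads an object in Cloud Storage using the official Python client (google-cloud-storage). The project, bucket and object names are hypothetical, and the calls assume credentials (for example, Application Default Credentials) are already configured.

```python
# Minimal Cloud Storage round trip with the official Python client.
from google.cloud import storage

client = storage.Client()                      # picks up default credentials
bucket = client.bucket("my-example-bucket")    # hypothetical bucket name
blob = bucket.blob("reports/hello.txt")        # hypothetical object path

blob.upload_from_string("Hello from GCP!")     # write the object
print(blob.download_as_text())                 # read it back
```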


Reference Site:
https://en.wikipedia.org/wiki/Google_Cloud_Platform
https://www.edureka.co/blog/what-is-google-cloud-platform/

Sunday, May 3, 2020

Web Scraping

Since the evolution of the WWW, the pattern of internet use and data exchange has changed rapidly. As ordinary people joined the internet and started to use it, many new techniques were promoted to boost the network. At the same time, new technologies were introduced to enhance computers and network facilities, which automatically decreased the cost of hardware and of running websites. Because of all these changes, a large number of users have joined and use internet services. Daily use of the internet has resulted in a tremendous amount of data being available online. Businesses, academicians and researchers all share their advertisements and information on the internet so that they can reach people quickly and easily.

As a result of exchanging, sharing and storing data on the internet, a new problem has arisen: how to handle such data overload, and how a user can access the best information with the least effort. To solve this issue, researchers introduced a technique called web scraping. Web scraping is a very important technique that is used to generate structured data from the unstructured data available on the web.
Image source: https://www.edureka.co/blog/web-scraping-with-python/


The structured data generated by scraping is then stored in a central database and analysed in spreadsheets. Traditional copy-and-paste, text grepping and regular expression matching, HTTP programming, HTML parsing, DOM parsing, web-scraping software, vertical aggregation platforms, semantic annotation recognition and computer-vision web-page analysers are some of the common techniques used for data scraping. Previously most users relied on simple copy-and-paste for gathering and analysing data on the internet, but it is a tedious technique in which a lot of data has to be copied by hand and stored in computer files.

Compared to this, web-scraping software is the easiest scraping technique. Nowadays there is a lot of software available in the market for web scraping. This paper gives an overview of the information-extraction technique known as web scraping, the different techniques of web scraping, and some of the recent tools used for web scraping.

Keywords- Web mining, information extraction, web scraping


What is web scraping?


If you’ve ever copied and pasted information from a website, you’ve performed the same function as any web scraper, only on a microscopic, manual scale.

Web scraping, also known as web data extraction, is the process of retrieving or “scraping” data from a website. Unlike the mundane, mind-numbing process of manually extracting data, web scraping uses intelligent automation to retrieve hundreds, millions, or even billions of data points from the internet’s seemingly endless frontier.

More than a modern convenience, the true power of web scraping lies in its ability to build and power some of the world’s most revolutionary business applications. ‘Transformative’ doesn’t even begin to describe the way some companies use web scraped data to enhance their operations, informing executive decisions all the way down to individual customer service experiences. 




Why Web Scraping?

Web scraping is used to collect large amounts of information from websites. But why does someone need to collect such large amounts of data from websites? To understand this, let’s look at the applications of web scraping:

Price Comparison: Services such as ParseHub use web scraping to collect data from online shopping websites and use it to compare the prices of products.

Email address gathering: Many companies that use email as a medium for marketing use web scraping to collect email IDs and then send bulk emails.

Social Media Scraping: Web scraping is used to collect data from Social Media websites such as Twitter to find out what’s trending.

Research and Development: Web scraping is used to collect a large set of data (Statistics, General Information, Temperature, etc.) from websites, which are analyzed and used to carry out Surveys or for R&D.

Job listings: Details regarding job openings and interviews are collected from different websites and then listed in one place so that they are easily accessible to the user.


How does Web Scraping work?

When you run the code for web scraping, a request is sent to the URL that you have mentioned. As a response to the request, the server sends the data and allows you to read the HTML or XML page. The code then parses the HTML or XML page, finds the data and extracts it.

To extract data using web scraping with Python, you need to follow these basic steps (a minimal code sketch follows the list):

  • Find the URL that you want to scrape
  • Inspect the page
  • Find the data you want to extract
  • Write the code
  • Run the code and extract the data
  • Store the data in the required format 
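Here is a minimal sketch of those steps using the requests and BeautifulSoup libraries. The URL and CSS selectors are hypothetical placeholders for whatever structure the real page uses.

```python
# Fetch a page, parse it, extract fields, and store them as CSV.
import csv
import requests
from bs4 import BeautifulSoup

url = "https://example.com/products"            # step 1: URL to scrape (placeholder)
resp = requests.get(url, timeout=10)            # step 5: fetch the page
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")  # step 2-3: inspect/parse the HTML
rows = []
for item in soup.select("div.product"):         # hypothetical elements holding the data
    name = item.select_one("h2")
    price = item.select_one("span.price")
    if name and price:
        rows.append({"name": name.get_text(strip=True),
                     "price": price.get_text(strip=True)})

with open("products.csv", "w", newline="") as f:  # step 6: store in the required format
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(rows)
```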

What are the different scraping techniques?


Manual scraping
  •     Copy-pasting

Automated Scraping
  •     HTML Parsing
  •     DOM Parsing
  •     Vertical Aggregation
  •     XPath
  •     Google Sheets
  •     Text Pattern Matching


Sources / References:

http://www.ijfrcsce.org/download/browse/Volume_4/April_18_Volume_4_Issue_4/1524638955_25-04-2018.pdf

https://scrapinghub.com/what-is-web-scraping

https://www.edureka.co/blog/web-scraping-with-python/

Web Scraping PPT: https://www.slideshare.net/SelectoCompany/web-scraping-76200621








Thursday, March 5, 2020

Edge Computing

The explosive growth of internet of things (IoT) devices, and the increasing computing power of these devices, have resulted in unprecedented volumes of data. And data volumes will continue to grow as 5G networks increase the number of connected mobile devices.

 In the past, the promise of cloud and artificial intelligence (AI) was to automate and speed innovation by driving actionable insight from data. But the unprecedented scale and complexity of data that’s created by connected devices has outpaced network and infrastructure capabilities.

Sending all that device-generated data to a centralized data center or to the cloud causes bandwidth and latency issues. Edge computing offers a more efficient alternative: data is processed and analyzed closer to the point where it is created. Because data does not traverse over a network to a cloud or data center in order to be processed, latency is significantly reduced. Edge computing — and mobile edge computing on 5G networks — enables faster and more comprehensive data analysis, creating the opportunity for deeper insights, faster response times and improved customer experiences.

Edge computing infrastructure - seminar topic


Deploying Edge Data Centers

While edge computing deployments can take many forms, they generally fall into one of three categories:

1. Local devices that serve a specific purpose, such as an appliance that runs a building’s security system or a cloud storage gateway that integrates an online storage service with premise-based systems, facilitating data transfers between them.
2. Small, localized data centers (1 to 10 racks) that offer significant processing and storage capabilities.
3. Regional data centers with more than 10 racks that serve relatively large local user populations.

Regardless of size, each of these edge examples is important to the business, so maximizing availability is essential.

It’s critical, then, that companies build edge data centers with the same attention to reliability and security as they would give to a large, centralized data center. This guide is intended to provide the information you need to build secure, reliable, and manageable high-performance edge data centers that can help fuel your organization’s digital transformation.

How IoT is Driving the Need for Edge Computing

The IoT involves collecting data from various sensors and devices and applying algorithms to the data to glean insights that deliver business benefits. Industries ranging from manufacturing, utility distribution and traffic management to retail, medicine and even education are using the technology to improve customer satisfaction, reduce costs, improve security and operations, and enrich the end-user experience, to name a few benefits.

A retailer, for example, may use data from IoT applications to better serve customers, by anticipating what they may want based on past purchases, offering on-the-spot discounts, and improving their own customer service groups. For industrial environments, IoT applications can be used to support preventive maintenance programs by providing the ability to detect when the performance of a machine varies from an established baseline, indicating it’s in need of maintenance.

The list of potential use cases is virtually endless, but they all have one thing in common: collecting lots of data from many sensors and smart devices and using it to drive business improvements.

Many IoT applications rely on cloud-based resources for compute power, data storage and application intelligence that yields business insights. However, it’s often not optimal to send all the data generated by sensors and devices directly to the cloud, for reasons that generally come down to bandwidth, latency and regulatory requirements.

Real life examples of edge computing

Oil rigs provide a good example of how edge computing is used in the real world. Because of their remote offshore locations, they rely on the technology to mitigate lengthy distances to data centres and poor network connections. It's also costly, inefficient and time-consuming for rigs to send real-time data to a centralised cloud. Having a localised data-processing facility helps a rig run without delay or interruption.

Similarly, autonomous vehicles, which operate with low connectivity, need real-time data analysis to navigate roads. Gateways hosted within the vehicle can aggregate data from other vehicles, traffic signals, GPS devices, proximity sensors, onboard control units and cloud applications, and can process and analyse this information locally.

What next for edge computing?

According to Gartner's Digital Business Will Push Infrastructures to the Edge report, data generated and processed by enterprises outside of a traditional data centre will increase from less than 10% in 2018 to 75% by 2022.

This is hardly surprising given the increasing popularity of the IoT in both business and consumer use. And while we may still be a way off from fully autonomous vehicles, those that are on the road already, or will be shortly, still need this type of technology to operate properly.

The analyst house has also predicted, in its Hype Cycle for Emerging Technologies 2019 report, that additional edge technologies, notably AI and analytics, will come to play a key role in this area over the coming five to ten years.

Drawbacks of edge computing

One drawback of edge computing is that it can increase attack vectors. With the addition of more ‘smart’ devices into the mix, such as edge servers and IoT devices that have robust built-in computers, there are new opportunities for malicious actors to compromise these devices.

Another drawback with edge computing is that it requires more local hardware. For example, while an IoT camera needs a built-in computer to send its raw video data to a web server, it would require a much more sophisticated computer with more processing power in order for it to run its own motion-detection algorithms. But the dropping costs of hardware are making it cheaper to build smarter devices.


Sources / References:

https://www.ibm.com/in-en/cloud/what-is-edge-computing
https://www.apc.com/us/en/solutions/business-solutions/edge-computing/what-is-edge-computing.jsp
https://www.itpro.co.uk/cloud/31389/what-is-edge-computing
https://www.cbinsights.com/research/what-is-edge-computing/
https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/

Fog Computing | Fog Networking | Fogging

Fog computing or fog networking, also known as fogging, is an architecture that uses edge devices to carry out a substantial amount of computation, storage, and communication locally, with the results routed over the internet backbone.

Both cloud computing and fog computing provide storage, applications, and data to end-users. However, fog computing is closer to end-users and has wider geographical distribution.

‘Cloud computing’ is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. Cloud computing can be a heavyweight and dense form of computing power.
Fog Computing | Fog Networking | Fogging (Image credit: Online Design/TechTarget)


The term 'Fog Computing' was defined by Prof. Jonathan Bar-Magen Numhauser in the year 2011 as part of his PhD dissertation project proposal. In January 2012 he presented the concept in the Third International Congress of Silenced Writings in the University of Alcala and published in an official source.

Sometimes loosely equated with edge computing or fogging, fog computing facilitates the operation of compute, storage, and networking services between end devices and cloud computing data centers. While edge computing typically refers to the location where services are instantiated, fog computing implies distribution of the communication, computation, storage resources, and services on or close to devices and systems under the control of end-users. Fog computing is a medium-weight, intermediate level of computing power. Rather than a substitute, fog computing often serves as a complement to cloud computing.


What is fog computing?

Fog computing refers to a decentralized computing structure, where resources, including the data and applications, get placed in logical locations between the data source and the cloud; it also is known by the terms ‘fogging’ and ‘fog networking.’

The goal of this is to bring basic analytic services to the network edge, improving performance by positioning computing resources closer to where they are needed, thereby reducing the distance that data needs to be transported on the network, improving overall network efficiency and performance. Fog computing can also be deployed for security reasons, as it has the ability to segment bandwidth traffic, and introduce additional firewalls to a network for higher security. 

Fog computing has its origins as an extension of cloud computing, which is the paradigm to have the data, storage and applications on a distant server, and not hosted locally. With the cloud computing model, the client can purchase the services from a provider, which delivers not only the service, but also the maintenance and upgrades, with the plus that they can be accessed anywhere, and facilitating work by teams.


History of fog computing

The term fog computing is associated with Cisco, which registered the name ‘Cisco Fog Computing.’ The name plays on cloud computing: clouds are up in the sky, while fog sits close to the ground. In 2015, the OpenFog Consortium was created with founding members ARM, Cisco, Dell, Intel, Microsoft and Princeton University, and additional contributing members including GE, Hitachi and Foxconn. IBM introduced the closely allied, and mostly (although not always) synonymous, term ‘edge computing.’


How fog computing works

It is important to note that fog networking complements -- not replaces -- cloud computing; fogging allows for short-term analytics at the edge, and the cloud performs resource-intensive, longer-term analytics. While edge devices and sensors are where data is generated and collected, they sometimes don't have the compute and storage resources to perform advanced analytics and machine-learning tasks. Though cloud servers have the power to do these, they are often too far away to process the data and respond in a timely manner. In addition, having all endpoints connecting to and sending raw data to the cloud over the internet can have privacy, security and legal implications, especially when dealing with sensitive data subject to regulations in different countries. Popular fog computing applications include smart grid, smart city, smart buildings, vehicle networks and software-defined networks.

Sources / References:

https://en.wikipedia.org/wiki/Fog_computing
https://internetofthingsagenda.techtarget.com/definition/fog-computing-fogging
https://www.techradar.com/in/news/what-is-fog-computing

Health technology

Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures, and systems developed to solve a health problem and improve quality of lives". This includes pharmaceuticals, devices, procedures, and organizational systems used in the healthcare industry, as well as computer-supported information systems.

In the United States, these technologies involve standardized physical objects, as well as traditional and designed social means and methods to treat or care for patients. During the last five decades, technology development in the healthcare industry has been remarkable.

Healthcare technology, commonly referred to as “healthtech,” refers to the use of technologies developed for the purpose of improving any and all aspects of the healthcare system. From telehealth to robotic-assisted surgery, our guide will walk you through what it is and how it's being used.

seminar topic on Health technology


What Is Healthcare Technology?

Healthcare technology refers to any IT tools or software designed to boost hospital and administrative productivity, give new insights into medicines and treatments, or improve the overall quality of care provided. Today’s healthcare industry is a $2 trillion behemoth at a crossroads. Currently being weighed down by crushing costs and red tape, the industry is looking for ways to improve in nearly every imaginable area. That’s where healthtech comes in. Tech-infused tools are being integrated into every step of our healthcare experience to counteract two key trouble spots: quality and efficiency.

The way we purchase healthcare is becoming more accessible to a wider group of people through the insurance technology industry, sometimes called "insurtech." Patient waiting times are declining and hospitals are more efficiently staffed thanks to artificial intelligence and predictive analytics. Even surgical procedures and recovery times are being shortened thanks to ultra-precise robots that assist in surgeries and make some procedures less invasive.


Sources / References:

https://en.wikipedia.org/wiki/Health_technology_in_the_United_States

Artificial intelligence (AI)

Artificial intelligence (AI) is an emerging area of computer science which tries to make computers more intelligent, and Machine Learning is one of its most rapidly growing branches. Machine Learning algorithms have long been designed and used to analyze large datasets in the medical field. Presently, Machine Learning algorithms serve as indispensable tools for data analysis.

Machine Learning can be regarded as one of the most rapidly growing fields, sitting at the intersection of computer science and statistics and drawing on AI and data science. In this chapter, we focus on various Machine Learning algorithms that can be used to classify EEG datasets into two groups, namely an Alzheimer's disease group and a healthy group. It can be concluded that, with progress in advanced computing and AI, medical data analysis and classification have become simpler and easier.
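As a hedged sketch of such a two-group classification, the example below trains and evaluates a classifier on synthetic "EEG-like" feature vectors (a real study would extract band-power or similar features from actual recordings); it assumes NumPy and scikit-learn and is not the method of any cited work.

```python
# Binary classification of synthetic feature vectors with train/test evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
healthy = rng.normal(loc=0.0, scale=1.0, size=(100, 16))      # 16 features per subject
alzheimers = rng.normal(loc=0.8, scale=1.0, size=(100, 16))   # shifted distribution
X = np.vstack([healthy, alzheimers])
y = np.array([0] * 100 + [1] * 100)                           # 0 = healthy, 1 = AD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))   # held-out accuracy
```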


What is Artificial Intelligence?

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.


HOW IS AI USED?

Artificial intelligence generally falls under two broad categories:

Narrow AI: Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.

Artificial General Intelligence (AGI): AGI, sometimes referred to as "Strong AI," is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.

Artificial Intelligence Examples

  • Smart assistants (like Siri and Alexa)
  • Disease mapping and prediction tools
  • Manufacturing and drone robots
  • Optimized, personalized healthcare treatment recommendations
  • Conversational bots for marketing and customer service
  • Robo-advisors for stock trading
  • Spam filters on email
  • Social media monitoring tools for dangerous content or false news
  • Song or TV show recommendations from Spotify and Netflix

Narrow Artificial Intelligence

Narrow AI is all around us and is easily the most successful realization of artificial intelligence to date. With its focus on performing specific tasks, Narrow AI has experienced numerous breakthroughs in the last decade that have had "significant societal benefits and have contributed to the economic vitality of the nation," according to "Preparing for the Future of Artificial Intelligence," a 2016 report released by the Obama Administration.

A few examples of Narrow AI include:

  • Google search
  • Image recognition software
  • Siri, Alexa and other personal assistants
  • Self-driving cars
  • IBM's Watson 
  • Machine Learning & Deep Learning
Sources / References:

https://en.wikipedia.org/wiki/Artificial_intelligence
https://www.sciencedirect.com/science/article/pii/B9780128153925000058
https://builtin.com/artificial-intelligence

Wednesday, March 4, 2020

Artificial Neural Network

Artificial Neural Network - Seminar Topic

Artificial Neural Network (ANN) is gaining prominence in various applications like pattern recognition, weather prediction, handwriting recognition, face recognition, autopilot, robotics, etc. In electrical engineering, ANN is being extensively researched for load forecasting, processing substation alarms and predicting weather for solar radiation and wind farms. With more focus on smart grids, ANN has an important role. ANN belongs to the family of Artificial Intelligence, along with Fuzzy Logic, Expert Systems and Support Vector Machines. This paper gives an introduction to ANN and the way it is used.


An artificial neural network is a system loosely modeled on the human brain. The field goes by many names, such as connectionism, parallel distributed processing, neurocomputing, natural intelligent systems, machine learning algorithms and artificial neural networks.

It is an attempt to simulate, within specialized hardware or sophisticated software, multiple layers of simple processing elements called neurons. Each neuron is linked to certain of its neighbors with varying coefficients of connectivity that represent the strengths of these connections.

Learning is accomplished by adjusting these strengths to cause the overall network to output appropriate results.
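A minimal sketch of that idea, assuming only NumPy: a single artificial neuron whose connection strengths are repeatedly adjusted until it reproduces the logical OR function. Real networks stack many such neurons in layers, but the weight-update loop is the same principle.

```python
# One artificial neuron learning OR by adjusting its connection strengths.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)          # OR truth table

rng = np.random.default_rng(42)
w = rng.normal(size=2)                           # connection weights
b = 0.0                                          # bias
lr = 0.5                                         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    out = sigmoid(X @ w + b)                     # forward pass
    delta = (out - y) * out * (1 - out)          # error signal
    w -= lr * X.T @ delta                        # adjust weights
    b -= lr * delta.sum()                        # adjust bias

print(np.round(sigmoid(X @ w + b), 2))           # approaches [0, 1, 1, 1]
```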





Artificial Neural Network History

Brief History of Neural Networks - medium.com

What’s in Store for the Future? Neural Network

With all those strengths fueling the future of neural nets, and all those weaknesses complicating things, here is what we can expect:

Integration. The weaknesses of neural nets could easily be compensated if we could integrate them with a complementary technology, like symbolic functions. The hard part would be finding a way to have these systems work together to produce a common result—and engineers are already working on it.

Sheer complexity. Everything has the potential to be scaled up in terms of power and complexity. With technological advancements, we can make CPUs and GPUs cheaper and/or faster, enabling the production of bigger, more efficient algorithms. We can also design neural nets capable of processing more data, or processing data faster, so it may learn to recognize patterns with just 1,000 examples, instead of 10,000. Unfortunately, there may be an upper limit to how advanced we can get in these areas—but we haven’t reached that limit yet, so we’ll likely strive for it in the near future.

New applications. Rather than advancing vertically, in terms of faster processing power and more sheer complexity, neural nets could (and likely will) also expand horizontally, being applied to more diverse applications. Hundreds of industries could feasibly use neural nets to operate more efficiently, target new audiences, develop new products, or improve consumer safety—yet it’s criminally underutilized. Wider acceptance, wider availability, and more creativity from engineers and marketers have the potential to apply neural nets to more applications.

Obsolescence. Technological optimists have enjoyed professing the glorious future of neural nets, but they may not be the dominant form of AI or complex problem solving for much longer. Several years from now, the hard limits and key weaknesses of neural nets may stop them from being pursued. Instead, developers and consumers may gravitate toward some new approach—provided one becomes accessible enough, with enough potential to make it a worthy successor.

THE ANALOGY TO BRAIN

The most basic components of neural networks are modeled after the structure of the brain. Some neural network structures do not closely follow that of the brain, and some do not have a biological counterpart in the brain. However, neural networks have a strong similarity to the biological brain, and therefore a great deal of the terminology is borrowed from neuroscience.


Sources / References:

https://en.wikipedia.org/wiki/Artificial_neural_network

https://www.researchgate.net/publication/319903816_AN_INTRODUCTION_TO_ARTIFICIAL_NEURAL_NETWORK

https://readwrite.com/2019/01/25/everything-you-need-to-know-about-the-future-of-neural-networks/

https://medium.com/analytics-vidhya/brief-history-of-neural-networks-44c2bf72eec

Tuesday, March 3, 2020

Quantum computing

Quantum Computing - Seminar Topic


Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.

There are currently two main approaches to physically implementing a quantum computer: analog and digital. Analog approaches are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation. Digital quantum computers use quantum logic gates to do computation. Both approaches use quantum bits or qubits.



What is quantum computing?

Quantum computers could spur the development of new breakthroughs in science, medications to save lives, machine learning methods to diagnose illnesses sooner, materials to make more efficient devices and structures, financial strategies to live well in retirement, and algorithms to quickly direct resources such as ambulances.

But what exactly is quantum computing, and what does it take to achieve these quantum breakthroughs? Here’s what you need to know.





How Do Quantum Computers Work?

Quantum computers perform calculations based on the probability of an object's state before it is measured - instead of just 1s or 0s - which means they have the potential to process exponentially more data compared to classical computers.

Classical computers carry out logical operations using the definite position of a physical state. These are usually binary, meaning their operations are based on one of two positions. A single state - such as on or off, up or down, 1 or 0 - is called a bit.
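As a hedged illustration of "the probability of a state before it is measured", the sketch below represents a single qubit in an equal superposition as a two-component state vector in NumPy and simulates repeated measurements. This is only a classical simulation for intuition, not how a physical quantum computer is programmed.

```python
# One qubit in superposition: |psi> = alpha|0> + beta|1>.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # equal superposition (Hadamard-like)
state = np.array([alpha, beta])

probs = np.abs(state) ** 2                     # Born rule: measurement probabilities
print(probs)                                   # [0.5 0.5]

# Each measurement collapses the qubit to a definite 0 or 1.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(samples))                    # roughly 500 zeros and 500 ones
```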

Types of quantum computers

Building a functional quantum computer requires holding an object in a superposition state long enough to carry out various processes on it.

Unfortunately, once a superposition meets with materials that are part of a measured system, it loses its in-between state in what's known as decoherence and becomes a boring old classical bit.

Devices need to be able to shield quantum states from decoherence, while still making them easy to read.

Different processes are tackling this challenge from different angles, whether it's to use more robust quantum processes or to find better ways to check for errors.

How we will use quantum computers


  • Weather and climate
  • Personalized medicine
  • Space exploration
  • Fundamental sciences
  • Machine learning
  • Encryption
  • Real-time language translation

More seminar topics related to Quantum computing :

Quantum Cryptography
Quantum Internet
Quantum Machine Learning
Quantum Processing Units
Quantum Supremacy
Quantum Network
Quantum Logic Gate
Quantum neural networks

Sources / References:
https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/
https://en.wikipedia.org/wiki/Quantum_computing
https://www.sciencealert.com/quantum-computers




Home Networking


This report discusses key points and gives an overview of the related home networking literature. After the introduction, five specific technologies (LAN, Phoneline, Powerline, Wireless and IrDA) are reviewed. Services and other issues are also discussed. Finally, there is an overview of the current market players.

A home network or home area network (HAN) is a type of computer network that facilitates communication among devices within the close vicinity of a home. 

Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact.


 These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment.

Physical connectivity and protocols


Home networks can use either wired or wireless technologies to connect endpoints. Wireless is the predominant option in homes due to the ease of installation, lack of unsightly cables, and network performance characteristics sufficient for residential activities.
  • Wireless
  • Wireless LAN
  • Wireless PAN
  • Low-rate wireless PAN
  • Twisted pair cables
  • Fiber optics
  • Telephone wires
  • Coaxial cables
  • Power lines

Endpoint devices and services

Traditionally, data-centric equipment such as computers and media players have been the primary tenants of a home network. However, due to the lowering cost of computing and the ubiquity of smartphone usage, many traditionally non-networked home equipment categories now include new variants capable of control or remote monitoring through an app on a smartphone. 

Newer startups and established home equipment manufacturers alike have begun to offer these products as part of a "Smart" or "Intelligent" or "Connected Home" portfolio. The control and/or monitoring interfaces for these products can be accessed through proprietary smartphone applications specific to that product line.

  • General purpose
  • Entertainment
  • Lighting
  • Home security and access control
  • Cloud services

Network management

  • Embedded devices
  • Apple ecosystem devices
  • Microsoft ecosystem devices

Common issues and concerns

  • Wireless signal loss
  • "Leaky" Wi-Fi
  • Electrical grid noise
  • Administration

New Home Network Technology


New developments in home networks affect more than just home offices and entertainment systems. Some of the most exciting advances are in healthcare and housing.

In healthcare, Wireless Sensor Networks (WSNs) let doctors monitor patients wirelessly. Patients wear wireless sensors that transmit data through specialized channels. These signals contain information about vital signs, body functions, patient behavior and their environments. In the case of an unusual data transmission -- like a sudden spike in blood pressure or a report that an active patient has become suddenly still -- an emergency channel picks up the signal and sends medical services to the patient's home.

Builders are beginning to offer home network options for their customers that range from the primitive -- installing Ethernet cables in the walls -- to the cutting-edge -- managing the ambient temperature from a laptop hundreds of miles from home. In one trial experiment called Laundry Time, Microsoft, Hewlett-Packard, Panasonic, Procter & Gamble and Whirlpool demonstrated the power of interfacing home appliances.
The experiment networked a washing machine and clothes dryer with a TV, PC and cell phone. This unheard-of combination of networked devices let homeowners know when their laundry loads were finished washing or drying by sending alerts to their TV screens, instant messaging systems or cell phones.

 Research and development also continues for systems that perform a wide variety of functions -- data and voice recognition might change the way we enter, exit and secure our homes, while service appliances could prepare our food, control indoor temperatures and keep our homes clean.



Source:
https://en.wikipedia.org/wiki/Home_network
https://computer.howstuffworks.com/home-network.htm