Thursday, March 5, 2020

Edge Computing

The explosive growth of internet of things (IoT) devices, and the increasing computing power of these devices, have resulted in unprecedented volumes of data. And data volumes will continue to grow as 5G networks increase the number of connected mobile devices.

In the past, the promise of cloud and artificial intelligence (AI) was to automate and accelerate innovation by deriving actionable insights from data. But the unprecedented scale and complexity of the data created by connected devices has outpaced network and infrastructure capabilities.

Sending all that device-generated data to a centralized data center or to the cloud causes bandwidth and latency issues. Edge computing offers a more efficient alternative: data is processed and analyzed closer to the point where it is created. Because data does not traverse over a network to a cloud or data center in order to be processed, latency is significantly reduced. Edge computing — and mobile edge computing on 5G networks — enables faster and more comprehensive data analysis, creating the opportunity for deeper insights, faster response times and improved customer experiences.

[Image: Edge computing infrastructure]


Deploying Edge Data Centers

While edge computing deployments can take many forms, they generally fall into one of three categories:

1. Local devices that serve a specific purpose, such as an appliance that runs a building’s security system or a cloud storage gateway that integrates an online storage service with premises-based systems, facilitating data transfers between them.
2. Small, localized data centers (1 to 10 racks) that offer significant processing and storage capabilities.
3. Regional data centers with more than 10 racks that serve relatively large local user populations.

Regardless of size, each of these edge examples is important to the business, so maximizing availability is essential.

It’s critical then, that companies build edge data centers with the same attention to reliability and security as they would for a large, centralized data center. This site is intended to provide the information you need to build secure, reliable, and manageable high-performance edge data centers that can help fuel your organization’s digital transformation.

How IoT is Driving the Need for Edge Computing

The IoT involves collecting data from various sensors and devices and applying algorithms to that data to glean insights that deliver business benefits. Industries ranging from manufacturing, utility distribution and traffic management to retail, medicine and even education are using the technology to improve customer satisfaction, reduce costs, strengthen security and operations, and enrich the end-user experience, to name a few benefits.

A retailer, for example, may use data from IoT applications to serve customers better by anticipating what they may want based on past purchases, offering on-the-spot discounts, and improving its own customer service. In industrial environments, IoT applications can support preventive maintenance programs by detecting when a machine's performance varies from an established baseline, indicating that it needs maintenance.
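
To make the baseline idea concrete, here is a minimal sketch (the sensor values, window size and threshold are made-up assumptions, not from any cited source) that flags readings drifting away from a rolling baseline:

```python
# Hypothetical sketch: flag machine readings that drift from a learned baseline.
# Window size and sigma threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

WINDOW = 50           # number of recent readings used as the baseline
THRESHOLD_SIGMA = 3   # how many standard deviations count as "abnormal"

def monitor(readings):
    """Yield (index, value) for readings that deviate from the rolling baseline."""
    baseline = deque(maxlen=WINDOW)
    for i, value in enumerate(readings):
        if len(baseline) == WINDOW:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) > THRESHOLD_SIGMA * sigma:
                yield i, value   # candidate maintenance alert
        baseline.append(value)

# Example: steady vibration readings with a sudden spike at the end
vibration = [1.0 + 0.01 * (i % 5) for i in range(100)] + [2.5]
for idx, val in monitor(vibration):
    print(f"reading {idx}: {val} deviates from baseline")
```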

The list of potential use cases is virtually endless, but they all have one thing in common: collecting lots of data from many sensors and smart devices and using it to drive business improvements.

Many IoT applications rely on cloud-based resources for compute power, data storage and application intelligence that yields business insights. However, it’s often not optimal to send all the data generated by sensors and devices directly to the cloud, for reasons that generally come down to bandwidth, latency and regulatory requirements.

Real-life examples of edge computing

Oil rigs provide a good example of how edge computing is used in the real world. Because of their remote offshore locations, they rely on the technology to mitigate the long distances to data centres and their poor network connections. It is also costly, inefficient and time-consuming for rigs to send real-time data to a centralised cloud. Having a localised data-processing facility helps a rig run without delay or interruption.

Similarly, autonomous vehicles, which operate with low connectivity, need real-time data analysis to navigate roads. Gateways hosted within the vehicle can aggregate data from other vehicles, traffic signals, GPS devices, proximity sensors, onboard control units and cloud applications, and can process and analyse this information locally.
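
As a rough illustration of that local processing, the sketch below assumes hypothetical sensor feeds and a deliberately simple braking rule; it is not a real automotive API:

```python
# Illustrative sketch only: an in-vehicle gateway merging local sensor feeds
# and making a decision without a round trip to the cloud.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str      # e.g. "proximity", "gps", "traffic_signal" (illustrative names)
    value: float

def decide_brake(readings, min_gap_m=5.0):
    """Return True if any proximity sensor reports an obstacle closer than min_gap_m."""
    gaps = [r.value for r in readings if r.source == "proximity"]
    return bool(gaps) and min(gaps) < min_gap_m

readings = [
    SensorReading("proximity", 3.2),   # metres to the nearest obstacle
    SensorReading("gps", 52.3),
    SensorReading("traffic_signal", 1.0),
]
print("apply brakes:", decide_brake(readings))  # True: obstacle within 5 m
```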

What next for edge computing?

According to Gartner's Digital Business Will Push Infrastructures to the Edge report, the share of enterprise data generated and processed outside a traditional data centre will increase from less than 10% in 2018 to 75% by 2022.

This is hardly surprising given the increasing popularity of the IoT in both business and consumer use. And while we may still be some way off fully autonomous vehicles, those that are already on the road, or will be shortly, still need this type of technology to operate properly.

The analyst house has also predicted, in its Hype Cycle for Emerging Technologies 2019 report, that additional edge technologies, notably AI and analytics, will come to play a key role in this area over the coming five to ten years.

Drawbacks of edge computing

One drawback of edge computing is that it can expand the attack surface. With the addition of more ‘smart’ devices to the mix, such as edge servers and IoT devices with robust built-in computers, there are new opportunities for malicious actors to compromise these devices.

Another drawback of edge computing is that it requires more local hardware. For example, while an IoT camera needs only a simple built-in computer to send its raw video data to a web server, it would require a much more sophisticated computer with more processing power to run its own motion-detection algorithms. The falling cost of hardware, however, is making it cheaper to build smarter devices.
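
For instance, a motion-detection algorithm of the kind mentioned above can be as simple as frame differencing. The sketch below assumes OpenCV (cv2) is installed and uses arbitrary thresholds and camera index; it is only a minimal illustration of the idea:

```python
# Minimal frame-differencing motion detector; assumes OpenCV (cv2) and a camera at index 0.
import cv2

cap = cv2.VideoCapture(0)                  # default camera (assumption)
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera frame available")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)         # pixel-wise change since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:      # "enough" pixels changed (arbitrary cut-off)
        print("motion detected")           # an edge device would raise the event locally
    prev = gray

cap.release()
```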


Sources / References:

https://www.ibm.com/in-en/cloud/what-is-edge-computing
https://www.apc.com/us/en/solutions/business-solutions/edge-computing/what-is-edge-computing.jsp
https://www.itpro.co.uk/cloud/31389/what-is-edge-computing
https://www.cbinsights.com/research/what-is-edge-computing/
https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/

Fog Computing | Fog Networking | Fogging

Fog computing or fog networking, also known as fogging, is an architecture that uses edge devices to carry out a substantial amount of computation, storage, and communication locally, with the results routed over the internet backbone.

Both cloud computing and fog computing provide storage, applications, and data to end-users. However, fog computing is closer to end-users and has wider geographical distribution.

‘Cloud computing’ is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. Cloud computing can be a heavyweight and dense form of computing power.
[Image: Fog computing / fog networking. Image credit: Online Design/TechTarget]


The term 'Fog Computing' was defined by Prof. Jonathan Bar-Magen Numhauser in 2011 as part of his PhD dissertation project proposal. In January 2012 he presented the concept at the Third International Congress of Silenced Writings at the University of Alcalá and published it in an official source.

Also known as edge computing or fogging, fog computing facilitates the operation of compute, storage, and networking services between end devices and cloud computing data centers. While edge computing typically refers to the location where services are instantiated, fog computing implies the distribution of communication, computation, storage resources, and services on or close to devices and systems under the control of end users. Fog computing is a medium-weight, intermediate level of computing power. Rather than a substitute, fog computing often serves as a complement to cloud computing.


What is fog computing?

Fog computing refers to a decentralized computing structure in which resources, including data and applications, are placed in logical locations between the data source and the cloud; it is also known as ‘fogging’ or ‘fog networking.’

The goal is to bring basic analytic services to the network edge, positioning computing resources closer to where they are needed and thereby reducing the distance that data must travel across the network, which improves overall network efficiency and performance. Fog computing can also be deployed for security reasons, as it can segment bandwidth traffic and introduce additional firewalls into a network for higher security.

Fog computing has its origins as an extension of cloud computing, the paradigm in which data, storage and applications reside on a distant server rather than being hosted locally. With the cloud computing model, the client purchases services from a provider, which delivers not only the service but also its maintenance and upgrades, with the added benefit that the services can be accessed from anywhere, which facilitates teamwork.


History of fog computing

The term fog computing is associated with Cisco, which registered the name ‘Cisco Fog Computing.’ The name plays on cloud computing: clouds are up in the sky, while fog sits close to the ground. In 2015, the OpenFog Consortium was created with founding members ARM, Cisco, Dell, Intel, Microsoft and Princeton University, and additional contributing members including GE, Hitachi and Foxconn. IBM introduced the closely allied, and mostly (although in some situations not exactly) synonymous, term ‘edge computing.’


How fog computing works

It is important to note that fog networking complements -- not replaces -- cloud computing; fogging allows for short-term analytics at the edge, and the cloud performs resource-intensive, longer-term analytics. While edge devices and sensors are where data is generated and collected, they sometimes don't have the compute and storage resources to perform advanced analytics and machine-learning tasks. Though cloud servers have the power to do these, they are often too far away to process the data and respond in a timely manner. In addition, having all endpoints connecting to and sending raw data to the cloud over the internet can have privacy, security and legal implications, especially when dealing with sensitive data subject to regulations in different countries. Popular fog computing applications include smart grid, smart city, smart buildings, vehicle networks and software-defined networks.
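
A minimal sketch of that split, assuming a made-up payload format and a placeholder upload function (a real deployment would use HTTPS or MQTT), might look like this:

```python
# Hedged sketch of the "short-term analytics at the edge, aggregates to the cloud" split.
import json, statistics, time

def summarize(window):
    """Short-term analytics performed on the fog/edge node."""
    return {
        "t": int(time.time()),
        "count": len(window),
        "mean": statistics.mean(window),
        "max": max(window),
    }

def send_to_cloud(summary):
    # In a real deployment this would be an HTTPS/MQTT upload; here we just print.
    print("uploading:", json.dumps(summary))

window = []
for reading in [21.0, 21.2, 20.9, 35.4, 21.1, 21.0]:   # e.g. temperature samples
    window.append(reading)
    if len(window) == 3:          # every N readings, ship a compact summary upstream
        send_to_cloud(summarize(window))
        window.clear()
```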

Sources / References:

https://en.wikipedia.org/wiki/Fog_computing
https://internetofthingsagenda.techtarget.com/definition/fog-computing-fogging
https://www.techradar.com/in/news/what-is-fog-computing

Health technology

Health technology is defined by the World Health Organization as the "application of organized knowledge and skills in the form of devices, medicines, vaccines, procedures, and systems developed to solve a health problem and improve quality of lives". This includes pharmaceuticals, devices, procedures, and organizational systems used in the healthcare industry, as well as computer-supported information systems.

In the United States, these technologies involve standardized physical objects, as well as traditional and designed social means and methods to treat or care for patients. During the last five decades, technology development has been remarkable in the healthcare industry.

Healthcare technology, commonly referred to as “healthtech,” refers to the use of technologies developed for the purpose of improving any and all aspects of the healthcare system. From telehealth to robotic-assisted surgery, our guide will walk you through what it is and how it's being used.



What Is Healthcare Technology?

Healthcare technology refers to any IT tools or software designed to boost hospital and administrative productivity, give new insights into medicines and treatments, or improve the overall quality of care provided. Today’s healthcare industry is a $2 trillion behemoth at a crossroads. Currently being weighed down by crushing costs and red tape, the industry is looking for ways to improve in nearly every imaginable area. That’s where healthtech comes in. Tech-infused tools are being integrated into every step of our healthcare experience to counteract two key trouble spots: quality and efficiency.

The way we purchase healthcare is becoming more accessible to a wider group of people through the insurance technology industry, sometimes called "insurtech." Patient waiting times are declining and hospitals are more efficiently staffed thanks to artificial intelligence and predictive analytics. Even surgical procedures are being refined and recovery times reduced thanks to ultra-precise robots that assist in surgeries and make some procedures less invasive.


Sources / References:

https://en.wikipedia.org/wiki/Health_technology_in_the_United_States

Artificial intelligence (AI)

Artificial intelligence (AI) is an emerging component of computer science that tries to make computers more intelligent, and machine learning is one of its most rapidly emerging parts. Machine learning algorithms have long been used to analyze large datasets in the medical field, and today they serve as indispensable tools for data analysis.

Machine learning can be regarded as one of the most rapidly growing fields, sitting at the intersection of computer science and statistics and drawing on AI and data science. One example application is the use of machine learning algorithms to classify EEG datasets into two groups, namely an Alzheimer’s disease group and a healthy group. With progress in advanced computing and AI, such medical analysis and classification of data is becoming simpler and easier.
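
As an illustration of such a classification workflow, here is a hedged sketch using scikit-learn on synthetic feature vectors (not real EEG data); the feature count, labels and model choice are arbitrary assumptions made only to show the steps:

```python
# Hedged sketch: binary classification of EEG-like feature vectors with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# 200 synthetic "recordings", 16 features each (e.g. band-power values);
# labels: 0 = healthy group, 1 = Alzheimer's disease group (purely illustrative)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)     # train the classifier
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```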


What is Artificial Intelligence?

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.


HOW IS AI USED?

Artificial intelligence generally falls under two broad categories:

Narrow AI: Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they operate under far more constraints and limitations than even the most basic human intelligence.

Artificial General Intelligence (AGI): AGI, sometimes referred to as "Strong AI," is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.

Artificial Intelligence Examples

  • Smart assistants (like Siri and Alexa)
  • Disease mapping and prediction tools
  • Manufacturing and drone robots
  • Optimized, personalized healthcare treatment recommendations
  • Conversational bots for marketing and customer service
  • Robo-advisors for stock trading
  • Spam filters on email
  • Social media monitoring tools for dangerous content or false news
  • Song or TV show recommendations from Spotify and Netflix

Narrow Artificial Intelligence

Narrow AI is all around us and is easily the most successful realization of artificial intelligence to date. With its focus on performing specific tasks, Narrow AI has experienced numerous breakthroughs in the last decade that have had "significant societal benefits and have contributed to the economic vitality of the nation," according to "Preparing for the Future of Artificial Intelligence," a 2016 report released by the Obama Administration.

A few examples of Narrow AI include:

  • Google search
  • Image recognition software
  • Siri, Alexa and other personal assistants
  • Self-driving cars
  • IBM's Watson 
  • Machine Learning & Deep Learning
Sources / References:

https://en.wikipedia.org/wiki/Artificial_intelligence
https://www.sciencedirect.com/science/article/pii/B9780128153925000058
https://builtin.com/artificial-intelligence

Wednesday, March 4, 2020

Artificial Neural Network


Artificial Neural Networks (ANN) are gaining prominence in applications such as pattern recognition, weather prediction, handwriting recognition, face recognition, autopilot and robotics. In electrical engineering, ANN is being extensively researched for load forecasting, processing substation alarms and predicting weather for solar radiation and wind farms. With the growing focus on smart grids, ANN has an important role to play. ANN belongs to the family of artificial intelligence techniques, along with fuzzy logic, expert systems and support vector machines. This section gives an introduction to ANN and the way it is used.


An artificial neural network is a system loosely modeled on the human brain. The field goes by many names, such as connectionism, parallel distributed processing, neurocomputing, natural intelligent systems, machine learning algorithms and artificial neural networks.

It is an attempt to simulate, within specialized hardware or sophisticated software, multiple layers of simple processing elements called neurons. Each neuron is linked to some of its neighbors with varying coefficients of connectivity that represent the strengths of these connections.

Learning is accomplished by adjusting these strengths to cause the overall network to output appropriate results.
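
A minimal sketch of that idea follows: a single sigmoid neuron trained by gradient descent on the OR function. The learning rate, epoch count and random seed are arbitrary choices for the sketch, not prescribed values.

```python
# Minimal illustration of "learning by adjusting connection strengths":
# one sigmoid neuron trained with gradient descent on the OR function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 1, 1, 1])                        # OR targets

rng = np.random.default_rng(1)
w = rng.normal(size=2)    # connection strengths (weights)
b = 0.0                   # bias
lr = 1.0                  # learning rate (arbitrary)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    out = sigmoid(X @ w + b)                 # forward pass
    grad = (out - y) * out * (1 - out)       # gradient of mean-squared error
    w -= lr * (X.T @ grad) / len(X)          # adjust the connection strengths
    b -= lr * np.mean(grad)                  # adjust the bias

print(np.round(sigmoid(X @ w + b)))          # should print approximately [0. 1. 1. 1.]
```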





Artificial Neural Network History

For a brief history of neural networks, see the medium.com article listed in the references below.

What’s in Store for the Future of Neural Networks?

With all those strengths fueling the future of neural nets, and all those weaknesses complicating things, a few likely directions stand out:

Integration. The weaknesses of neural nets could easily be compensated for if we could integrate them with a complementary technology, like symbolic functions. The hard part would be finding a way to have these systems work together to produce a common result—and engineers are already working on it.

Sheer complexity. Everything has the potential to be scaled up in terms of power and complexity. With technological advancements, we can make CPUs and GPUs cheaper and/or faster, enabling the production of bigger, more efficient algorithms. We can also design neural nets capable of processing more data, or processing data faster, so it may learn to recognize patterns with just 1,000 examples, instead of 10,000. Unfortunately, there may be an upper limit to how advanced we can get in these areas—but we haven’t reached that limit yet, so we’ll likely strive for it in the near future.

New applications. Rather than advancing vertically, in terms of faster processing power and more sheer complexity, neural nets could (and likely will) also expand horizontally, being applied to more diverse applications. Hundreds of industries could feasibly use neural nets to operate more efficiently, target new audiences, develop new products, or improve consumer safety—yet the technology is criminally underutilized. Wider acceptance, wider availability, and more creativity from engineers and marketers have the potential to apply neural nets to more applications.

Obsolescence. Technological optimists have enjoyed professing the glorious future of neural nets, but they may not be the dominant form of AI or complex problem solving for much longer. Several years from now, the hard limits and key weaknesses of neural nets may stop them from being pursued. Instead, developers and consumers may gravitate toward some new approach—provided one becomes accessible enough, with enough potential to make it a worthy successor.

THE ANALOGY TO THE BRAIN

The most basic components of neural networks are modeled after the structure of the brain. Some neural network structures do not closely follow that of the brain, and some have no biological counterpart in the brain at all. Nevertheless, neural networks bear a strong similarity to the biological brain, and a great deal of the terminology is therefore borrowed from neuroscience.


Sources / References:

https://en.wikipedia.org/wiki/Artificial_neural_network

https://www.researchgate.net/publication/319903816_AN_INTRODUCTION_TO_ARTIFICIAL_NEURAL_NETWORK

https://readwrite.com/2019/01/25/everything-you-need-to-know-about-the-future-of-neural-networks/

https://medium.com/analytics-vidhya/brief-history-of-neural-networks-44c2bf72eec

Tuesday, March 3, 2020

Quantum computing



Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is a device that performs such computation; quantum computers can be studied theoretically or implemented physically.

There are currently two main approaches to physically implementing a quantum computer: analog and digital. Analog approaches are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation. Digital quantum computers use quantum logic gates to do computation. Both approaches use quantum bits or qubits.



What is quantum computing?

Quantum computers could spur the development of new breakthroughs in science, medications to save lives, machine learning methods to diagnose illnesses sooner, materials to make more efficient devices and structures, financial strategies to live well in retirement, and algorithms to quickly direct resources such as ambulances.

But what exactly is quantum computing, and what does it take to achieve these quantum breakthroughs? Here’s what you need to know.





How Do Quantum Computers Work?

Quantum computers perform calculations based on the probability of an object's state before it is measured - instead of just 1s or 0s - which means they have the potential to process exponentially more data compared to classical computers.

Classical computers carry out logical operations using the definite position of a physical state. These are usually binary, meaning their operations are based on one of two positions. A single state - such as on or off, up or down, 1 or 0 - is called a bit.
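
To illustrate the difference without any quantum hardware or SDK, the sketch below treats a qubit as a two-component state vector and a gate as a unitary matrix. This is only the standard textbook model written in plain linear algebra, not a physical implementation:

```python
# Library-free sketch of the bit vs. qubit distinction:
# a qubit state is a 2-vector, a gate is a unitary matrix.
import numpy as np

zero = np.array([1.0, 0.0])                    # the |0> state (like a classical 0 bit)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

psi = H @ zero                                  # put the qubit into superposition
probabilities = np.abs(psi) ** 2                # Born rule: measurement probabilities
print(probabilities)                            # ~[0.5 0.5]: equally likely to read 0 or 1
```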

Types of quantum computers

Building a functional quantum computer requires holding an object in a superposition state long enough to carry out various processes on it.

Unfortunately, once a superposition meets with materials that are part of a measured system, it loses its in-between state in what's known as decoherence and becomes a boring old classical bit.

Devices need to be able to shield quantum states from decoherence, while still making them easy to read.

Different approaches are tackling this challenge from different angles, whether by using more robust quantum processes or by finding better ways to check for errors.

How we will use quantum computers


  • Weather and climate
  • Personalized medicine
  • Space exploration
  • Fundamental sciences
  • Machine learning
  • Encryption
  • Real-time language translation

More seminar topics related to Quantum computing :

Quantum Cryptography
Quantum Internet
Quantum Machine Learning
Quantum Processing Units
Quantum Supremacy
Quantum Network
Quantum Logic Gate
Quantum neural networks

Sources / References:
https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/
https://en.wikipedia.org/wiki/Quantum_computing
https://www.sciencealert.com/quantum-computers




Home Networking


This section discusses key points and overviews from the home networking literature. After the introduction, five specific technologies (LAN, phoneline, powerline, wireless and IrDA) are reviewed. Services and other issues are also discussed, followed by an overview of the current market players.

A home network or home area network (HAN) is a type of computer network that facilitates communication among devices within the close vicinity of a home. 

Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact.


 These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment.

Physical connectivity and protocols


Home networks can use either wired or wireless technologies to connect endpoints. Wireless is the predominant option in homes due to the ease of installation, lack of unsightly cables, and network performance characteristics sufficient for residential activities.
  • Wireless
  • Wireless LAN
  • Wireless PAN
  • Low-rate wireless PAN
  • Twisted pair cables
  • Fiber optics
  • Telephone wires
  • Coaxial cables
  • Power lines

Endpoint devices and services

Traditionally, data-centric equipment such as computers and media players have been the primary tenants of a home network. However, due to the lowering cost of computing and the ubiquity of smartphone usage, many traditionally non-networked home equipment categories now include new variants capable of control or remote monitoring through an app on a smartphone. 

Newer startups and established home equipment manufacturers alike have begun to offer these products as part of a "Smart" or "Intelligent" or "Connected Home" portfolio. The control and/or monitoring interfaces for these products can be accessed through proprietary smartphone applications specific to that product line.

  • General purpose
  • Entertainment
  • Lighting
  • Home security and access control
  • Cloud services
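
As a toy analogy for the control-and-monitoring pattern described in the paragraphs above, the sketch below implements an in-process publish/subscribe loop. Real products typically rely on MQTT or a vendor cloud API, so the topic names and flow here are purely illustrative assumptions:

```python
# Toy in-process publish/subscribe sketch of a "connected home" pattern:
# a device publishes state changes on a topic, a phone-app stand-in subscribes.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(topic, callback):
    """Register a callback to be invoked whenever the topic is published."""
    subscribers[topic].append(callback)

def publish(topic, payload):
    """Deliver a payload to every subscriber of the topic."""
    for cb in subscribers[topic]:
        cb(payload)

# "App" side: watch the living-room light (hypothetical topic name)
subscribe("home/livingroom/light", lambda state: print("app notified: light is", state))

# "Device" side: the smart bulb reports its new state
publish("home/livingroom/light", "on")
```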

Network management

  • Embedded devices
  • Apple ecosystem devices
  • Microsoft ecosystem devices

Common issues and concerns

  • Wireless signal loss
  • "Leaky" Wi-Fi
  • Electrical grid noise
  • Administration

New Home Network Technology


New developments in home networks affect more than just home offices and entertainment systems. Some of the most exciting advances are in healthcare and housing.

In healthcare, Wireless Sensor Networks (WSNs) let doctors monitor patients wirelessly. Patients wear wireless sensors that transmit data through specialized channels. These signals contain information about vital signs, body functions, patient behavior and their environments. In the case of an unusual data transmission -- like a sudden spike in blood pressure or a report that an active patient has become suddenly still -- an emergency channel picks up the signal and sends medical services to the patient's home.
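
A gateway rule of the kind described above could be as simple as the threshold check sketched below; the ranges are rough illustrative values chosen for the example, not clinical guidance, and the field names are assumptions:

```python
# Illustrative vital-sign threshold check a home-monitoring gateway might run.
# The limits are rough example ranges, not medical advice.
NORMAL_RANGES = {
    "heart_rate":  (50, 110),   # beats per minute
    "systolic_bp": (90, 160),   # mmHg
    "spo2":        (92, 100),   # percent
}

def check_vitals(sample):
    """Return the vitals that fall outside their expected range."""
    alerts = []
    for name, value in sample.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append((name, value))
    return alerts

sample = {"heart_rate": 64, "systolic_bp": 185, "spo2": 97}
print(check_vitals(sample))   # -> [('systolic_bp', 185)]: escalate on the emergency channel
```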

Builders are beginning to offer home network options for their customers that range from the primitive -- installing Ethernet cables in the walls -- to the cutting-edge -- managing the ambient temperature from a laptop hundreds of miles from home. In one trial experiment called Laundry Time, Microsoft, Hewlett Packard, Panasonic, Procter & Gamble and Whirlpool demonstrated the power of interfacing home appliances. 
The experiment networked a washing machine and clothes dryer with a TV, PC and cell phone. This unheard-of combination of networked devices let homeowners know when their laundry loads were finished washing or drying by sending alerts to their TV screens, instant messaging systems or cell phones.

 Research and development also continues for systems that perform a wide variety of functions -- data and voice recognition might change the way we enter, exit and secure our homes, while service appliances could prepare our food, control indoor temperatures and keep our homes clean.



Sources / References:
https://en.wikipedia.org/wiki/Home_network
https://computer.howstuffworks.com/home-network.htm