Monday, December 21, 2015

Tango technology


What is Project Tango?

Project Tango is a Google technology platform that uses computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. Project Tango technology gives a mobile device the ability to navigate the physical world similar to how we do as humans. Project Tango brings a new kind of spatial perception to the Android device platform by adding advanced computer vision, image processing, and special vision sensors.

Project Tango is different from other emerging 3D-sensing computer vision products, such as Microsoft HoloLens, in that it's designed to run on a standalone mobile device and is chiefly concerned with determining the device's position and orientation within the environment.

The software works by integrating three types of functionality:

    Motion-tracking: using visual features of the environment, in combination with accelerometer and gyroscope data, to closely track the device's movements in space

    Area learning: storing environment data in a map that can be re-used later, shared with other Project Tango devices, and enhanced with metadata such as notes, instructions, or points of interest

    Depth perception: detecting distances, sizes, and surfaces in the environment

Together, these generate data about the device in "six degrees of freedom" (3 axes of orientation plus 3 axes of motion) and detailed three-dimensional information about the environment.

Applications on mobile devices use Project Tango's C and Java APIs to access this data in real time. An API is also provided for integrating Project Tango with the Unity game engine; this enables the rapid conversion or creation of games that allow the user to interact with and navigate the game space by moving and rotating a Project Tango device in real space. These APIs are documented on the Google developer website.
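
To make the "six degrees of freedom" pose concrete, here is a minimal sketch in plain Python (not the actual Tango API): a pose is a 3-axis translation plus an orientation quaternion, and applying it maps a point seen by the device into world coordinates. All numbers in the example are invented.

    # Minimal sketch (not the Project Tango API) of a six-degrees-of-freedom pose:
    # three axes of translation plus an orientation stored as a unit quaternion.
    from dataclasses import dataclass
    import math

    @dataclass
    class Pose:
        x: float   # translation along world x, metres
        y: float
        z: float
        qx: float  # orientation as a unit quaternion (x, y, z, w)
        qy: float
        qz: float
        qw: float

    def rotate(pose, vx, vy, vz):
        """Rotate a vector from the device frame into the world frame."""
        qx, qy, qz, qw = pose.qx, pose.qy, pose.qz, pose.qw
        tx = 2 * (qy * vz - qz * vy)               # t = 2 * (q_vec x v)
        ty = 2 * (qz * vx - qx * vz)
        tz = 2 * (qx * vy - qy * vx)
        rx = vx + qw * tx + (qy * tz - qz * ty)    # v' = v + w*t + q_vec x t
        ry = vy + qw * ty + (qz * tx - qx * tz)
        rz = vz + qw * tz + (qx * ty - qy * tx)
        return rx, ry, rz

    # device has moved 2 m along world x and yawed 90 degrees about the vertical axis
    pose = Pose(2.0, 0.0, 0.0, 0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
    dx, dy, dz = rotate(pose, 1.0, 0.0, 0.0)       # a point 1 m ahead along the device's own x axis
    print(round(pose.x + dx, 3), round(pose.y + dy, 3), round(pose.z + dz, 3))   # 2.0 1.0 0.0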


Referred Links:
https://www.google.com/atap/project-tango/about-project-tango/
https://en.wikipedia.org/wiki/Project_Tango

Friday, December 4, 2015

Zettabyte File System (ZFS)


ZFS is a 128-bit file system developed by Sun Microsystems in 2005 for OpenSolaris. It is a Solaris file system that uses storage pools to manage physical storage. The ZFS pooled storage model eliminates the concept of volumes and the associated problems of partitions, provisioning, and stranded storage by enabling thousands of file systems to draw from a common storage pool, each using only as much space as it actually needs. ZFS also uses RAID-Z, a data replication model that is similar to RAID-5 but uses variable stripe width to eliminate the RAID-5 write hole (stripe corruption caused by a loss of power between the data and parity updates).

ZFS runs on Solaris, FreeBSD and Linux variants, and includes built-in data services and features such as replication, deduplication, compression, snapshots and data protection. The Sun development team began work on ZFS in 2001 and integrated it into the Unix-based Solaris and open source OpenSolaris operating systems in 2005.

After acquiring Sun in 2010, Oracle Corp. discontinued work on open source ZFS. The company trademarked the name "ZFS" and turned ZFS into its proprietary root file system for Oracle Solaris, Oracle's ZFS Storage Appliances, mainframe storage (VSM) and other Oracle technologies. Oracle continues to develop and add features to its proprietary ZFS. The open source version of ZFS is now known as OpenZFS.

Features  


  • Endless scalability
Well, it’s not technically endless, but it’s a 128-bit file system that’s capable of managing zettabytes (one billion terabytes) of data.  No matter how much hard drive space you have, ZFS will be suitable for managing it.

  • Maximum integrity
Everything you do inside of ZFS uses a checksum to ensure file integrity.  You can rest assured that your files and their redundant copies will not encounter silent data corruption.  Also, while ZFS is busy quietly checking your data for integrity, it will do automatic repairs anytime it can.  (A toy sketch of this end-to-end checksumming idea follows the feature list below.)

  • Drive pooling
The creators of ZFS want you to think of it as being similar to the way your computer uses RAM.  When you need more memory in your computer, you put in another stick and you’re done.  Similarly with ZFS, when you need more hard drive space, you put in another hard drive and you’re done.  No need to spend time partitioning, formatting, initializing, or doing anything else to your disks – when you need a bigger storage “pool,” just add disks.

  • RAID
ZFS is capable of many different RAID levels, all while delivering performance that’s comparable to that of hardware RAID controllers.  This allows you to save money, make setup easier, and have access to superior RAID levels that ZFS has improved upon.
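
As promised under "Maximum integrity" above, here is a toy illustration of the end-to-end checksumming idea in plain Python (this is not ZFS code, and the block contents are invented): every block is stored with its SHA-256 digest, every read re-verifies the digest, and a corrupt copy is healed from its mirror.

    # Toy sketch of checksummed, self-healing mirrored storage (not ZFS itself).
    import hashlib

    def checksum(data):
        return hashlib.sha256(data).hexdigest()

    class MirroredStore:
        def __init__(self):
            self.copies = [{}, {}]          # two mirrored "disks": block_id -> (data, digest)

        def write(self, block_id, data):
            record = (data, checksum(data))
            for disk in self.copies:        # every block is written to both mirrors
                disk[block_id] = record

        def read(self, block_id):
            good = None
            for disk in self.copies:        # find a copy whose data still matches its digest
                data, digest = disk[block_id]
                if checksum(data) == digest:
                    good = (data, digest)
                    break
            if good is None:
                raise IOError("unrecoverable corruption in block %r" % block_id)
            for disk in self.copies:        # self-heal: rewrite any copy that no longer matches
                if disk[block_id] != good:
                    disk[block_id] = good
            return good[0]

    store = MirroredStore()
    store.write("b1", b"important data")
    store.copies[0]["b1"] = (b"imp0rtant data", store.copies[0]["b1"][1])   # simulate silent corruption
    print(store.read("b1"))                 # b'important data' - and copy 0 has been repaired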



References:-

http://www.webopedia.com/TERM/Z/ZFS.html
http://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/

Thursday, October 1, 2015

List Of Seminar Topics For Computer Science Page - 8



Beowulf


Beowulf is an approach to building a supercomputer as a cluster of commodity off-the-shelf personal computers, interconnected with a local area network technology like Ethernet, and running programs written for parallel processing. The Beowulf idea is said to enable the average university computer science department or small research company to build its own small supercomputer that can operate in the gigaflops range (billions of floating-point operations per second). Beowulf clusters can be built from typical personal computers and are easy to expand for increased performance.
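
Beowulf programs are normally written against message-passing libraries such as MPI and run across many nodes; as a hedged, single-machine stand-in, the sketch below uses Python's multiprocessing module to show the underlying idea of splitting one job into chunks and computing them in parallel.

    # Single-machine stand-in for the Beowulf idea: divide a job into chunks and
    # farm the chunks out to parallel workers (a real cluster would use MPI across nodes).
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))   # same answer as the serial loop
        print(total)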

BitTorrent


BitTorrent (often abbreviated to 'BT') is a protocol that allows you to download files quickly and efficiently. It is a peer-to-peer protocol, which means you download from and upload to other people downloading the same file. BitTorrent is often used for distribution of large files or popular content, as it is a cheap, fast, efficient way to distribute files to users like you. In tests using a server and several clients on a network, BitTorrent has been clocked alongside two traditional protocols to determine which takes the least amount of time. From its inauspicious start in 2001, BitTorrent has grown into one of the major forces on the Internet. BitTorrent prevents tampered or broken files from being shared.
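
The tamper-resistance mentioned above comes from per-piece hashing: the torrent metadata carries a digest for every fixed-size piece of the file, and each piece received from a peer is checked against it before being accepted (a bad piece is simply re-requested). The sketch below illustrates that check in plain Python; real BitTorrent uses SHA-1 per piece, and the piece size and data here are made up.

    # Sketch of BitTorrent-style piece verification.
    import hashlib

    PIECE_SIZE = 256 * 1024                               # 256 KiB pieces (an assumed size)

    def piece_hashes(data):
        """Digests the .torrent metadata would carry, one per piece."""
        return [hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
                for i in range(0, len(data), PIECE_SIZE)]

    def verify_piece(index, piece, hashes):
        return hashlib.sha1(piece).digest() == hashes[index]

    original = bytes(range(256)) * 4096                   # stand-in for the file being shared
    hashes = piece_hashes(original)
    tampered = bytearray(original[:PIECE_SIZE]); tampered[10] ^= 0xFF   # a peer sent a corrupted byte
    print(verify_piece(0, original[:PIECE_SIZE], hashes))  # True  - clean piece accepted
    print(verify_piece(0, bytes(tampered), hashes))        # False - tampered piece rejected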

Cloud Computing


Cloud computing is the phrase used to describe different scenarios in which computing resource is delivered as a service over a network connection (usually, this is the internet). Cloud computing is therefore a type of computing that relies on sharing a pool of physical and/or virtual resources, rather than deploying local or personal hardware and software. It is somewhat synonymous with the term ‘utility computing’ as users are able to tap into a supply of computing resource rather than manage the equipment needed to generate it themselves; much in the same way as a consumer tapping into the national electricity supply, instead of running their own generator.

Hypervisor


In virtualization technology, a hypervisor is a software program that manages multiple operating systems (or multiple instances of the same operating system) on a single computer system. The hypervisor manages the system's processor, memory, and other resources to allocate what each operating system requires. The first hypervisors were introduced in the 1960s to allow for different operating systems on a single mainframe computer. However, their current popularity is largely due to Linux and Unix. Around 2005, Linux and Unix systems started using virtualization technology to expand hardware capabilities, control costs, and gain the improved reliability and security that hypervisors provide.

OpenFlow


OpenFlow is a protocol that allows a server to tell network switches where to send packets. In a conventional network, each switch has proprietary software that tells it what to do. With OpenFlow, the packet-moving decisions are centralized, so that the network can be programmed independently of the individual switches and data center gear. It is used for applications such as virtual machine mobility, high-security networks and next-generation IP-based mobile networks. Several established companies including IBM, Google, and HP have either fully utilized, or announced their intention to support, the OpenFlow standard.
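
The centralization can be pictured as a match-action flow table that a controller fills in and a switch consults for every packet. The toy model below (plain Python, not the actual OpenFlow wire protocol; addresses and port names are invented) shows the two halves: rules installed by the controller, and a table miss punted back to the controller.

    # Toy model of the OpenFlow idea: controller-installed match->action rules.
    class Switch:
        def __init__(self):
            self.flow_table = []                 # (match_dict, action), in installation order

        def install(self, match, action):        # called by the controller
            self.flow_table.append((match, action))

        def handle(self, packet):
            for match, action in self.flow_table:
                if all(packet.get(k) == v for k, v in match.items()):
                    return action                # forward locally according to the rule
            return "send-to-controller"          # table miss: ask the controller what to do

    sw = Switch()
    sw.install({"dst_ip": "10.0.0.2"}, "output:port2")
    sw.install({"dst_ip": "10.0.0.3"}, "output:port3")
    print(sw.handle({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"}))   # output:port2
    print(sw.handle({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9"}))   # send-to-controller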

Surface Computing


Surface computing is the term for the use of a specialized computer GUI in which traditional GUI elements are replaced by intuitive, everyday objects. Instead of a keyboard and mouse, the user interacts directly with a touch-sensitive screen, making it a natural user interface. The first surface computer was created by Microsoft with its Surface product. The term "surface" describes how it's used: there is no keyboard or mouse. All interactions with the computer are done via touching the surface of the computer's screen with hands or brushes, or via wireless interaction with devices such as smartphones, digital cameras or Microsoft's Zune music player.

Voice Morphing


Voice morphing (also known as voice transformation and voice conversion) is the software-generated alteration of a person's natural voice. The purpose may be to add audio effects to the voice, to obscure the identity of the person or to impersonate another individual. There are basically three inter-dependent issues that must be solved before building a voice morphing system. Firstly, it is important to develop a mathematical model to represent the speech signal so that the synthetic speech can be regenerated and prosody can be manipulated without artifacts. Secondly, the various acoustic cues which enable humans to identify speakers must be identified and extracted. Thirdly, the type of conversion function and the method of training and applying the conversion function must be decided.

FogScreen


The FogScreen is a new invention which makes objects seem to appear and move in thin air! The FogScreen is a suspensible device that creates a thin, smooth fog surface almost instantly when it is switched on. It can be used for image projection just like a conventional screen. FogScreen is, however, a screen you can walk through! The fog, made of ordinary water with no chemicals whatsoever, dissolves in seconds by itself, leaving no trace behind when you switch it off. The viewer can walk through the screen – walk directly into the picture! People and things can be brought into view through the screen. There are numerous other ways to use the FogScreen.

Microsoft Silverlight


Microsoft Silverlight is a cross-browser, cross-platform implementation of the .NET Framework for building and delivering the next generation of media experiences and rich internet applications (RIA) for the Web. Silverlight uses the Extensible Application Markup Language (XAML) to ease UI development (controls, animations, graphics, layout, and so on) while using managed code or dynamic languages for application logic. Silverlight is a free, client-side browser plug-in of approximately 4 MB that installs once in under ten seconds and is available for all major browsers. Silverlight supports the display of high-definition video files and sending them over the Net.

Blade server


A blade server is a compact, self-contained server that consists of core processing components that fit into an enclosure with other blade servers. A single blade may consist of hot-plug hard drives, memory, network cards, input/output cards and integrated lights-out remote management. The modular design of the blade server helps to optimize server performance and reduce energy costs. Each blade typically comes with one or two local ATA or SCSI drives. For additional storage, blade servers can connect to a storage pool facilitated by network-attached storage (NAS), Fibre Channel, or an iSCSI storage-area network (SAN).

Wednesday, September 30, 2015

List Of Seminar Topics For Computer Science Page - 7



Synthetic Aperture Radar  (SAR)


A synthetic aperture radar (SAR) is a coherent, mostly airborne or spaceborne, side-looking radar system which utilizes the flight path of the platform to electronically simulate an extremely large antenna or aperture, and thereby generates high-resolution remote sensing imagery. Over time, individual transmit/receive cycles (PRTs) are completed, with the data from each cycle being stored electronically. The signal processing uses the magnitude and phase of the received signals over successive pulses from elements of a synthetic aperture. After a given number of cycles, the stored data is recombined (taking into account the Doppler effects inherent in the different transmitter-to-target geometry in each succeeding cycle) to create a high-resolution image of the terrain being overflown.
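
The recombination step can be illustrated with a deliberately simplified, numpy-only sketch: for a single point target, the processor re-applies the expected phase history for each candidate position along the track and sums the stored returns coherently, so the energy piles up only where the phase histories line up. The wavelength, geometry and target position below are invented, and real SAR focusing involves much more (range compression, motion compensation, and so on).

    # Toy azimuth-focusing sketch for one point target (not a full SAR processor).
    import numpy as np

    wavelength = 0.03                        # 3 cm carrier wavelength (assumed)
    R0 = 1000.0                              # closest-approach range to the target, metres
    track = np.linspace(-100, 100, 401)      # platform positions along the flight path
    target_x = 12.0                          # true cross-range position of the point target

    def phase_history(x):
        """Phase of the return from a scatterer at cross-range x, one sample per pulse."""
        ranges = np.sqrt(R0 ** 2 + (track - x) ** 2)
        return np.exp(-1j * 4 * np.pi * ranges / wavelength)

    echoes = phase_history(target_x)         # the stored returns from successive pulses

    candidates = np.linspace(-50, 50, 201)
    image = [abs(np.sum(echoes * np.conj(phase_history(x)))) for x in candidates]
    print("focused peak at cross-range ~", candidates[int(np.argmax(image))], "m")   # ~12 m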

Scatternet


A scatternet is a type of network that is formed between two or more Bluetooth-enabled devices, such as smartphones and newer home appliances. A scatternet is made up of at least two piconets. Bluetooth devices are peer units that act as slaves or masters. Scatternets are formed when a device in a piconet, whether a master or a slave, decides to participate as a slave to the master of another piconet. This device then becomes the bridge between the two piconets, connecting both networks. In order for a scatternet to form, one Bluetooth unit must submit as a slave to another piconet to become a bridge for both networks. If the master of a piconet is the bridge to another piconet, it functions as a slave in the other piconet, even though it is a master of its own piconet. The device participating in both piconets can relay data between members of both networks.

Wine  (Windows Emulator)


Wine makes it possible to run Windows programs alongside any Unix-like operating system, particularly Linux. At its heart, Wine is an implementation of the Windows Application Programming Interface (API) library, acting as a bridge between the Windows program and Linux. Think of Wine as a compatibility layer: when a Windows program tries to perform a function that Linux doesn't normally understand, Wine translates that program's instruction into one supported by the system. Wine is primarily developed for Linux, but the Mac OS X, FreeBSD, and Solaris ports are currently (as of January 2009) well maintained. Wine is also available for NetBSD through pkgsrc.

Computer Forensics  (Cyber Forensics)


Computer forensics is the application of investigation and analysis techniques to gather and preserve evidence from a particular computing device in a way that is suitable for presentation in a court of law. The goal of computer forensics is to perform a structured investigation while maintaining a documented chain of evidence to find out exactly what happened on a computing device and who was responsible for it. Adding the ability to practice sound computer forensics will help you ensure the overall integrity and survivability of your network infrastructure.
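
One small, concrete piece of the "documented chain of evidence" is hashing an acquired image at seizure time and re-verifying the hash before analysis, so any alteration of the evidence in between becomes detectable. The sketch below shows that habit in plain Python; the file name and examiner identifier are placeholders.

    # Sketch of evidence hashing for a chain of custody (file name and examiner are placeholders).
    import datetime
    import hashlib

    def sha256_file(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_acquisition(image_path, examiner):
        """Capture the digest and context at the moment the evidence image is acquired."""
        return {"file": image_path,
                "sha256": sha256_file(image_path),
                "examiner": examiner,
                "acquired_at": datetime.datetime.now(datetime.timezone.utc).isoformat()}

    def verify(image_path, record):
        """Before analysis (or in court), confirm the image still matches the acquisition digest."""
        return sha256_file(image_path) == record["sha256"]

    # usage sketch:
    #   record = record_acquisition("disk.img", "examiner-01")
    #   ... later ...
    #   assert verify("disk.img", record), "evidence no longer matches the acquisition hash"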

Cyborg


Cyborg, a compound word derived from cybernetics and organism, is a term coined by Manfred Clynes in 1960 to describe the need for mankind to artificially enhance biological functions in order to survive in the hostile environment of Space. Originally, a cyborg referred to a human being with bodily functions aided or controlled by technological devices, such as an oxygen tank, artificial heart valve or insulin pump. Over the years, the term has acquired a more general meaning, describing the dependence of human beings on technology. In this sense, cyborg can be used to characterize anyone who relies on a computer to complete their daily work.

Transactional memory


Transactional memory is a technology for synchronizing concurrent threads. It simplifies parallel programming by grouping instructions into atomic transactions. Concurrent threads run in parallel until they start to modify the same memory region; for example, operations adding nodes to a red/black tree can proceed in parallel in several threads. Transactional memory allows programmers to define customized read-modify-write operations that apply to multiple, independently chosen words of memory. It can be implemented by straightforward extensions to a multiprocessor cache-coherence protocol.
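
To make the transaction idea concrete, here is a toy software-transactional-memory sketch in Python (real hardware TM and production STM libraries are far more sophisticated): each transaction buffers its writes, records the version of every cell it reads, and commits only if none of those cells changed in the meantime; otherwise it retries.

    # Toy software-transactional-memory sketch: optimistic reads, buffered writes, validate-then-commit.
    import threading

    class TVar:                                   # one transactional memory cell
        def __init__(self, value):
            self.value, self.version = value, 0

    _commit_lock = threading.Lock()

    def atomically(fn):
        while True:                               # retry until the transaction commits cleanly
            reads, writes = {}, {}
            def read(tv):
                if tv in writes:
                    return writes[tv]
                reads.setdefault(tv, tv.version)  # remember the version we depended on
                return tv.value
            def write(tv, value):
                writes[tv] = value                # buffered until commit
            result = fn(read, write)
            with _commit_lock:
                if all(tv.version == ver for tv, ver in reads.items()):
                    for tv, value in writes.items():
                        tv.value, tv.version = value, tv.version + 1
                    return result
            # something we read was committed by another thread: run the transaction again

    a, b = TVar(100), TVar(0)

    def transfer(amount):
        def txn(read, write):
            write(a, read(a) - amount)            # both updates become visible together or not at all
            write(b, read(b) + amount)
        atomically(txn)

    threads = [threading.Thread(target=transfer, args=(1,)) for _ in range(100)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(a.value, b.value)                       # 0 100 - the combined balance is preserved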

Internet Protocol Television  (IPTV)


Internet protocol television, or IPTV, uses a two-way digital broadcast signal that is sent through a switched telephone or cable network by way of a broadband connection, along with a set-top box programmed with software that can handle viewer requests to access media sources. A television is connected to the set-top box, which handles the task of decoding the IP video and converts it into standard television signals. IPTV primarily uses multicasting with Internet Group Management Protocol (IGMP) version 2 for live television broadcasts and the Real Time Streaming Protocol for on-demand programs. Compatible video compression standards include H.264, Windows Media Video 9 and VC-1, DivX, XviD, Ogg Theora and MPEG-2 and MPEG-4.
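
On the receiving side, the multicast part boils down to joining a group address so the network begins delivering the stream; the kernel emits the IGMP membership report when IP_ADD_MEMBERSHIP is set. The sketch below uses Python's standard socket module; the group address and port are placeholders rather than a real service, and the loop simply blocks until datagrams arrive.

    # Sketch of joining an IPTV-style multicast group and reading a few datagrams.
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5004               # placeholder group and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # join the group on the default interface; this is what triggers the IGMP report
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    for _ in range(3):                            # each datagram would typically carry RTP-wrapped video
        data, sender = sock.recvfrom(2048)
        print(len(data), "bytes from", sender)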

Virtual Keyboard


A virtual keyboard is a keyboard that a user operates by typing (moving fingers) on or within a wireless or optical-detectable surface or area rather than by depressing physical keys. In one technology, the keyboard is projected optically on a flat surface and, as the user touches the image of a key, the optical device detects the stroke and sends it to the computer. The Virtual Keyboard uses light to project a full-sized computer keyboard onto almost any surface, and disappears when not in use. The Virtual Key (VKEY) provides a practical way to do email, word processing and spreadsheet tasks.

Multi-touch


Multi-touch, in a computing context, is an interface technology that enables input through pressure and gestures on multiple points on the surface of a device. Although most commonly used with touch screens on handheld devices, such as smartphones and tablets, multi-touch has been adapted for other surfaces as well, including touch pads and mice, whiteboards, tables and walls. Gestures for multi-touch interfaces are often selected to be similar to real-life movements, so that the actions are intuitive and easily learned.
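
One of those real-life gestures, pinch-to-zoom, reduces to simple arithmetic on two touch points: the ratio of the current finger separation to the starting separation becomes the zoom factor. The toy sketch below shows just that mapping; the coordinates are invented screen pixels.

    # Toy pinch-to-zoom: two touch points -> a scale factor for the on-screen content.
    import math

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def pinch_zoom(start_touches, current_touches, current_scale=1.0):
        """start_touches / current_touches: [(x1, y1), (x2, y2)] in screen pixels."""
        ratio = distance(*current_touches) / distance(*start_touches)
        return current_scale * ratio

    # fingers move apart from 100 px separation to 250 px, so the content zooms to 2.5x
    print(pinch_zoom([(100, 300), (200, 300)], [(25, 300), (275, 300)]))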
 

Electronic nose (e-nose)


An electronic nose (e-nose) is a device that identifies the specific components of an odor and analyzes its chemical makeup to identify it. An electronic nose consists of a mechanism for chemical detection, such as an array of electronic sensors, and a mechanism for pattern recognition, such as a neural network. Electronic noses have been around for several years but have typically been large and expensive. Electronic noses based on the biological model work in a similar manner, albeit substituting sensors for the receptors, and transmitting the signal to a program for processing, rather than to the brain.
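
A tiny stand-in for the pattern-recognition half of an e-nose is shown below: each known odor leaves a characteristic "fingerprint" of responses across the sensor array, and a new reading is labelled with the closest stored fingerprint. A deployed e-nose would normally use a trained neural network rather than this nearest-neighbour toy, and the numbers here are invented.

    # Nearest-neighbour stand-in for the e-nose pattern-recognition stage (invented data).
    import math

    fingerprints = {                      # sensor-array responses recorded for known odors
        "coffee":  [0.82, 0.10, 0.55, 0.30],
        "ethanol": [0.20, 0.90, 0.15, 0.60],
        "apple":   [0.40, 0.35, 0.70, 0.10],
    }

    def classify(reading):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(fingerprints, key=lambda odor: dist(reading, fingerprints[odor]))

    print(classify([0.78, 0.12, 0.50, 0.33]))   # -> coffee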

Friday, September 25, 2015

List Of Seminar Topics For Computer Science Page - 6



Windows Azure service platform


Windows Azure service platform is a cloud Platform as a Service (PaaS) by Microsoft. It enables the development and hosting of applications on Microsoft’s managed data center. According to Microsoft, Azure features and services are exposed using open REST protocols. The Azure client libraries, which are available for multiple programming languages, are released under an open source license and hosted on GitHub. Every new mobile application needs a powerful set of server side services to power it. With Windows Azure Cloud Services you have everything you need to build the most robust, scalable APIs you can dream up. Take advantage of instant access to infinite scale so you can handle huge success without having to write any new code.

Protein-based optical computing and memories


The current and potential uses of bacteriorhodopsin in optical computing and memory devices are reviewed. The protein has significant potential for use in these applications due to unique intrinsic photophysical properties, and the range of chemical and genetic methods available for optimizing performance for specific application environments. The intrinsic properties of the native bacteriorhodopsin protein are described. The applications of bacteriorhodopsin in spatial light modulators, integral components in a majority of one-dimensional and two-dimensional optical processing environments, and holographic associative memories are presented.

Object cache


Object cache is a simple module using Drupal's cache API to store and retrieve objects (nodes, comments, users, etc.) to speed up the rendering of pages and lower the number of requests to the database, which benefits both anonymous and authenticated users. Since the Drupal cache API is used, these objects can also live in memcache or any other storage mechanism you can think of. The object-cache element can be used to specify the ObjectCache implementation used by OJB.

OpenMosix


OpenMosix is a Linux kernel extension for single-system image clustering. This kernel extension turns a network of ordinary computers into a supercomputer for Linux applications. Once you have installed openMosix, the nodes in the cluster start talking to one another and the cluster adapts itself to the workload. Processes originating from any one node, if that node is too busy compared to others, can migrate to any other node. openMosix continuously attempts to optimize the resource allocation. OpenMosix achieves this with a kernel patch for Linux, creating a reliable, fast and cost-efficient SSI clustering platform that is linearly scalable and adaptive.

Next generation network   (NGN)


The next-generation network (NGN) enables the deployment of access-independent services over converged fixed and mobile networks. The NGN is packet based and uses IP to transport the various types of traffic (voice, video, data and signalling). By definition, the NGN is essentially a managed IP-based (i.e., packet-switched) network that enables a wide variety of services. The NGN is a body of key architectural changes in telecommunication core and access networks, and it is expected to completely reshape the present structure of communication systems.

HDMI  (High Definition Multimedia Interface)


HDMI (High Definition Multimedia Interface) is a specification that combines video and audio into a single digital interface for use with digital versatile disc (DVD) players, digital television (DTV) players, set-top boxes, and other audiovisual devices. The basis for HDMI is High Bandwidth Digital Content Protection (HDCP) and the core technology of Digital Visual Interface (DVI). HDCP is an Intel specification used to protect digital content transmitted and received by DVI-compliant displays. HDMI has the capacity to support existing high-definition video formats such as 720p, 1080i, and 1080p, along with support of enhanced definition formats like 480p, as well as standard definition formats such as NTSC or PAL.

Smart Driver Updater v3.0


Smart driver updater v3.0.exe is a type of EXE file associated with Smart Driver Updater, developed by Dr. Ahmed Saker for the Windows operating system. The latest known version of Smart driver updater v3.0.exe is 5.0.0.0, which was produced for Windows 7. This EXE file carries a popularity rating of 1 star and a security rating of "UNKNOWN". EXE ("executable") files, such as smart driver updater v3.0.exe, are files that contain step-by-step instructions that a computer follows to carry out a function. When you double-click an EXE file, your computer automatically executes these instructions, designed by a software developer (e.g. Dr. Ahmed Saker) to run a program (e.g. Smart Driver Updater) on your PC.

Face Recognition Technology


Face recognition technology is the least intrusive and fastest biometric technology. It works with the most obvious individual identifier – the human face. Facial recognition technology (FRT) has emerged as an attractive solution to address many contemporary needs for identification and the verification of identity claims. It brings together the promise of other biometric systems, which attempt to tie identity to individually distinctive features of the body, and the more familiar functionality of visual surveillance systems. Instead of requiring people to place their hand on a reader (a process not acceptable in some cultures, as well as being a source of illness transfer) or precisely position their eye in front of a scanner, face recognition systems unobtrusively take pictures of people's faces as they enter a defined area.
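
Many current face recognition systems reduce each face image to a numeric feature vector (an "embedding") and compare vectors rather than raw pixels. The sketch below shows only that final comparison step of verification; the embeddings and the threshold are invented, and the model that would actually produce them from camera images is not shown.

    # Verification step only: are two face embeddings close enough to be the same person?
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def same_person(emb_a, emb_b, threshold=0.8):   # threshold is an assumed tuning value
        return euclidean(emb_a, emb_b) < threshold

    enrolled  = [0.12, -0.40, 0.88, 0.05, -0.33]    # stored template for an enrolled user
    live_scan = [0.10, -0.37, 0.91, 0.07, -0.30]    # embedding computed at the entrance camera
    impostor  = [0.90,  0.20, -0.10, 0.66, 0.44]

    print(same_person(enrolled, live_scan))   # True  - accepted
    print(same_person(enrolled, impostor))    # False - rejected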

Web 2.0


Web 2.0 is the current state of online technology as it compares to the early days of the Web, characterized by greater user interactivity and collaboration, more pervasive network connectivity and enhanced communication channels. Web 2.0 basically refers to the transition from static HTML Web pages to a more dynamic Web that is more organized and is based on serving Web applications to users. One of the most significant differences between Web 2.0 and the traditional World Wide Web (WWW, retroactively referred to as Web 1.0) is greater collaboration among Internet users, content providers and enterprises.

Smart Camera


Smart camera is a label which refers to cameras that have the ability to not only take pictures but also, more importantly, make sense of what is happening in the image and in some cases take some action on behalf of the camera user. Smart cameras are generally less expensive to purchase and set up than a PC-based solution, since they include the camera, lenses, lighting (sometimes), cabling and processing. Software tools available with smart cameras are of the point-and-click variety and are easier to use than those available on PCs. Algorithms come pre-packaged and do not need to be developed, thus making the smart camera quicker to set up and use.



 


List Of Seminar Topics For Computer Science Page - 5



Femtocell Technology


Femtocells are small cellular telecommunications base stations that can be installed in residential or business environments, either as single stand-alone items or in clusters, to provide improved cellular coverage within a building. It is widely known that cellular coverage, especially for data transmission where good signal strength is needed, is not as good within buildings. Femtocells are compatible with CDMA2000, WiMAX or UMTS mobile telephony devices, using the provider's own licensed spectrum to operate. Typically, consumer-oriented femtocells will support no more than four active users, while enterprise-grade femtocells can support up to 16 active users.


Inferno OS


Inferno is an operating system for creating and supporting distributed services. It was originally developed by the Computing Science Research Center of Bell Labs, the R&D arm of Lucent Technologies, and further developed by other groups in Lucent. Inferno was designed specifically as a commercial product, both for licensing in the marketplace and for use within new Lucent offerings. It encapsulates many years of Bell Labs research in operating systems, languages, on-the-fly compilers, graphics, security, networking and portability. Inferno runs directly on native hardware and also as an application providing a Virtual Operating System over other platforms. Applications can be developed and run on all Inferno platforms without modification or recompilation.

iPhone


iPhone is a smartphone made by Apple that combines an iPod, a tablet PC, a digital camera and a cellular phone. The device includes Internet browsing and networking capabilities. The iPhone also includes a 3.5-inch multi-touch screen (a 4-inch Retina Display on the iPhone 5), rather than a keyboard, that users manipulate with finger touches. The iPhone runs on a special version of Apple's Mac OS X operating system. Like the iPod, the iPhone synchronizes data with a user's personal computer, using iTunes as client software and Apple's proprietary USB connector. Apple says the iPhone's internal lithium-ion battery provides 8 hours of talk or video and up to 24 hours of music playback.

Project Natal 


Project Natal is Microsoft's 3D camera for the Xbox 360. It tracks people's motions in three dimensions and has a microphone capable of voice recognition. It is to be used as a controller-free method of playing video games, tracking the player's body movements and voice and transferring that information directly to the gaming console. Microsoft has described the code name "Project Natal" as having several sources. Project Natal will allow users to interact with other users using its camera, and it can recognize more than one person at a time.

Next-Generation Secure Computing Base  (NGSCB)


The Next Generation Secure Computing Base (NGSCB) is a part of the Microsoft Vista operating system (OS) that employs a trusted platform module (TPM), a specialized chip that can be installed on the motherboard of a personal computer (PC) or server for the purpose of hardware authentication. The TPM stores information specific to the host system, such as encryption keys, digital certificates and passwords. NGSCB employs a unique hardware and software design to enable new kinds of secure computing capabilities to provide enhanced data protection, privacy and system integrity.

Photosynth


Microsoft has released a new application called Photosynth. Any smartphone that has a compass and tilt-reading capability can use it. Photosynth offers two styles for creating immersive 3D experiences: panoramas and synths. Shoot a panorama when you can capture everything from a single location with a single zoom level; it is great for giving a sense of what it feels like to be in one particular place. A panorama can be 360° in both directions, but doesn't have to be.

Deep Web


Current automatic wrappers that use the DOM tree and visual properties of data records to extract the required information from the deep web generally have limitations, such as the inability to check the similarity of tree structures accurately. Data records located in the deep web do not only share similar visual properties and tree structures; they are also related semantically in their contents. There's a part of the Internet known as the deep web. The deep web is so called because of its massive size; it's literally 'deep'. According to The Guardian, you can only access 0.03% of the Internet via search engines like Google, and the rest is what makes up the deep web.

Semantic web


In addition to the classic “Web of documents”, W3C is helping to build a technology stack to support a “Web of data,” the sort of data you find in databases. The ultimate goal of the Web of data is to enable computers to do more useful work and to develop systems that can support trusted interactions over the network. The term “Semantic Web” refers to W3C’s vision of the Web of linked data. Semantic Web technologies enable people to create data stores on the Web, build vocabularies, and write rules for handling data.

HTML 5


HTML5 is a revision of the Hypertext Markup Language (HTML), the standard markup language for describing the contents and appearance of Web pages. HTML5 was developed to solve compatibility problems that affect the current standard, HTML4. One of the biggest differences between HTML5 and previous versions of the standard is that older versions of HTML require proprietary plug-ins and APIs for much of their rich content. (This is why a Web page that was built and tested in one browser may not load correctly in another browser.) HTML5 provides one common interface to make loading elements easier.

Ethical Hacking


Ethical hacking is also known as penetration testing, intrusion testing and red teaming. It involves detecting, reporting and exploiting security vulnerabilities. The state of security on the internet is bad and getting worse. One reaction to this state of affairs is ethical hacking, which attempts to increase security protection by identifying and patching known security vulnerabilities on systems owned by other parties. Ethical hacking is done solely to find system vulnerabilities, to find weak areas in system security which can cause loss of vital information. It is different from perimeter defense and network defense in that it enables system owners to adopt stronger security measures before an attacker exploits the weaknesses.

 


Tuesday, September 22, 2015

List Of Seminar Topics For Computer Science Page - 4



Biochip


A biochip is a collection of miniaturized test sites (microarrays) arranged on a solid substrate that permits many tests to be performed at the same time in order to achieve higher throughput and speed. Typically, a biochip's surface area is no larger than a fingernail. Like a computer chip that can perform millions of mathematical operations in one second, a biochip can perform thousands of biological reactions, such as decoding genes, in a few seconds. Biochips are commonly defined as devices that contain tens of millions of individual sensor elements or biosensors, packed together into a micron-sized package. Thus, biochips are arrays of biological material fixed to a solid surface with a high density of integration. These biochips are often made using the same microfabrication technology as conventional microchips.

Autonomic Computing


Autonomic computing is a computer's ability to manage itself automatically through adaptive technologies that further computing capabilities and cut down on the time required by computer professionals to resolve system difficulties and other maintenance such as software updates. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user. Autonomic computing is one of the building blocks of pervasive computing, an anticipated future computing model in which tiny, even invisible, computers will be all around us, communicating through increasingly interconnected networks.

Artificial Passenger  (AP)


The AP is an artificial intelligence-based companion that will be resident in software and chips embedded in the automobile dashboard. The heart of the system is a conversation planner that holds a profile of you, including details of your interests and profession. IBM has developed a prototype that holds a conversation with a driver, telling jokes and asking questions intended to determine whether the driver can respond alertly enough. Assuming the IBM approach, an artificial passenger would use a microphone for the driver and a speech generator and the vehicle's audio speakers to converse with the driver.

Optical camouflage


Optical camouflage is a hypothetical type of active camouflage currently only in a very primitive stage of development. The idea is relatively straightforward: to create the illusion of invisibility by covering an object with something that projects the scene directly behind that object. Camouflage is a method of crypsis (hiding). It allows an otherwise visible organism or object to remain unnoticed, by blending with its environment. Optical camouflage uses the retro-reflective projection technology, a projection-based augmented-reality system composed of a projector with a small iris and a retro-reflective screen. The object that needs to be made transparent is painted or covered with retro-reflective material. Then a projector projects the background image on it making the masking object virtually transparent.

Green Computing  (Green IT)


Green computing is an umbrella term referring to an eco-conscious way of developing, using and recycling technology, as well as utilizing resources in a more planet-friendly manner. It is the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems (such as monitors, printers, storage devices, and networking and communications systems) efficiently and effectively with minimal or no impact on the environment. Green IT also strives to achieve economic viability and improved system performance and use, while abiding by our social and ethical responsibilities. Thus, green IT includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling.

Space Mouse


The SpaceMouse is a peripheral tool for controlling three-dimensional objects in computer applications. This tool represents part of the vanguard of a class of three-dimensional mouse products that allow users to make more natural movements when manipulating three-dimensional objects on a screen. A three-dimensional mouse has its own system of controls and its own signals that need to be interpreted by an operating system and/or application. The SpaceMouse is a professional 3D controller specifically designed for manipulating objects in a 3D environment. It permits the simultaneous control of all six degrees of freedom: translation, rotation, or a combination. The device serves as an intuitive man-machine interface.

MPEG 7  (Multimedia Content Description Interface)


MPEG-7, formally known as the Multimedia Content Description Interface, includes standardized tools (descriptors, description schemes, and a language) enabling structural, detailed descriptions of audio-visual information at different granularity levels (region, image, video segment, collection) and in different areas (content description, management, organization, navigation, and user interaction). It aims to support and facilitate a wide range of applications, such as media portals, content broadcasting, and ubiquitous multimedia. This topic presents a high-level overview of the MPEG-7 standard. More specifically, MPEG-7 specifies color, texture, object shape, global motion, or object motion features for this purpose.

Smart quill


SmartQuill is a pen that can remember the words written with it and transform them into computer text; in this way the pen is linked to the computer, and it belongs firmly to the new millennium era. SmartQuill is very different from ordinary digital pens: it is a fountain-pen-sized handheld computer, developed as a working prototype by Williams, a scientist from Microsoft's research laboratory. SmartQuill contains sensors that record movement by using the earth's gravity, irrespective of the surface used, and the pen records the information entered by the user.

Co-operative Linux


Cooperative Linux is the first working free and open source method for optimally running Linux on Microsoft Windows natively. More generally, Cooperative Linux (short-named coLinux) is a port of the Linux kernel that allows it to run cooperatively alongside another operating system on a single machine. Cooperative Linux allows the use of native Linux applications without having to switch operating systems, rebooting, or using more resource-consuming full virtualization solutions. It also ensures continued full compatibility with Windows applications.

Universal Robot Control System (URCS)


The design and development of a universal robot control system (URCS) that would enable computation-intensive control algorithms to be implemented and modified is reported. This required shifting from hardware to software, using high-performance computing platforms. In general, multiprocessing has been found to be a cost-effective method for increasing performance, especially when the control algorithm can be composed into concurrent computational tasks. The URCS was developed using the University of Toronto Multiprocessor System (TUNIS) as the computing platform.











 





 



Saturday, September 19, 2015

List Of Seminar Topics For Computer Science Page - 3



E Paper


E-paper (sometimes called radio paper or just electronic paper) is a portable, reusable storage and display medium that looks like paper but can be repeatedly written on (refreshed) - by electronic means - thousands or millions of times. E-paper will be used for applications such as e-books, electronic newspapers, portable signs, and foldable, rollable displays. Information to be displayed is downloaded through a connection to a computer or a cell phone, or created with mechanical tools such as an electronic "pencil". An e-paper display actually uses no power when the image is not changing.

Smart Fabrics


The term “Smart Fabrics” refers to a broad and somewhat ill-defined field of study and products that extend the functionality and usefulness of fabrics. Humanity has used various types of fabrics for thousands of years to keep warm, provide comfort, and protect from the elements of nature. For most of recorded history, fabrics have also provided a means of self-expression through colors, patterns, cuts, and other stylistic elements. The basic technological elements of smart fabrics are conductive or semiconductive threads and yarns, nanoelectronics applied directly to fibers, yarns, or woven elements, and chemical treatments that provide different features. Smart fabrics are designed to maximise characteristics such as lightness, breathability and waterproofing, or to react to heat or light. They are usually manufactured using microfibres.

Intel Santa Rosa


As with all Centrino platforms, Santa Rosa is the codename given to a combination of Intel components: CPU, chipset and wireless Ethernet. With Santa Rosa there's a new optional fourth component, now called Intel Turbo Memory but at one point it was known as Robson. The Santa Rosa CPU is the same 65nm Merom based Core 2 Duo processor that was introduced last year with a few minor changes. The most noticeable change is that Santa Rosa CPUs can support up to an 800MHz FSB, up from 667MHz. The Core 2 Duo is a data hungry CPU, and thus giving it a faster FSB should improve overall performance when plugged in. A faster FSB is also necessary as Intel increases clock speeds; the faster your CPU runs, the faster it needs data to work on in order to operate efficiently.

Optical Computers 


An optical computer (also called a photonic computer) is a device that uses the photons in visible light or infrared (IR) beams, rather than electric current, to perform digital computations. An electric current flows at only about 10 percent of the speed of light. This limits the rate at which data can be exchanged over long distances, and is one of the factors that led to the evolution of optical fiber. The computers we use today use transistors and semiconductors to control electricity. Computers of the future may utilize crystals and metamaterials to control light. Optical computers make use of light particles called photons.

Plasmonics


Plasmonics is the study of the interaction between an electromagnetic field and the free electrons in a metal. Free electrons in the metal can be excited by the electric component of light into collective oscillations. However, due to Ohmic loss and electron-core interactions, losses are inevitable for the plasmon oscillation, which is usually detrimental to most plasmonic devices. Meanwhile, the absorption of light in the metal can be enhanced greatly by properly designing metal patterns for surface plasmon (SP) excitation. Plasmonics takes advantage of the coupling of light to charges such as electrons in metals, and allows breaking the diffraction limit for the localization of light into subwavelength dimensions, enabling strong field enhancements.

Wireless USB



Wireless USB (WUSB) is a form of Universal Serial Bus (USB) technology that uses radio-frequency (RF) links rather than cables to provide the interfaces between a computer and peripherals, such as monitors, printers, external drives, headsets, MP3 players and digital cameras. The WUSB technology is based on the WiMedia ultra-wideband common radio platform. An advantage of using wireless USB is the ability to sync to multiple media devices. The disadvantages are that the device may run slower than normal, there is a risk of damage because the USB adapter sticks out of the computer, and there is a greater risk of having information hacked.

Ajax  (Asynchronous JavaScript and XML)


AJAX is not a new programming language, but a new way to use existing standards. AJAX is the art of exchanging data with a server, and updating parts of a web page, without reloading the whole page. AJAX is a technique for creating better, faster, and more interactive web applications with the help of XML, HTML, CSS, and JavaScript. Ajax uses XHTML for content and CSS for presentation, along with the Document Object Model and JavaScript for dynamic content display. Ajax is a client-side script that communicates to and from a server/database without the need for a postback or a complete page refresh.

ZIGBEE


ZigBee is the wireless language that everyday devices use to connect to one another. ZigBee is designed for wireless automation and other low-data-rate tasks, such as smart home automation and remote monitoring. ZigBee is a low-cost, low-power, wireless mesh networking standard. The low cost allows the technology to be widely deployed in wireless control and monitoring applications, the low power usage allows longer life with smaller batteries, and the mesh networking provides high reliability and larger range. Due to its low cost and low power usage, this wireless technology is widely used in home automation, smart energy, telecommunication applications, personal home and hospital care. ZigBee enables new opportunities for wireless sensor and control networks. ZigBee is standards based, low cost, can be used globally, is reliable and self-healing, supports large numbers of nodes, is easy to deploy, has very long battery life and is secure.


Polymer Memory


Polymer memory refers to memory technologies based on the use of organic polymers. Some of these technologies use changes in the resistance of conducting polymers under read/write control. Other architectures are based on ferroelectric polymers. The properties of polymer memory are low-cost and high-performance, and have the potential for 3D stacking and mechanical flexibility. Variants can be write-once or multiple-write. Printed versions of this technology already exist and are used in low-density applications such as toys.

Bio-Computer



Biocomputers use systems of biologically derived molecules, such as DNA and proteins, to perform computational calculations and to store, retrieve and process data. Biocomputing is one of the upcoming fields in the areas of molecular electronics and nanotechnology. The idea behind blending biology with technology stems from the limitations faced by semiconductor designers in decreasing the size of silicon chips, which directly affects processor speed. Biocomputers consist of biochips, unlike normal computers, which are silicon-based. The biochip consists of biomaterial such as nucleic acids, enzymes, etc.

List Of Seminar Topics For Computer Science - Page 2



Zettabyte File System (ZFS)


ZFS, the Zettabyte File System, is an enormous advance in capability over existing file systems. It provides greater space for files, hugely improved administration and greatly improved data security. Files stored on a computer are managed by the file system of the operating system. When a computer is used to store illegal data such as child pornography, it is important that the existence of the illegal data can be proven even after the data is deleted. In one study, a new functionality was added to the Zettabyte File System (ZFS) debugger, which digs into the physical disk of the computer without using the file system layer of the operating system.

MRAM  (Magnetic RAM / Magnetoresistive RAM)


MRAM (magnetoresistive random access memory) is a method of storing data bits using magnetic states instead of the electrical charges used by DRAM (dynamic random access memory). Scientists define a metal as magnetoresistive if it shows a slight change in electrical resistance when placed in a magnetic field. By combining the high speed of static RAM and the high density of DRAM, proponents say MRAM could be used to significantly improve electronic products by storing greater amounts of data, enabling the data to be accessed faster while consuming less battery power than existing electronic memory.

Voice XML 


VoiceXML is an application of the Extensible Markup Language (XML) which, when combined with voice recognition technology, enables interactive access to the Web through the telephone or a voice-driven browser. An individual session works through a combination of voice recognition and keypad entry. VoiceXML is designed for creating audio dialogs that feature synthesized speech, digitized audio, recognition of spoken and DTMF key input, recording of spoken input, telephony, and mixed-initiative conversations. Its major goal is to bring the advantages of Web-based development and content delivery to interactive voice response applications.

JMX  (Java Management Extensions)


JMX (Java Management Extensions) is a set of specifications for application and network management in the J2EE development and application environment. JMX defines a method for Java developers to integrate their applications with existing network management software by dynamically assigning Java objects with management attributes and operations. By encouraging developers to integrate independent Java management modules into existing management systems, the Java Community Process (JCP) and industry leaders hope that developers will consider non-proprietary management as a fundamental issue rather than as an afterthought. A management interface, as defined by JMX, is composed of named objects - called MBeans (Management Beans). MBeans are registered with a name (an Object Name) in an MBeanServer.

Vtion Wireless Tech AG 


Vtion Wireless Technology AG, through its subsidiaries, provides wireless data card solutions for mobile computing through broadband wireless networks in the People's Republic of China. The company operates in three segments: Wireless Data Terminals, Wireless Intelligent Terminals, and All Others. The company primarily supplies wireless data card products and related after-sales service support for the mobile use of computers, as well as interface conversion card products. It provides a range of 3G wireless data cards that fit PCMCIA, USB, Mini-USB, ExpressCard/34, and PCI Express Mini interfaces of laptops or personal computers. The company's data cards are used primarily by business customers and governmental organizations to enable their employees to access a range of applications, including the Internet, e-mail, corporate intranets, remote databases, and corporate applications.

HVD (Holographic Versatile Disc) 


HVD (Holographic Versatile Disc) is the next generation in optical disk technology. HVD is still in a research phase that would phenomenally increase the disk storage capacities over the currently existing HD DVD and Blu-ray optical disk systems. According to published statistics, when produced in full scale, HVDs will have a storage capacity of 3.9 terabytes (39,000 GB) and a data transfer rate of 1 GB/s, which is at least six times more than the speed of DVD players. This would, without a doubt, become a giant step in revolutionizing the disk storage industry. Holographic versatile disc (HVD) is a holographic storage format that looks like a DVD but is capable of storing far more data.

Tempest & Echelon


TEMPEST and ECHELON are methods of spying in a sophisticated manner; both were developed by the National Security Agency (NSA) for monitoring people. These technologies were originally developed for pure military espionage, but hackers now use them for spying on other people's activities. TEMPEST is a code word that relates to specific standards used to reduce electromagnetic emanations. ECHELON is the technology for sniffing through messages sent over a network or any transmission medium, even wireless messages. TEMPEST is the technology for intercepting electromagnetic waves over the air; it simply sniffs through the electromagnetic waves propagated from any device, even from the monitor of a computer screen.

DCCP (Datagram Congestion Control Protocol)


The Datagram Congestion Control Protocol (DCCP) is a transport protocol that provides bidirectional unicast connections of congestion-controlled unreliable datagrams. DCCP is suitable for applications that transfer fairly large amounts of data and that can benefit from control over the tradeoff between timeliness and reliability. DCCP is intended for applications such as streaming media that can benefit from control over the tradeoffs between delay and reliable in-order delivery. DCCP is a packet stream protocol, not a byte stream protocol. The application is responsible for framing.

Robotic Surgery


Robotic surgery is a method to perform surgery using very small tools attached to a robotic arm. The surgeon controls the robotic arm with a computer. Robotic surgery is similar to laparoscopic surgery. It can be performed through smaller cuts than open surgery. The small, precise movements that are possible with this type of surgery give it some advantages over standard endoscopic techniques.
The surgeon can make small, precise movements using this method. This can allow the surgeon to do a procedure through a small cut that once could be done only with open surgery.

Resilient packet ring system 


Resilient Packet Ring (RPR) technology is optimized for robust and efficient packet networking over a fiber ring topology. This technology incorporates extensive performance monitoring. Resilient Packet Ring (RPR) is a network topology being developed as a new standard for fiber optic rings. The Institute of Electrical and Electronic Engineers (IEEE) began the RPR standards (IEEE 802.17) development project in December 2000 with the intention of creating a new Media Access Control layer for fiber optic rings. The IEEE working group is part of the IEEE's local area network (LAN) and metropolitan area network (MAN) Committee. Fiber optic rings are widely deployed as part of both MANs and wide area networks (WANs); however, these topologies are dependent on protocols that aren't optimized or scalable to meet the demands of packet-switched networks.


Steganography

Seminar topic on Steganography


Abstract on Steganography


Steganography is the technique of hiding private or sensitive information within something that appears to be nothing out of the usual. Steganography is often confused with cryptology because the two are similar in the way that they are both used to protect important information. The difference between the two is that steganography involves hiding information so it appears that no information is hidden at all. In this paper, we describe a method of steganography based on embedding encrypted message bits, using the RSA algorithm, in the first least significant bit (LSB technique) and the last four significant bits (modulus 4-bit technique) of the pixels of an image. Here we also provide integrity using the MD5 hash algorithm. The analysis shows that the PSNR is improved in the case of the LSB technique. Use of the hash algorithm provides data integrity.

Steganography is a form of security through obscurity: the science and art of hiding the existence of a message between sender and intended recipient. Steganography has been used to hide secret messages in various types of files, including digital images, audio and video. The three most important parameters for audio steganography are imperceptibility, payload, and robustness. Different applications have different requirements of the steganography technique used. This paper intends to give an overview of image steganography, its uses and techniques.

How it Works

 

The core of any steganographic method is how it encodes a single byte (8 bits) of information within the image. Some methods take advantage of the file structure of the image and hide data in special data fields. For example, in the BMP file format, the offset between the file information and pixel data can be manually specified. This presents the interesting possibility of hiding an entire file between the file information and the pixel data without altering the image at all. In practice, such methods are impossible to detect visually but easy to detect with a computer: if the file size is larger than the minimum size necessary for the image dimensions and color depth (according to the file format specification), then it probably includes hidden data.
Other steganographic methods hide data by modifying the pixels of the actual image by small amounts. Typically, the modification is done by changing the least significant bit (or bits) of the red, green, blue and, where applicable, alpha channels of one or more pixels. The sketch below illustrates this approach.
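A minimal sketch of the LSB idea in Python: one byte of the secret message is spread across the least significant bits of eight 8-bit channel values. The function names and the flat list of channel values are our own illustration, not any particular tool's API.

```python
def embed_byte(pixels, value):
    """Hide one byte in the LSBs of eight 8-bit channel values (R, G, B, ...),
    most significant message bit first."""
    assert len(pixels) >= 8 and 0 <= value <= 255
    out = list(pixels)
    for i in range(8):
        bit = (value >> (7 - i)) & 1
        out[i] = (out[i] & 0xFE) | bit   # clear the LSB, then set it to the message bit
    return out

def extract_byte(pixels):
    """Recover the byte hidden by embed_byte."""
    value = 0
    for i in range(8):
        value = (value << 1) | (pixels[i] & 1)
    return value

# Round trip on made-up channel values:
stego = embed_byte([120, 121, 122, 123, 124, 125, 126, 127], ord("A"))
assert extract_byte(stego) == ord("A")
```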

Detection and Countermeasures

Detecting and preventing steganographic messages from being transmitted is an extremely difficult task. Depending on the circumstances, it may be impossible to prove that someone is even sending messages in the first place!

When it comes to combating (or rather detecting) physical steganographic messages, the key is really to understand all of the possible ways a message could be hidden in or on an object, such as invisible ink that is only visible when exposed to certain chemicals or ultraviolet light. There is a long history of steganography used in the real world eventually being discovered: hidden messages sent through newspapers, media, and seemingly normal communications. Detecting steganography when you don't know what you're looking for is partially a guessing game, and partially a pattern-finding exercise.

In computing, detection of steganographically encoded packages is called steganalysis. The simplest method of detecting files that may have been modified to carry steganographic messages is to compare the files to known, clean originals. For example, if one wanted to see whether a website had been modified to hide a message in its image files, one could compare the image files currently on the site with the original files intended to be there. The differences in the files (if any) would reveal the steganographic message in its entirety.
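A small sketch of that comparison idea, using only Python's standard hashlib module: hash the suspect file and the known-clean original and flag any difference. The file names in the usage line are hypothetical; a differing digest only shows that something changed, after which the changed bytes themselves would be inspected.

```python
import hashlib

def files_differ(original_path: str, suspect_path: str) -> bool:
    """Return True if the suspect file's content differs from the clean original."""
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.digest()
    return digest(original_path) != digest(suspect_path)

# Hypothetical usage: files_differ("logo_original.png", "logo_on_site.png")
```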

Another common method of combating steganography is data compression. Lossy data compression (such as the JPEG image format) can completely destroy any hidden data bits within the file. Reducing the size of the file through compression also significantly decreases the amount of space available for a hidden message to reside in.
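A short sketch of that countermeasure, assuming the Pillow imaging library is available: re-encoding a cover image with lossy JPEG compression discards exactly the low-order pixel detail an LSB payload depends on. The file names are hypothetical.

```python
from PIL import Image  # assumes the Pillow library is installed

def scrub_lsb_payload(in_path: str, out_path: str, quality: int = 85) -> None:
    """Re-encode an image as lossy JPEG, destroying any LSB-embedded data."""
    img = Image.open(in_path).convert("RGB")
    img.save(out_path, format="JPEG", quality=quality)

# Hypothetical usage: scrub_lsb_payload("upload.png", "upload_clean.jpg")
```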

Types of encoding


Least Significant Bit encoding


This is the most popular method for encoding images. Programs that use the Least Significant Bit, or LSB, method encode the message in the least significant bit of every byte in an image. By doing so, the value of each pixel is changed slightly, but not enough to make significant changes to the image. In a 24-bit image, 3 bytes are used for each pixel, so each pixel can encode 3 bits of a secret message. The altered image looks identical to the human eye, even when compared to the original. However, 24-bit images are quite large and are not a popular way of sending images around the web, so their size alone could arouse suspicion. A more plausible container image is a 256-color image, where 1 byte is used for each pixel. A 640 x 480 image of this quality can store about 300 kilobits of data. With a large enough image, one could even hide an image within another image. Popular programs that use LSB encoding include White Noise Storm and S-Tools.
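The capacity figures above can be checked with a few lines of Python (the function name is ours): one hidden bit per channel byte gives width x height x channels bits.

```python
def lsb_capacity_bits(width: int, height: int, channels_per_pixel: int) -> int:
    """Bits of hidden data a cover image can hold at one bit per channel byte."""
    return width * height * channels_per_pixel

# 256-colour image (1 byte per pixel): 640 * 480 * 1 = 307,200 bits, roughly 300 kilobits
print(lsb_capacity_bits(640, 480, 1))
# A 24-bit image (3 bytes per pixel) of the same size holds three times as much
print(lsb_capacity_bits(640, 480, 3))
```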

Frequency Domain encoding


This method encodes messages within images by working with the two-dimensional Fast Fourier Transform (2-D FFT) of the container image. The 2-D FFT separates the frequencies of the image into rings centered on the zero-frequency point: the rings closest to the center represent the low frequencies of the image, and those furthest away represent the high frequencies. In the frequency domain encoding method, the secret message is encoded in the middle frequencies of the image. This is done by converting the message text to bits and overlaying these bits in a ring shape in the desired frequency band of the 2-D FFT. Although the ring of bits appears dark and conspicuous in the 2-D FFT, the effect on the image itself is very slight. An image encoded this way is also better able to withstand noise, compression, translation, and rotation than one encoded by the LSB method.
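A toy sketch, assuming NumPy, of embedding bits on a mid-frequency ring of the 2-D FFT. It simply nudges individual coefficients for each '1' bit rather than reproducing the exact scheme of any published tool, and the cover image in the usage lines is synthetic.

```python
import numpy as np  # assumed available

def embed_ring(cover, bits, radius, strength=50.0):
    """Raise the magnitude of mid-frequency coefficients on a ring, one per '1' bit.

    `cover` is a 2-D greyscale array; `radius` selects the frequency band."""
    spectrum = np.fft.fftshift(np.fft.fft2(cover))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    angles = np.linspace(0, 2 * np.pi, num=len(bits), endpoint=False)
    for bit, theta in zip(bits, angles):
        if bit:
            y = cy + int(radius * np.sin(theta))
            x = cx + int(radius * np.cos(theta))
            spectrum[y, x] += strength          # nudge one mid-frequency coefficient
    stego = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
    return np.clip(stego, 0, 255)

# Synthetic 128x128 cover image, four message bits:
cover = np.random.randint(0, 256, (128, 128)).astype(float)
stego = embed_ring(cover, bits=[1, 0, 1, 1], radius=20)
```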

DIFFERENT KINDS OF STEGANOGRAPHY

The four main categories of file formats that can be used for steganography are: 
  • Text 
  • Images 
  • Audio 
  • Protocol  

Text steganography:

Hiding information in text is historically the most important method of steganography: the secret message is hidden behind some other text. It is a difficult form of steganography, as text files contain very little redundant data in which to hide a secret message.

Image steganography:

It is one of the most commonly used techniques because of the limitations of the Human Visual System (HVS): the human eye cannot perceive the vast range of colors in an image, and so does not notice the insignificant change in quality that results from steganography.

Audio steganography:

It is also a difficult form of steganography, as humans are able to detect a minute change in the quality of audio.

Protocol steganography:

Protocol steganography refers to embedding information within network protocols such as TCP/IP. Information is hidden in fields of the TCP/IP packet headers that are either optional or never used.
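A hedged sketch of the idea in Python: pack two secret bytes into the 16-bit Identification field of a hand-built IPv4 header. The header is only constructed as bytes and never sent, the checksum is left at zero for simplicity, and the addresses are documentation-range placeholders.

```python
import struct

def ipv4_header_with_hidden_id(secret_two_bytes: bytes) -> bytes:
    """Build a minimal 20-byte IPv4 header whose Identification field carries two secret bytes."""
    assert len(secret_two_bytes) == 2
    version_ihl = (4 << 4) | 5                              # IPv4, 20-byte header
    total_length = 20
    identification = struct.unpack("!H", secret_two_bytes)[0]  # the covert channel
    flags_fragment = 0
    ttl, proto, checksum = 64, 6, 0                         # protocol 6 = TCP, checksum omitted
    src = struct.unpack("!I", bytes([192, 0, 2, 1]))[0]     # placeholder addresses
    dst = struct.unpack("!I", bytes([192, 0, 2, 2]))[0]
    return struct.pack("!BBHHHBBHII",
                       version_ihl, 0, total_length,
                       identification, flags_fragment,
                       ttl, proto, checksum, src, dst)

header = ipv4_header_with_hidden_id(b"Hi")
print(header.hex())
```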






References
http://www.ijceronline.com/papers/Vol2_issue7/AF02701900193.pdf
http://ieeexplore.ieee.org/
http://easybmp.sourceforge.net/steganography.html
https://www.clear.rice.edu/elec301/Projects01/steganosaurus/background.html
http://www.garykessler.net/library/steganography.html
http://www.ijettjournal.org/volume-4/issue-7/IJETT-V4I7P186.pdf

Tuesday, September 15, 2015

3D Holographic

seminar topic on 3D Holographic


Abstract on  3D Holographic

Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects, and is unrivalled in high-end industry and scientific research for non-contact inspection of precision 3D components and microscopic 3D samples. Many existing 3D imaging techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.). The advantage of holograms is that multiple 2D perspectives can be optically combined in parallel in one step independent of the hologram size. Recently digital holography (holography using a digital camera) has become feasible due to advances in scientific camera technology. The advantage of a digital representation of holograms is that they can be processed, analysed, and transmitted electronically.

Monday, September 14, 2015

Map Reduce

 Computer Seminar Topic on Map Reduce

Abstract on  MapReduce

MapReduce is a distributed processing framework in which an application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster.
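The model can be illustrated with a tiny single-process word count in Python; the map, shuffle and reduce function names are ours and simply mirror the phases described above.

```python
from collections import defaultdict

def map_phase(document):
    """Map step: emit a (word, 1) pair for every word in one fragment of input."""
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    """Framework step: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: combine each group of values into a final result."""
    return {word: sum(counts) for word, counts in groups.items()}

fragments = ["the quick brown fox", "the lazy dog", "the quick dog"]
pairs = [pair for doc in fragments for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))   # {'the': 3, 'quick': 2, ...}
```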

Hadoop

 Computer Seminar Topic on Hadoop

Abstract on  Hadoop

Hadoop is a Java software framework that supports data-intensive distributed applications and is developed under an open source license. It enables applications to work with thousands of nodes and petabytes of data. The two major pieces of Hadoop are MapReduce, its distributed processing engine, and HDFS, Hadoop's own file system, which is designed to scale to petabytes of storage and runs on top of the file systems of the underlying operating systems.

What is Hadoop?

Apache™ Hadoop® is an open source software project that enables distributed processing of large data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance. Rather than relying on high-end hardware, the resiliency of these clusters comes from the software's ability to detect and handle failures at the application layer.

Hadoop makes it possible to run applications on systems with thousands of nodes involving thousands of terabytes. Its distributed file system facilitates rapid data transfer rates among nodes and allows the system to continue operating uninterrupted in case of a node failure. This approach lowers the risk of catastrophic system failure, even if a significant number of nodes become inoperative.

Hadoop was inspired by Google's MapReduce, a software framework in which an application is broken down into numerous small parts. Any of these parts (also called fragments or blocks) can be run on any node in the cluster. Doug Cutting, Hadoop's creator, named the framework after his child's stuffed toy elephant. The current Apache Hadoop ecosystem consists of the Hadoop kernel, MapReduce, the Hadoop distributed file system (HDFS) and a number of related projects such as Apache Hive, HBase and Zookeeper.
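As a sketch of what such a fragment of work looks like in practice, the classic word-count mapper and reducer below are written for Hadoop Streaming in Python; combining both roles in one listing is only for brevity, and exact job-submission commands vary by installation.

```python
#!/usr/bin/env python
# Word count for Hadoop Streaming. In practice the mapper and reducer would be
# two separate scripts passed to the streaming jar with -mapper and -reducer.
import sys

def mapper():
    """Emit "word<TAB>1" for every word read from standard input."""
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

def reducer():
    """Sum counts for each word; Hadoop delivers the mapper output sorted by key."""
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current:
            total += int(count)
        else:
            if current is not None:
                print("%s\t%d" % (current, total))
            current, total = word, int(count)
    if current is not None:
        print("%s\t%d" % (current, total))

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

A job would typically be submitted with the Hadoop Streaming jar, roughly: hadoop jar hadoop-streaming-*.jar -input <in> -output <out> -mapper mapper.py -reducer reducer.py (the jar location and script packaging depend on the installation).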

http://searchcloudcomputing.techtarget.com/definition/Hadoop
http://www-01.ibm.com/software/data/infosphere/hadoop/
http://www.sas.com/en_us/insights/big-data/hadoop.html



Thursday, September 10, 2015

3D Printing


3D Printing - Seminar topic

Abstract on 3D Printing

3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by laying down successive layers of material. Also known as rapid prototyping, it is a mechanized method whereby 3D objects are quickly made on a reasonably sized machine connected to a computer containing blueprints for the object. The 3D printing concept of custom manufacturing is exciting to nearly everyone. This revolutionary method for creating 3D models with the use of inkjet technology saves time and cost by eliminating the need to design, print and glue together separate model parts; a complete model can now be created in a single process. The basic principles include material cartridges, flexibility of output, and translation of code into a visible pattern.
 

What is 3D printing?


3D printing or additive manufacturing is a process of making three dimensional solid objects from a digital file. The creation of a 3D printed object is achieved using additive processes. In an additive process an object is created by laying down successive layers of material until the entire object is created. Each of these layers can be seen as a thinly sliced horizontal cross-section of the eventual object.

How does 3D printing work?

It all starts with making a virtual design of the object you want to create. This virtual design is made in a CAD (Computer Aided Design) file using a 3D modeling program (for the creation of a totally new object) or with the use of a 3D scanner (to copy an existing object). A 3D scanner makes a 3D digital copy of an object.

3D scanners use different technologies to generate a 3D model, such as time-of-flight, structured/modulated light, volumetric scanning and many more.

Recently, many IT companies like Microsoft and Google have enabled their hardware to perform 3D scanning; a great example is Microsoft's Kinect. This is a clear sign that future hand-held devices like smartphones will have integrated 3D scanners. Digitizing real objects into 3D models will become as easy as taking a picture. Prices of 3D scanners range from very expensive professional industrial devices to 30 USD DIY devices anyone can make at home.

To prepare a digital file for printing, 3D modeling software "slices" the final model into hundreds or thousands of horizontal layers. When the sliced file is uploaded to a 3D printer, the object can be created layer by layer: the printer reads every slice (or 2D image) and builds up the object, blending each layer with hardly any visible sign of the layering, resulting in the finished three-dimensional object.
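A minimal sketch, in Python, of what "slicing" means geometrically: choose the heights of the cutting planes, then intersect each mesh triangle with a plane to get one line segment of that layer's outline. Real slicers also stitch these segments into closed contours and handle degenerate cases; the sample triangle below is made up.

```python
def slice_heights(z_min, z_max, layer_height):
    """Z coordinates of the horizontal cutting planes used by a slicer."""
    z = z_min + layer_height / 2.0
    while z < z_max:
        yield z
        z += layer_height

def segment_at(triangle, z):
    """Intersection of one mesh triangle with the plane at height z, or None."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = triangle
    if not (min(z1, z2, z3) <= z <= max(z1, z2, z3)):
        return None
    points = []
    for (ax, ay, az), (bx, by, bz) in [((x1, y1, z1), (x2, y2, z2)),
                                       ((x2, y2, z2), (x3, y3, z3)),
                                       ((x3, y3, z3), (x1, y1, z1))]:
        if (az - z) * (bz - z) < 0:                 # edge crosses the cutting plane
            t = (z - az) / (bz - az)
            points.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return tuple(points) if len(points) == 2 else None

# One triangle of a hypothetical 10 mm tall part, sliced every 0.2 mm:
tri = ((0, 0, 0), (10, 0, 10), (0, 10, 10))
layers = [(z, segment_at(tri, z)) for z in slice_heights(0.0, 10.0, 0.2)]
```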

History of 3d Printing 

The technology for printing physical 3D objects from digital data was first developed by Charles Hull in 1984. He named the technique stereolithography and obtained a patent for it in 1986. After obtaining the patent, he founded 3D Systems and developed the first commercial 3D printing machine. However, the term "3D printer" was not yet in use, and the machine was called simply a stereolithography apparatus. As the technology was very new, 3D Systems delivered the first version of the machine to only a few selected customers and, based on their feedback, developed an improved version, named SLA-250, which was made available to the general public in 1988.

While stereolithography systems had become popular by the end of the 1980s, other similar technologies, such as Fused Deposition Modeling (FDM) and Selective Laser Sintering (SLS), were introduced. FDM was invented in 1988 by Scott Crump, who founded Stratasys the following year to commercialize the technology. Stratasys sold its first FDM-based machine, "3D Modeler", in 1992. During the same year, DTM marketed SLS-based systems.

In 1993, Massachusetts Institute of Technology (MIT) patented another technology, named "3 Dimensional Printing techniques", which is similar to the inkjet technology used in 2D Printers. In 1995, Z Corporation obtained an exclusive license from MIT to use the technology and started developing 3D Printers based on 3DP technology.

In 1996, three major products, "Genisys" from Stratasys, "Actua 2100" from 3D Systems and "Z402" from Z Corporation, were introduced. It was only during this period that the term "3D printer" was first used to refer to rapid prototyping machines. During the late 1990s and early 2000s, several relatively low-cost 3D printers came onto the market.

In 2005, Z Corp. launched a breakthrough product, named Spectrum Z510, which was the first high definition color 3D Printer in the market.

Another breakthrough in 3D printing occurred in 2006 with the initiation of an open source project, named RepRap, aimed at developing a self-replicating 3D printer. The first version of RepRap, released in 2008, could manufacture about 50 percent of its own parts. The second version of RepRap is currently under development.


3D Printers and 3D Printing: Technologies, Processes and Techniques

Here you will find information about the different types of 3D printing processes as well as the various 3D printers used for each technology.

Stereolithography (SLA)
PolyJet & MultiJet
Digital Light Processing (DLP)
Selective Laser Sintering (SLS)
Metal 3D Printing (DMLS & EBM)
Full Color 3D Printing (Binder Jetting, SDL & Triple jetting)
Fused Deposition Modeling or Fused Filament Fabrication (FDM/FFF)

Stereolithography (SLA)

Stereolithography (SL) is one of several methods used to create 3D-printed objects. It is the process by which a uniquely designed 3D printing machine, called a stereolithography apparatus (SLA), converts liquid plastic into solid objects. The process was patented as a means of rapid prototyping in 1986 by Charles Hull, co-founder of 3D Systems, Inc., a leader in the 3D printing industry.


3D printing is a very good example of the age we live in. In the past, it could conceivably take months to prototype a part; today you can do it in hours. If you can dream up a product, you can hold a working model in your hands two days later.




PolyJet & MultiJet

Post-processing is quite different for the two. PolyJet requires manual labor in the form of pressurized water to remove the wax supports. This is a tedious process for more complicated parts, but the shape of the part is retained provided it does not suffer damage in the process; more intricate shapes require dentist-style tools to clear the support. MJM instead uses an oven to melt the support material. While this hands-off approach is convenient, the part can deform from being heated in the oven, a problem that users commonly report. It is less of an issue for big, thick parts, which tend to be more resistant to deformation.
PolyJet technology is a powerful additive manufacturing method that produces smooth, accurate prototypes, parts and tooling. With 16-micron layer resolution and accuracy as high as 0.1 mm, it can produce thin walls and complex geometries using the widest range of materials.

How PolyJet 3D Printing Works

PolyJet 3D printing is similar to inkjet printing, but instead of jetting drops of ink onto paper, PolyJet 3D Printers jet layers of curable liquid photopolymer onto a build tray.

The process is simple:

Pre-processing: Build-preparation software automatically calculates the placement of photopolymers and support material from a 3D CAD file.

Production: The 3D printer jets and instantly UV-cures tiny droplets of liquid photopolymer. Fine layers accumulate on the build tray to create a precise 3D model or part. Where overhangs or complex shapes require support, the 3D printer jets a removable gel-like support material.

Support removal: The user easily removes the support materials by hand or with water. Models and parts are ready to handle and use right out of the 3D printer, with no post-curing needed.

PolyJet 3D Printing Benefits

PolyJet 3D Printing technology offers many advantages for rapid tooling and prototyping, and even production parts including astonishingly fine detail, smooth surfaces, speed and precision.
  • Create smooth, detailed prototypes that convey final-product aesthetics.
  • Produce short-run manufacturing tools, jigs and assembly fixtures.
  • Produce complex shapes, intricate details and smooth surfaces.
  • Incorporate color and diverse material properties into one model with the greatest material versatility available.
See more at: http://www.stratasys.com/3d-printers/technologies/polyjet-technology

Digital Light Processing (DLP)

Digital Light Processing (DLP) is a process in additive manufacturing, also known as 3D printing and stereolithography, which takes a design created in 3D modeling software and uses DLP technology to print a 3D object.

How Digital Light Processing Works in 3D Printing
In this process, once the 3D model is sent to the printer, a vat of liquid polymer is exposed to light from a DLP projector under safelight conditions. The DLP projector displays the image of one layer of the 3D model onto the liquid polymer; the exposed polymer hardens, the build plate moves down, and the fresh liquid polymer is once more exposed to light. The process is repeated until the 3D model is complete and the vat is drained of liquid, revealing the solidified model. DLP 3D printing is faster than many other 3D printing methods and can print objects at higher resolution. The EnvisionTEC Ultra, MiiCraft High Resolution 3D printer, and Lunavast XG2 are examples of DLP printers.
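A simplified sketch of that layer loop in Python. The project() and move_z() functions are hypothetical stand-ins for the projector and Z-stage drivers, which differ per machine, and the exposure time is purely illustrative.

```python
import time

def project(image):
    """Stand-in for the projector driver: display one slice (None blanks the projector)."""
    print("projecting", "blank" if image is None else image)

def move_z(delta_mm):
    """Stand-in for the Z-stage driver: negative values move the build plate down."""
    print("move build plate by %.2f mm" % delta_mm)

def print_dlp(layers, exposure_s=0.1, layer_height_mm=0.05):
    """Expose each slice, harden it, then lower the plate by one layer height."""
    for image in layers:
        project(image)              # show the slice on the vat of resin
        time.sleep(exposure_s)      # let the exposed polymer harden
        project(None)               # blank the projector
        move_z(-layer_height_mm)    # plate moves down so fresh resin covers the part

print_dlp(["slice-001.png", "slice-002.png"])
```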

History of Digital Light Processing

Larry Hornbeck of Texas Instruments created the technology for Digital Light Processing in 1987. DLP is used for projectors and uses digital micromirrors laid out in a matrix on a semiconductor chip called the Digital Micromirror Device. Each mirror represents a pixel in the image for display. Several applications use DLP technology including projectors, movie projectors, cell phones, and 3D printing.



Refered Source From:
http://www.mahalo.com/3d-printers/
http://3dprinting.com/what-is-3d-printing/
http://nicsu.up.nic.in/knowdesk/3D-Printing-Technology.pdf

Monday, September 7, 2015

cloud computing

Cloud computing Abstract:

Cloud computing is a highly scalable and cost-effective infrastructure for running HPC, enterprise and Web applications. However, the growing demand of Cloud infrastructure has drastically increased the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high operational cost, which reduces the profit margin of Cloud providers, but also leads to high carbon emissions which is not environmentally friendly. Hence, energy-efficient solutions are required to minimize the impact of Cloud computing on the environment. In order to design such solutions, deep analysis of Cloud is required with respect to their power efficiency.


History of Cloud computing

History has a funny way of repeating itself, or so they say. But it may come as some surprise to find this old cliché applies just as much to the history of computers as to wars, revolutions, and kings and queens. For the last three decades, one trend in computing has been loud and clear: big, centralized, mainframe systems have been "out"; personalized, power-to-the-people, do-it-yourself PCs have been "in." Before personal computers took off in the early 1980s, if your company needed sales or payroll figures calculating in a hurry, you'd most likely have bought in "data-processing" services from another company, with its own expensive computer systems, that specialized in number crunching; these days, you can do the job just as easily on your desktop with off-the-shelf software. Or can you? In a striking throwback to the 1970s, many companies are finding, once again, that buying in computer services makes more business sense than do-it-yourself. This new trend is called cloud computing and, not surprisingly, it's linked to the Internet's inexorable rise. 

What is the cloud?

Cloud computing is a general term for the delivery of hosted services over the Internet. Cloud computing enables companies to consume compute resources as a utility -- just like electricity -- rather than having to build and maintain computing infrastructures in-house. 

Cloud computing promises several attractive benefits for businesses and end users. Three of the main benefits of cloud computing include:

Self-service provisioning: End users can spin up computing resources for almost any type of workload on-demand.

Elasticity: Companies can scale up as computing needs increase and then scale down again as demands decrease.
Pay per use: Computing resources are measured at a granular level, allowing users to pay only for the resources and workloads they use.

Cloud computing services can be private, public or hybrid.


Private cloud services are delivered from a business' data center to internal users. This model offers versatility and convenience, while preserving management, control and security. Internal customers may or may not be billed for services through IT chargeback.


In the public cloud model, a third-party provider delivers the cloud service over the Internet. Public cloud services are sold on-demand, typically by the minute or the hour. Customers only pay for the CPU cycles, storage or bandwidth they consume. Leading public cloud providers include Amazon Web Services (AWS), Microsoft Azure, IBM/SoftLayer and Google Compute Engine.


Hybrid cloud is a combination of public cloud services and on-premises private cloud – with orchestration and automation between the two. Companies can run mission-critical workloads or sensitive applications on the private cloud while using the public cloud for bursty workloads that must scale on-demand. The goal of hybrid cloud is to create a unified, automated, scalable environment which takes advantage of all that a public cloud infrastructure can provide, while still maintaining control over mission-critical data.



Sharing and Storing Data

Cloud computing, in turn, refers to sharing resources, software, and information via a network, in this case the Internet. The information is stored on physical servers maintained and controlled by a cloud computing provider, such as Apple in the case of iCloud. As a user, you access your stored information on the cloud via the Internet.

By using cloud storage, you don’t have to store the information on your own hard drive. Instead, you can access it from any location and download it onto any device of your choice, including laptops, tablets, or smartphones. Moreover, you can also edit files, such as Word documents or PowerPoint presentations, simultaneously with other users, making it easier to work away from the office.





What makes cloud computing different?


It's managed

Most importantly, the service you use is provided by someone else and managed on your behalf. If you're using Google Documents, you don't have to worry about buying umpteen licenses for word-processing software or keeping them up-to-date. Nor do you have to worry about viruses that might affect your computer or about backing up the files you create. Google does all that for you. One basic principle of cloud computing is that you no longer need to worry how the service you're buying is provided: with Web-based services, you simply concentrate on whatever your job is and leave the problem of providing dependable computing to someone else.



It's "on-demand"


Cloud services are available on-demand and often bought on a "pay-as-you-go" or subscription basis. So you typically buy cloud computing the same way you'd buy electricity, telephone services, or Internet access from a utility company. Sometimes cloud computing is free or paid for in other ways (Hotmail is subsidized by advertising, for example). Just like electricity, you can buy as much or as little of a cloud computing service as you need from one day to the next. That's great if your needs vary unpredictably: it means you don't have to buy your own gigantic computer system and risk having it sit there doing nothing.



It's public or private


Now that we all have PCs on our desks, we're used to having complete control over our computer systems—and complete responsibility for them as well. Cloud computing changes all that. It comes in two basic flavors, public and private, which are the cloud equivalents of the Internet and Intranets. Web-based email and free services like the ones Google provides are the most familiar examples of public clouds. The world's biggest online retailer, Amazon, became the world's largest provider of public cloud computing in early 2006. When it found it was using only a fraction of its huge, global computing power, it started renting out its spare capacity over the Net through a new entity called Amazon Web Services. Private cloud computing works in much the same way, but you access the resources you use through secure network connections, much like an Intranet. Companies such as Amazon also let you use their publicly accessible cloud to make your own secure private cloud, known as a Virtual Private Cloud (VPC), using virtual private network (VPN) connections.



Types of cloud computing

IT people talk about three different kinds of cloud computing, where different services are being provided for you. Note that there's a certain amount of vagueness about how these things are defined and some overlap between them.

Infrastructure as a Service (IaaS) means you're buying access to raw computing hardware over the Net, such as servers or storage. Since you buy what you need and pay-as-you-go, this is often referred to as utility computing. Ordinary web hosting is a simple example of IaaS: you pay a monthly subscription or a per-megabyte/gigabyte fee to have a hosting company serve up files for your website from their servers.

Software as a Service (SaaS) means you use a complete application running on someone else's system. Web-based email and Google Documents are perhaps the best-known examples. Zoho is another well-known SaaS provider offering a variety of office applications online.


Platform as a Service (PaaS) means you develop applications using Web-based tools so they run on systems software and hardware provided by another company. So, for example, you might develop your own ecommerce website but have the whole thing, including the shopping cart, checkout, and payment mechanism running on a merchant's server. Force.com (from salesforce.com) and the Google App Engine are examples of PaaS.


What are the Advantages of Cloud Computing?


Worldwide Access. Cloud computing increases mobility, as you can access your documents from any device in any part of the world. For businesses, this means that employees can work from home or on business trips, without having to carry around documents. This increases productivity and allows faster exchange of information. Employees can also work on the same document without having to be in the same place.


More Storage. In the past, memory was limited by the particular device in question. If you ran out of memory, you would need a USB drive to back up your current device. Cloud computing provides increased storage, so you won't have to worry about running out of space on your hard drive.



Easy Set-Up. You can set up a cloud computing service in a matter of minutes. Adjusting your individual settings, such as choosing a password or selecting which devices you want to connect to the network, is similarly simple. After that, you can immediately start using the resources, software, or information in question.



Automatic Updates. The cloud computing provider is responsible for making sure that updates are available – you just have to download them. This saves you time, and furthermore, you don’t need to be an expert to update your device; the cloud computing provider will automatically notify you and provide you with instructions.


Reduced Cost. Cloud computing is often inexpensive. The software is already installed online, so you won’t need to install it yourself. There are numerous cloud computing applications available for free, such as Dropbox, and increasing storage size and memory is affordable. If you need to pay for a cloud computing service, it is paid for incrementally on a monthly or yearly basis. By choosing a plan that has no contract, you can terminate your use of the services at any time; therefore, you only pay for the services when you need them.

What are the Disadvantages of Cloud Computing?


Security. When using a cloud computing service, you are essentially handing over your data to a third party. The fact that the provider, as well as users from all over the world, access the same servers can cause a security issue. Companies handling confidential information might be particularly concerned about using cloud computing, as data could possibly be harmed by viruses and other malware. That said, some services like Google Cloud Connect come with customizable spam filtering, email encryption, and SSL enforcement for secure HTTPS access, among other security measures.

Privacy. Cloud computing comes with the risk that unauthorized users might access your information. To protect against this happening, cloud computing services offer password protection and operate on secure servers with data encryption technology.

Loss of Control. With cloud computing, the provider controls many aspects of the service. This includes not only how much you have to pay to use it, but also what information you can store, where you can access it from, and many other factors. You depend on the provider for updates and backups; if for some reason their server ceases to operate, you run the risk of losing all your information.

Internet Reliance. While Internet access is increasingly widespread, it is not available everywhere just yet. If the area that you are in doesn’t have Internet access, you won’t be able to open any of the documents you have stored in the cloud. 


Read more at http://www.moneycrashers.com/cloud-computing-basics/

Reference Site: