Friday, July 27, 2012

Free-space optical


Abstract
Free space optical communications (FSOC) is a method by which one transmits a modulated beam of light through the atmosphere for broadband applications. Fundamental limitations of FSOC arise from the environment through which the light propagates. This work addresses transmitted light beam dispersion (spatial, angular, and temporal dispersion) in FSOC operating as a ground-to-air link when clouds exist along the communications channel. Light signals (photons) transmitted through clouds interact with the cloud particles. Photon–particle interaction causes dispersion of the light signals, which has significant effects on signal attenuation and pulse spread. The correlation between spatial and angular dispersion is investigated as well, since it plays an important role in receiver design. Moreover, the paper indicates that temporal dispersion (pulse spread) and energy loss strongly depend on the aperture size of the receiver, the field-of-view (FOV), and the position of the receiver relative to the optical axis of the transmitter.
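To illustrate the mechanism behind temporal dispersion, here is a minimal Monte Carlo sketch in Python of photons random-walking through a cloud layer. The cloud thickness, mean free path, photon count and the isotropic-scattering assumption are illustrative choices only, not values taken from the work described above.

# Illustrative Monte Carlo sketch of pulse spread caused by photon scattering
# in a cloud layer. All parameters are hypothetical.
import random

C = 3.0e8                  # speed of light, m/s
CLOUD_THICKNESS = 300.0    # assumed cloud thickness, m
MEAN_FREE_PATH = 30.0      # assumed photon mean free path, m
N_PHOTONS = 20000

def transit_path_length():
    """Random-walk one photon through the slab; return its total path length
    if it emerges on the receiver side, or None if it is scattered back out."""
    z, total, mu = 0.0, 0.0, 1.0   # depth, path travelled, direction cosine
    while True:
        step = random.expovariate(1.0 / MEAN_FREE_PATH)
        if mu > 0 and z + mu * step >= CLOUD_THICKNESS:
            return total + (CLOUD_THICKNESS - z) / mu   # exits toward receiver
        if mu < 0 and z + mu * step <= 0.0:
            return None                                 # lost back toward transmitter
        z += mu * step
        total += step
        mu = random.uniform(-1.0, 1.0)  # isotropic scattering (a simplification)

delays_ns = sorted((p - CLOUD_THICKNESS) / C * 1e9
                   for p in (transit_path_length() for _ in range(N_PHOTONS))
                   if p is not None)
print("transmitted photons:", len(delays_ns), "of", N_PHOTONS)
print("median excess delay (ns):", round(delays_ns[len(delays_ns) // 2], 1))
print("95th-percentile excess delay (ns):", round(delays_ns[int(0.95 * len(delays_ns))], 1))

Longer multipath trajectories arrive later than the ballistic ones, which is the pulse spread the abstract refers to, and is consistent with its point that pulse spread and collected energy depend on the receiver's field-of-view.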

 Refer:


Google Fiber



Google Fiber starts with a connection speed 100 times faster than today's average broadband. Instant downloads.



Crystal clear HD. And endless possibilities. It's not just TV. And it's not just Internet. It's Google Fiber.

Follow for more:
https://www.facebook.com/fiber

https://fiber.google.com/about/

http://www.webpronews.com/google-fiber-is-more-than-just-fast-internet-2012-07


Google Fiber is a project whose goal is to build a fiber optic communications infrastructure.

Google Fiber connections will come into metropolitan areas through multiple aggregators. From these points, fiber optic cables will branch out and run into neighborhoods and individual residences, providing FTTH (fiber to the home) service. Much of the cable will be strung on new and existing utility poles; some of it will be buried. The system is expected to provide Internet connection speeds of up to 1 Gbps (one billion, or 10^9, bits per second) to end users, both downstream (downloading) and upstream (uploading).
http://whatis.techtarget.com/definition/Google-Fiber



Google on Thursday detailed its high-speed Internet network called Google Fiber, which runs 100 times faster than today’s average broadband connection.

The search engine giant is bringing the ultra-high speeds first to Kansas City, Kan. and Kansas City, Mo.

“No more buffering. No more loading. No more waiting,” the company notes on its blog. “Imagine: instantaneous sharing; truly global education; medical appointments with 3D imaging; even new industries that we haven’t even dreamed of, powered by a gig.”

To get things started, the company divided Kansas City into small communities called “fiberhoods.” Each fiberhood needs a large share of its residents to pre-register to get the service. The communities with the highest pre-registration percentages will be among the first to get Google Fiber. Households in those communities can register for the service over the next six weeks.

Households in fiberhoods that qualify will be able to select from various subscription packages. Internet will cost $70 a month, while Internet along with television will cost about $120. According to the New York Times, a Nexus 7 tablet will come with the TV service package and serve as a remote.

“It’s easy to forget how revolutionary high-speed Internet access was in the 1990s,” the company said. “Not only did broadband kill the screeching sound of dial-up, it also spurred innovation, helping to create amazing new services as well as new job opportunities for many thousands of Americans. But today the Internet is not as fast as it should be.”

Google noted that the average Internet speed in the U.S. is only 5.8 megabits per second (Mbps), only a modest improvement over the speeds residential broadband first made available 16 years ago.
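A quick back-of-the-envelope comparison of those two figures shows why the difference matters; the 4 GB file size below is just an example:

# Rough download-time comparison between the 5.8 Mbps U.S. average quoted
# above and Google Fiber's 1 Gbps. The 4 GB file size is only an example.
FILE_SIZE_GB = 4.0
file_bits = FILE_SIZE_GB * 8e9     # gigabytes -> bits (decimal units)

for name, mbps in [("average U.S. broadband", 5.8), ("Google Fiber", 1000.0)]:
    seconds = file_bits / (mbps * 1e6)
    print(f"{name:24s} {seconds / 60:6.1f} minutes")

# average U.S. broadband   ~92 minutes;  Google Fiber   ~0.5 minutes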

“Access speeds have simply not kept pace with the phenomenal increases in computing power and storage capacity that’s spurred innovation over the last decade, and that’s a challenge we’re excited to work on,” it said.

The news comes as the White House recently launched a public-private partnership called US Ignite to build ultra high-speed broadband networks in communities around the U.S.

http://mashable.com/2012/07/26/google-fiber/

Tuesday, July 24, 2012

Goal-line technology


Abstract  

Goal-line technology PPT 1





This report will examine technology’s influence throughout the sporting world and its current prominence in sporting matches and events. It will analyse current technological officiating methods, concentrating on their level of success and how these could be imitated in football, using valued perspectives both for and against technological involvement in football. The paper will acknowledge each side of the argument in detail, deciphering the factors that cause such strong opinions to be held around the debate over goal-line technology, or indeed the lack of it. The opinions of those who are involved in and will be affected by such a change in the world’s most popular game will be discussed in conjunction with the vast list of questions and issues surrounding the debate.

Factors involving the various technologies available or in current development will be discussed, as well as the way the politics that many believe are the sole source of football’s lack of technological input affect companies’ and institutes’ research and development of potential goal-line technology. The head of the University of Loughborough's sporting development institute, Professor Mike Caine, will speak of his personal stance on the controversy that has ignited the goal-line technology debate, in a phone interview conducted at the culmination of this dissertation.

The dissertation will conclude in successfully arguing for the implementation of goal line technology into the sport of football via video replay.

What is goal line technology?

Goal-line technology is a technology being investigated for use in football. It has come into the spotlight because of recent incidents in which the ball has crossed the line but was not noticed by the referee, so the goal was not given. When the ball crosses the line by only a couple of inches before being hoofed clear by a defender, it can be difficult for the referee to see. Examples include:

Roy Carroll famously carried the ball over the line after a 50-yard shot by Pedro Mendes in 2005, with the score at 0-0 between Manchester United and Tottenham - the goal would have given Tottenham the win

Manchester United's Ryan Giggs slid the ball in during extra time of the 2007 FA Cup Final, where Chelsea's Petr Cech 'saved' the ball about a foot inside the goal line. The goal was not given and Chelsea went on to win the game. The goal would have put Manchester United 1-0 up with not much time left in the final

Perhaps the most famous of all is Geoff Hurst's goal in the 1966 World Cup Final, which clearly did not cross the line and led to England winning the World Cup

The technology involves embedding the football with a microchip and placing sensors on the goal line, so that when the ball passes the goal line, the microchip sends a signal to the referee and he knows it's a goal.


Goal-line technology – Getting it right

August 2010
As the drone of the vuvuzela fades and the world recovers from the 2010 FIFA World Cup™ extravaganza in South Africa, one issue that will be on the lips of many a football fan around the world is whether goal-line technology has a place in the “beautiful game”. England player Frank Lampard’s disallowed goal against Germany in Bloemfontein on June 28 and various other controversial refereeing decisions at the FIFA 2010 World Cup™ are fuelling a long-standing debate about whether to introduce technology that can determine when a ball has crossed the goal line. The question officials have to answer, especially when a ball hits the cross bar and bounces down, is on which side of the line did the ball land? This article takes a look at two of the technologies that are possible candidates to support referees in officiating football matches.
Technology is now widely used to support umpiring and refereeing decisions in a range of sports. In tennis, it is commonly used to verify line calls, in cricket to back up leg-before-wicket (LBW) decisions and in rugby to verify tries. But in the world of football, the jury is still out on whether technology has a role in adjudicating the game.

A turning point?

FIFA, the world’s football governing body, has resisted the introduction of goal-line technology for some years. In March 2010, the International Football Association Board (IFAB), responsible for establishing the laws of the game, voted not to use the technology as they felt it was not good for the game. Following a number of controversial refereeing decisions at the 2010 FIFA World Cup™, however, FIFA has agreed to revisit the issue. Just days before the end of the tournament, FIFA General Secretary Jerome Valcke said, “I would say that it is the final World Cup with the current refereeing system.” He added, “The game is so fast, the ball is flying so quickly, we have to help them [the referees].”
Goal-line incidents have been the subject of great controversy and debate for many years. The most famous goal-line decision concerned the third goal scored by England (Geoff Hurst) in the 1966 World Cup final against West Germany. While 44 years ago the technologies available were limited, today the technological landscape is vastly different offering a range of possibilities that can assist referees in their decisions.
The two main candidate technologies for use in football are those produced by U.K. company Hawk-Eye Innovations and German company Cairos Technologies AG.

Hawk-Eye: Tracking balls in flight

The Hawk-Eye system (PCT application PCT/GB2000/004507), first developed in 1999 by Dr. Paul Hawkins, an expert in artificial intelligence and Managing Director of Hawk-Eye Innovations, makes it possible to track the trajectory of balls in flight with a high degree of accuracy. The system is based on the principle of triangulation, using the visual images and timing data provided by high-speed video cameras placed at six different locations around the area of play. This ensures that a goal can still be detected when players are huddled together at the goal mouth (at corners, for example). As long as the ball is 25 percent visible, Hawk-Eye can track it.
Images are processed by a bank of computers in real time and sent to a central computer programmed to analyze a predefined playing area according to the rules of the game. This information is used to determine whether a ball has crossed a line or other rules have been infringed. In each frame sent from each camera, the system identifies the cluster of pixels that corresponds to the image of the ball. It calculates for each frame the three-dimensional position of the ball by comparing its position at the same instant in time on at least two cameras placed in different locations. A succession of frames builds up a record of the path along which the ball has travelled. The system generates a graphic image of the ball’s path and the playing area in real time, and this information is readily available to judges, television viewers and coaching staff.
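The core geometric step can be sketched in a few lines of Python. This is only an illustration of the triangulation idea (finding the 3D point closest to the viewing rays of two or more cameras), not Hawk-Eye's actual algorithm, and the camera positions and ray directions below are invented:

# Simplified illustration of triangulating a ball position from two camera
# viewing rays (least-squares closest point to both rays). A real system
# derives the rays from calibrated pixel coordinates across six cameras; the
# camera origins and directions here are made up purely for demonstration.
import numpy as np

def triangulate(origins, directions):
    """Return the 3D point minimising the squared distance to every ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two hypothetical cameras looking at a ball sitting at roughly (0, 0, 0.11) m
cam_origins = [np.array([-5.0, 3.0, 2.5]), np.array([5.0, 3.0, 2.5])]
ball_true = np.array([0.0, 0.0, 0.11])
cam_directions = [ball_true - o for o in cam_origins]   # perfect, noise-free rays

print(triangulate(cam_origins, cam_directions))   # ~[0. 0. 0.11]

A real deployment would first convert each camera's pixel coordinates into viewing rays using its calibration, combine all six cameras, and handle noise and occlusion.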
The system is even more astute than regular TV replays. A ball travelling at 60mph (97kph) moves at one meter per video frame on standard broadcast cameras which operate at 25 frames per second. Hawk-Eye uses cameras that operate at 500 frames per second making it possible to detect if a ball has crossed the goal line even for a fraction of a second.
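As a quick check of the figures quoted above:

# Distance the ball travels between frames at the speed quoted above.
speed_mps = 60 * 0.44704          # 60 mph in metres per second (~26.8 m/s)

for fps in (25, 500):             # standard broadcast vs. high-speed camera
    print(f"{fps:4d} fps -> {speed_mps / fps * 100:5.1f} cm per frame")

#   25 fps -> 107.3 cm per frame (about one metre, as stated above)
#  500 fps ->   5.4 cm per frame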
The Hawk-Eye brand and simulation have been licensed to Codemasters, one of the oldest British video game developers, for use in sports video games and consoles.
Hawk-Eye was first used by U.K. broadcaster Channel 4 during a cricket Test Match between England and Pakistan at Lord’s Cricket Ground in May 2001. It is now regularly used by network broadcasters in many high-profile sporting events.
The International Cricket Council (ICC), the international governing body of cricket, first trialed Hawk-Eye in the 2008/2009 winter season to verify controversial LBW decisions. The umpire was able to look at what the ball actually did up to the point at which it hit the batsman but could not look at the predicted flight of the ball thereafter.
Hawk-Eye was first used in tennis at the 2006 Hopman Cup in Perth, Western Australia. Players were allowed to challenge point-ending line calls and have them reviewed by the referees using the technology. It has now become an integral part of the adjudication process in elite tennis tournaments.

“As a player, and now as a TV commentator, I always dreamed of the day when technology would take the accuracy of line calling to the next level. That day has now arrived.” Pam Shriver (TV commentator and former elite tennis player)

In the football stadium, Hawk-Eye’s development began in earnest in 2006 with trials first at Fulham Football Club (FC) and then at Reading FC. The system has been independently tested by the English Premier League and IFAB. The latter had stipulated that the technology must be accurate to within 5mm and provide the required information to the referee in less than 0.5 seconds. Hawk-Eye meets each of these conditions.

“We think [Hawk-Eye’s football system has] the right blend of simplicity and technology.” FA Premier League Spokesperson

In an open letter to FIFA’s President, Sepp Blatter, Dr. Hawkins says, “It is clear… that the technology fundamentally works and could be available for use within football if further in-stadia testing and development were permitted by IFAB and if there were decisive signals of intent to justify the investment in further testing.”

The Cairos System – A microchip in a match ball


Photo: Cairos Technologies A.G.
The second goal-line technology under consideration is that produced by German company Cairos Technologies AG in collaboration with Adidas. A number of international patent applications relating to this technology have been filed under the PCT.
The Cairos system involves embedding thin cables in the turf of the penalty area and behind the goal line. The electrical current that runs through the cables generates a magnetic field. A sensor suspended in the ball measures the magnetic fields as soon as the ball comes into contact with them and transmits data about the ball’s location to receivers located behind the goal that relay the data to a central computer. The computer then determines whether the ball has crossed the goal line. If so, a radio signal is transmitted to the referee’s watch within a split second.
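As a rough illustration of the decision step described above (and only that step: the goal-line coordinate, ball radius and data layout below are invented, and the real system's magnetic-field signal processing is proprietary), the logic amounts to checking whether the whole ball has passed the plane of the goal line and, if so, alerting the referee's watch:

# Toy sketch of the decision loop described above; all values are invented.
from dataclasses import dataclass

GOAL_LINE_Y = 0.0      # goal line at y = 0, pitch extends toward positive y
BALL_RADIUS = 0.11     # metres

@dataclass
class BallSample:
    timestamp: float   # seconds
    y: float           # ball-centre distance from the goal line (metres)

def is_goal(sample: BallSample) -> bool:
    """The whole of the ball must cross the plane of the goal line."""
    return sample.y < GOAL_LINE_Y - BALL_RADIUS

def notify_referee_watch(t):
    print(f"GOAL signalled to referee's watch at t = {t:.3f} s")

def monitor(samples):
    for s in samples:
        if is_goal(s):
            notify_referee_watch(s.timestamp)   # radio message within a split second
            return True
    return False

# Example: ball drifts over the line between successive sensor samples
monitor([BallSample(0.00, 0.30), BallSample(0.02, 0.05), BallSample(0.04, -0.12)])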
Development began in 2006 and was first tested at the 2007 FIFA Club World Cup™ in Japan where it performed perfectly. At that time, Cairos teamed up with Adidas who “developed the suspension system for the ball, so that it keeps our chip safe inside the ball even when you kick the ball very hard,” said Oliver Braun, Cairos’ Director of Marketing and Communications. Adidas produced the test balls and those used during the FIFA Club World Cup in Japan.
One of the main concerns of those against using the new technologies is that of cost. They believe the costs of installation would be prohibitive and would create a two-tier system in football. Mr. Braun, however, explained that “Cairos bears the costs for the installation and will only charge the associations a percentage of what they pay the four referees for a match.” As for Hawk-Eye, Dr. Hawkins, told Press Association Sport that his company would install its goal-line technology in every Premier League ground free of charge in return for rights to sell sponsorship around the system.

The verdict?

Only time will tell if the events of the past weeks prove to be a turning point in the use of these or similar technologies in the world of football. While the technologies are not 100 percent foolproof, they are proving to be a useful tool for enabling umpires to better adjudicate and verify inconclusive incidents and promote fair play. Whatever FIFA’s ultimate decision, it is clear that these technologies do have the potential to reduce human error and to make goal-line controversies a thing of the past.

So how will the goal-line technology work?
Two different types of goal-line technology have been approved by FIFA: Hawk-Eye and GoalRef.
Despite the name, Hawk-Eye will not operate in the same way that has been so successful in tennis and cricket. Players will not have a say in using it; there will be no challenge system.
Instead an encrypted radio signal will be sent to the referee’s wristwatch when a goal has been scored.
The process will take less than 0.5 seconds to complete and will rely upon six cameras, focused on each goal, to track the ball. The system will use triangulation to pinpoint the precise location of the ball before releasing a radio message.
The system has already been sampled in England. It was pioneered in the Hampshire Senior Cup final between Eastleigh FC and AFC Totton, and by the FA in England’s international encounter with Belgium three weeks later.
GoalRef is, on paper at least, a much simpler idea.
It will be dependent upon a microchip implanted in the ball, accompanied by low magnetic waves around the goal.
The system will detect any change in the magnetic field on or behind the goal line to assess whether a goal has been scored.
Like Hawk-Eye, the process will take less than a second.
The major concern with goal-line technology is that it will slow down the pace of the game. The third-umpire decision used in cricket and the video referee in rugby are clearly not compatible with the free-flowing nature of football.
Neither is the system used in tennis.
But, the two systems advocated work in an entirely different fashion. Both require less than a second to generate a response that is much more accurate than human judgement and both leave the final decision in the hands of the match officials.
Complaints from fans that it will deny them something to talk about in the pub are irrelevant. Football’s governing body has a duty of care to ensure that teams receive the fairest and highest quality of officiating.
The decision passed today by the IFAB has to be viewed as a major step forward in football’s desperate attempt to catch up with the officiating standards of rival sports.  

Monday, July 23, 2012

FeTRAM: A New Idea to Replace Flash Memory

A new type of RAM (Random Access Memory) is currently being developed by researchers. Thanks to the combination described below, the memory consumes far less power while offering much better speed.

FeTRAM, the ferroelectric transistor random access memory, is the result of combining silicon nanowires with a ferroelectric polymer. According to the authors at the Birck Nanotechnology Center (BNC) at Purdue University, thanks to this combination FeTRAM offers distinctive performance advantages over traditional RAM.
  

Ferroelectric materials have the ability to change their polarity according to the electric field applied to them. Researchers at the BNC used this property to build a ferroelectric transistor.

More reference documents:
FeTRAM PDF1
FeTRAM PDF2



ABSTRACT 

FETRAM. An Organic Ferroelectric Material Based Novel Random Access Memory Cell Saptarshi Das*†‡ and Joerg Appenzeller†‡
†Department of Electrical and Computer Engineering
and ‡Birck Nanotechnology Center, Purdue University
School of Electrical and Computer Engineering, Purdue University

Science and technology in the electronics area have always been driven by the development of materials with unique properties and their integration into novel device concepts with the ultimate goal to enable new functionalities in innovative circuit architectures. In particular, a shift in paradigm requires a synergistic approach that combines materials, devices and circuit aspects simultaneously. Here we report the experimental implementation of a novel nonvolatile memory cell that combines silicon nanowires with an organic ferroelectric polymer—PVDF-TrFE—into a new ferroelectric transistor architecture. Our new cell, the ferroelectric transistor random access memory (FeTRAM), exhibits similarities with state-of-the-art ferroelectric random access memories (FeRAMs) in that it utilizes a ferroelectric material to store information in a nonvolatile (NV) fashion but with the added advantage of allowing for nondestructive readout. This nondestructive readout is a result of information being stored in our cell using a ferroelectric transistor instead of a capacitor—the scheme commonly employed in conventional FeRAMs.

This diagram shows the layout for a new type of computer memory that could be faster than the existing commercial memory and use far less power than flash memory devices. The technology, called FeTRAM, combines silicon nanowires with a "ferroelectric" polymer, a material that switches polarity when electric fields are applied, making possible a new type of ferroelectric transistor. (Birck Nanotechnology Center, Purdue University)

WEST LAFAYETTE, Ind. - Researchers are developing a new type of computer memory that could be faster than the existing commercial memory and use far less power than flash memory devices.
The technology combines silicon nanowires with a "ferroelectric" polymer, a material that switches polarity when electric fields are applied, making possible a new type of ferroelectric transistor.
"It's in a very nascent stage," said doctoral student Saptarshi Das, who is working with Joerg Appenzeller, a professor of electrical and computer engineering and scientific director of nanoelectronics at Purdue's Birck Nanotechnology Center

The ferroelectric transistor's changing polarity is read as 0 or 1, an operation needed for digital circuits to store information in binary code consisting of sequences of ones and zeroes.
The new technology is called FeTRAM, for ferroelectric transistor random access memory.
"We've developed the theory and done the experiment and also showed how it works in a circuit," he said.
Findings are detailed in a research paper that appeared this month in Nano Letters, published by the American Chemical Society.

The FeTRAM technology has nonvolatile storage, meaning it stays in memory after the computer is turned off. The devices have the potential to use 99 percent less energy than flash memory, a non-volatile computer storage chip and the predominant form of memory in the commercial market.

"However, our present device consumes more power because it is still not properly scaled," Das said. "For future generations of FeTRAM technologies one of the main objectives will be to reduce the power dissipation. They might also be much faster than another form of computer memory called SRAM."
The FeTRAM technology fulfills the three basic functions of computer memory: to write information, read the information and hold it for a long period of time.

"You want to hold memory as long as possible, 10 to 20 years, and you should be able to read and write as many times as possible," Das said. "It should also be low power to keep your laptop from getting too hot. And it needs to scale, meaning you can pack many devices into a very small area. The use of silicon nanowires along with this ferroelectric polymer has been motivated by these requirements."
The new technology also is compatible with industry manufacturing processes for complementary metal oxide semiconductors, or CMOS, used to produce computer chips. It has the potential to replace conventional memory systems. 

A patent application has been filed for the concept.

The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion, but unlike FeRAMs, the new technology allows for nondestructive readout, meaning information can be read without losing it.

This nondestructive readout is possible by storing information using a ferroelectric transistor instead of a capacitor, which is used in conventional FeRAMs. 
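As a purely conceptual toy model in Python (not a circuit simulation, and not code from the paper), the difference between the two readout schemes can be sketched like this:

# Conceptual toy model only: it contrasts destructive readout (capacitor-style
# FeRAM, which must restore the state after every read) with nondestructive
# readout (transistor-style FeTRAM), as described in the text above.

class FeRAMCell:
    """1-bit cell storing polarization on a ferroelectric capacitor."""
    def __init__(self):
        self.polarization = 0
        self.writes = 0            # count physical write (switching) operations

    def write(self, bit):
        self.polarization = bit
        self.writes += 1

    def read(self):
        bit = self.polarization
        self.polarization = 0      # sensing the switching charge erases the state...
        self.write(bit)            # ...so the controller must immediately restore it
        return bit

class FeTRAMCell:
    """1-bit cell where the ferroelectric layer gates a transistor."""
    def __init__(self):
        self.polarization = 0
        self.writes = 0

    def write(self, bit):
        self.polarization = bit
        self.writes += 1

    def read(self):
        # The stored polarization sets the transistor's conductance, so the
        # state is sensed without being disturbed: no restore write is needed.
        return self.polarization

for cell in (FeRAMCell(), FeTRAMCell()):
    cell.write(1)
    values = [cell.read() for _ in range(3)]
    print(type(cell).__name__, values, "physical writes:", cell.writes)
# FeRAMCell  [1, 1, 1] physical writes: 4   (every read forces a restore)
# FeTRAMCell [1, 1, 1] physical writes: 1   (reads leave the cell untouched)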

This work was supported by the Nanotechnology Research Initiative (NRI) through Purdue's Network for Computational Nanotechnology (NCN), which is supported by the National Science Foundation.

Writer:  Emil Venere, 765-494-4709, venere@purdue.edu
 Sources:  Saptarshi Das, sdas@purdue.edu  
                    Joerg Appenzeller, 765 494-1076, appenzeller@purdue.edu
Note to Journalists: An electronic copy of the research paper is available from Emil Venere, 765-494-4709, venere@purdue.edu

Monday, July 16, 2012

cloud drive


Abstract:


A cloud drive is a Web-based service that provides storage space on a remote server.

Cloud drives, which are accessed over the Internet with client-side software, are useful for backing up files. A cloud drive provider may offer a limited amount of online storage space for free and additional storage space for a monthly or yearly fee. The name "cloud" is derived from the symbol for the Internet on flow charts.

Cloud drives make it possible for a small business or individual to store and sync documents and other electronic media without having to purchase or maintain external hard drives or file servers. Cloud drive services are recommended for backups of 1 terabyte (TB) or less. The service provider is responsible for maintaining the servers, ensuring availability and providing easy access to the stored data.

http://searchcio-midmarket.techtarget.com/definition/cloud-drive


Cloud Drive is an online storage drive


Cloud drive storage is the mounting of storage capacity provided by a cloud storage service so that it appears to the server as a normal drive letter. In this manner, the server can treat the cloud storage as if it were a drive on direct-attached storage or a shared storage filer, so files can be easily saved to and restored from the cloud.
This practice makes it easy for applications to access the cloud storage -- no middleware or special cloud storage APIs are required; the application just needs to know what drive letter it should direct its requests to.
The term “cloud drive” has been popularized in part by Amazon, which offers the Amazon Cloud Drive cloud storage service but many other services offer the same interface and access to cloud storage.



1.     Advantages of Cloud Data Storage


Storing extremely large volumes of information on a local area network (LAN) is expensive.  High capacity electronic data storage devices like file servers, Storage Area Networks (SAN) and Network Attached Storage (NAS) provide high performance, high availability data storage accessible via industry standard interfaces.  However, electronic data storage devices have many drawbacks, including that they are costly to purchase, have limited lifetimes, require backup and recovery systems, have a physical presence requiring specific environmental conditions, require personnel to manage and consume considerable amounts of energy for both power and cooling.

Cloud data storage providers, such as Amazon S3, provide cheap, virtually unlimited electronic data storage in remotely hosted facilities.  Information stored with these providers is accessible via the Internet or a Wide Area Network (WAN).  Economies of scale enable providers to supply data storage more cheaply than the equivalent electronic data storage devices.

Cloud data storage has many advantages.  It’s cheap, doesn’t require installation, doesn’t need replacing, has backup and recovery systems, has no physical presence, requires no environmental conditions, requires no personnel and doesn’t require energy for power or cooling.  Cloud data storage however has several major drawbacks, including performance, availability, incompatible interfaces and lack of standards.


2.     Disadvantages of Cloud Data Storage


Performance of cloud data storage is limited by bandwidth.  Internet and WAN speeds are typically 10 to 100 times slower than LAN speeds.  For example, accessing a typical file on a LAN takes 1 second, accessing the same file in cloud data storage may take 10 to 100 seconds.  While consumers are used to slow internet downloads, they aren’t accustomed to waiting long periods of time for a document or spreadsheet to load.

Availability of cloud data storage is a serious issue.  Cloud data storage relies on network connectivity between the LAN and the cloud data storage provider.  Network connectivity can be affected by any number of issues, including global network disruptions, solar flares, severed underground cables and satellite damage.  Cloud data storage has many more points of failure and is not resilient to network outages.  A network outage means the cloud data storage is completely unavailable.

Cloud data storage providers use proprietary networking protocols that are often not compatible with normal file serving on the LAN.  Accessing cloud data storage often requires ad hoc programs to be created to bridge the differences in protocols.

The cloud data storage industry doesn’t have a common set of standard protocols.  This means that different interfaces need to be created to access different cloud data storage providers.  Swapping or choosing between providers is complicated as their protocols are incompatible.
The cloud drive data storage is small enough to be used on laptops while having enterprise class features that enable it to be scaled out to the largest organization.


3.     Cloud Drive Architecture


Cloud Drive is a gateway to cloud storage.  Cloud Drive supports many cloud data storage providers including Microsoft Azure, Amazon S3, Amazon EC2, Rackspace, EMC Atmos, Nirvanix, GoGrid, vcloud, Zetta, Scality, Dunkel, Mezeo, Box.net, Webdav and FTP.  Cloud Drive hides the complexity of the underlying protocols allowing you to deploy cloud storage as simply as deploying storage via an IP SAN.

Cloud Drive is like an IP SAN that never runs out of space.  As usage increases, Cloud Drive starts “offloading” data to the cloud data provider.  Cloud Drive caches and optimizes traffic to/from cloud storage dramatically increasing performance and availability while also reducing network traffic.


Computers on the LAN access data via the block based iSCSI protocol.  The storage service communicates via an internet connection with the cloud data storage provider.  When the iSCSI initiator saves data to the data storage server, it initially stores the data in the local cache.  Each data unit is uniquely located within the local cache and is flagged as either “online” in the local cache or “offline” in the cloud data storage provider.  All data units in the local cache are checked periodically for usage.  Least recently used (or “dormant”) data units are uploaded to the cloud data storage provider, flagged as “offline” and deleted from the local cache.
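A minimal Python sketch of that online/offline scheme is below. It assumes a generic block store; the cloud_put and cloud_get callables stand in for whatever provider API is used and are not a real SDK.

# Minimal sketch of the online/offline caching scheme described above.
from collections import OrderedDict

class CloudDriveCache:
    def __init__(self, max_local_blocks, cloud_put, cloud_get):
        self.local = OrderedDict()        # block_id -> data, kept in LRU order
        self.offline = set()              # block ids currently held only in the cloud
        self.max_local_blocks = max_local_blocks
        self.cloud_put, self.cloud_get = cloud_put, cloud_get

    def write(self, block_id, data):
        self.local[block_id] = data
        self.local.move_to_end(block_id)  # mark as most recently used
        self.offline.discard(block_id)
        self._offload_dormant()

    def read(self, block_id):
        if block_id in self.local:                # "online": served at LAN speed
            self.local.move_to_end(block_id)
            return self.local[block_id]
        data = self.cloud_get(block_id)           # "offline": fetch from the provider
        self.write(block_id, data)
        return data

    def _offload_dormant(self):
        # Upload least recently used ("dormant") blocks and drop them locally.
        while len(self.local) > self.max_local_blocks:
            block_id, data = self.local.popitem(last=False)
            self.cloud_put(block_id, data)
            self.offline.add(block_id)

# Example with an in-memory dict standing in for the cloud provider
cloud = {}
cache = CloudDriveCache(2, cloud.__setitem__, cloud.__getitem__)
for i in range(4):
    cache.write(i, f"block-{i}")
print(sorted(cache.local), sorted(cache.offline))   # [2, 3] local, [0, 1] offloaded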


4.     Cloud Drive Storage Service


The cloud drive storage service is simple to install and configure.  It can be installed on a range of hardware, from a laptop for personal use, to a server in the office, to a cluster of high-end 64-bit servers for the enterprise.  Once the service is installed and configured, many clients can connect to it using the iSCSI protocol.

The storage service reduces the data storage requirements while maintaining performance by moving the least recently used data to the cloud data storage provider as well as one or more of the data storage accelerators.  Cloud Drive accelerates performance by assuming that actual writes to data can happen anytime before a subsequent read to the same data.  Cloud Drive accelerates performance by scheduling this “delayed” write data to periods of low activity and by not downloading data from the cloud data storage provider when the “delayed” write data has wholly overwritten data stored in the cloud.  Cloud Drive further accelerates performance by assuming that delete operations can happen anytime after the data is downloaded.
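The "delayed write" behaviour can be sketched in the same spirit (again with placeholder cloud_get and cloud_put callables rather than a real provider API): writes are queued locally and flushed during quiet periods, and a read of a block that has been wholly overwritten locally never triggers a download from the cloud.

# Sketch of the delayed-write idea described above; provider calls are placeholders.
class DelayedWriteStore:
    def __init__(self, cloud_get, cloud_put):
        self.pending = {}                  # block_id -> data awaiting upload
        self.cloud_get, self.cloud_put = cloud_get, cloud_put

    def write(self, block_id, data):
        self.pending[block_id] = data      # defer the actual upload

    def read(self, block_id):
        if block_id in self.pending:       # block wholly overwritten locally:
            return self.pending[block_id]  # no download from the cloud needed
        return self.cloud_get(block_id)

    def flush(self):
        """Called during periods of low activity to push pending writes."""
        for block_id, data in self.pending.items():
            self.cloud_put(block_id, data)
        self.pending.clear()

# Example with an in-memory dict standing in for the cloud provider
cloud = {}
store = DelayedWriteStore(cloud.__getitem__, cloud.__setitem__)
store.write("b1", "new data")        # queued, nothing uploaded yet
print(store.read("b1"), cloud)       # 'new data' {}  (read served locally)
store.flush()
print(cloud)                         # {'b1': 'new data'}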


Figure 1 – Upload data to Cloud Storage




5.     Cloud Drive Optimizer


An optional component, cloud drive optimizer, improves performance, reduces bandwidth and reduces your data storage requirements.  The optimizer should be installed on all iSCSI clients using the cloud drive storage service.

The data storage optimizer has access to the virtual hard drive to optimize the data stored in the local cache.  The optimizer periodically reads virtual hard drive or virtual file share metadata including directories, filenames, permissions and attributes in order to maintain that data in the local cache.  In this way, the data storage optimizer also accelerates performance of the data storage server by preventing data other than file data from being identified as “dormant”.  The data storage optimizer also reduces storage requirements of the data storage server by periodically overwriting “all zeros” to unused parts of the virtual hard drive.  The data storage optimizer is also adapted to periodically run disk checking utilities against the virtual hard drive to prevent important internal file systems data structures from being marked as dormant.



6.     Cloud Drive Network Accelerator


An optional component, cloud drive network accelerator, improves the performance and availability of the Storage Service.  This component can be installed on all computers in the home, office or enterprise. 

The network accelerators allow the office to reclaim all those “small spaces” of data storage already available on the tens, hundreds or thousands of computers within the enterprise.  A typical office with 100 computers, each having on average 100 GB of space available, could potentially reclaim 100 x 100 GB = 10 TB of data storage space by reclaiming and consolidating this unused space. Network accelerators boost performance and improve resilience to slowness or unavailability of the cloud data storage providers by redundantly storing data uploaded to the cloud data storage provider on the local network in the already existing “unused spaces”.

Network accelerators work like a massive cache within the enterprise.  In the above example, the Storage Service’s local cache is complemented by a 10 TB onsite cache running throughout the enterprise.




7.     Cloud Drive Solution


Cloud Drive increases the apparent availability of the cloud data storage provider.  If the local cache satisfies 99% of requests for data without requiring the cloud data storage provider, the apparent availability of the cloud data storage provider is increased 100-fold, and 99% of data accesses occur at local network speeds rather than at the speed of the network connection to the cloud data storage provider.  Cloud Drive also manages the data formatting and communication with the cloud data storage provider while allowing seamless access to data using standard protocols such as iSCSI and NFS.   Further, Cloud Drive allows concurrent processing of read and write requests to different data as well as synchronized and serialized access to the same data.
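A back-of-the-envelope version of that claim, with latency figures that are illustrative assumptions rather than measurements:

# With a 99% local cache hit rate, only 1 request in 100 depends on the cloud,
# so the effective access time is dominated by LAN speed. Numbers are examples.
hit_rate = 0.99
lan_ms   = 1.0        # assumed local-cache access time
cloud_ms = 100.0      # assumed cloud access time over the WAN

effective_ms = hit_rate * lan_ms + (1 - hit_rate) * cloud_ms
print(f"effective access time: {effective_ms:.2f} ms "
      f"({cloud_ms / effective_ms:.0f}x faster than going to the cloud every time)")
# effective access time: 1.99 ms (~50x faster)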

Cloud Drive virtualizes data storage by allowing a limited amount of physical data storage to appear many times larger than it actually is.  Cloud Drive allows fast, expensive physical data storage to be supplemented by cheaper, slower remote data storage without incurring substantial performance degradation.  Cloud Drive also reduces the physical data storage requirements to a small fraction of the total storage requirements, while the rest of the data can be “offloaded” to slower, cheaper online cloud data storage providers.


Advantages And Disadvantages To Cloud Storage

It seems that everyone with a computer or mobile device spends a lot of time acquiring data and then trying to find a way to store it.
For some computer and mobile owners, finding enough storage space to hold all the data they’ve acquired is a real challenge. Some people invest in larger hard drives. Others prefer external storage devices like thumb drives or compact discs. A desperate few might delete entire folders worth of old files in order to make space for new information. But some are choosing to rely on a growing trend: cloud storage.
Cloud Storage: Cloud storage is the storage of your files and media on the “cloud”, i.e. someone else’s servers. There are a few services that offer this, the most famous being Dropbox (2GB+ free storage), but there are other options such as Minus (10GB+ free storage) and Box.net (which also offers free storage). There are also specialized services such as Google Music and Amazon Music for, you guessed it, music.
Advantages And Disadvantages To Cloud Storage
Advantages
  • No need for extra hardware (i.e. SD card, thumb drive)
  • Convenient
  • Automatic synchronization
  • Accessible from any location with Internet access
Disadvantages
    • Requires constant connection, either via data or wifi
    • Potentially slow over 3G or weak Wifi
    • Streaming movies is difficult, if not impossible (at least with Dropbox, etc.)
    • Eats up a limited data plan quickly
While cloud storage sounds like it has something to do with weather fronts and storm systems, it really refers to saving data to an off-site storage system maintained by a third party. Instead of storing information to your computer’s hard drive or other local storage device, you save it to a remote database. The Internet provides the connection between your computer and the database.
On the surface, cloud storage has several advantages over traditional data storage. For example, if you store your data on a cloud storage system, you’ll be able to get to that data from any location that has Internet access. You wouldn’t need to carry around a physical storage device or use the same computer to save and retrieve your information. With the right storage system, you could even allow other people to access the data, turning a personal project into a collaborative effort.
So cloud storage is convenient and offers more flexibility, but how does it work? Share your experience with us through your comments.


Read more: http://www.bench3.org/tech/advantages-and-disadvantages-to-cloud-storage/#ixzz20ndi0pRM

Cloud Drive can be used for:

  • Online backup and real-time protection of your data
  • Syncing and collecting your data in one place
  • Sharing documents, movies and photos
  • Streaming music to your mobile devices
  • Access to all your data in one place
  • Mobile access on the go through apps for iPhone, iPad and Android phones
  • Securing your data with encryption and password protection
  • Recovering previous versions of files, even if you accidentally deleted them



Sunday, July 15, 2012

Neural Networks


Abstract
The power and speed of modern digital computers are truly astounding. No human can ever hope to compute a million operations a second. However, there are some tasks for which even the most powerful computers cannot compete with the human brain, perhaps not even with the intelligence of an earthworm. Imagine the power of a machine which has the abilities of both computers and humans. It would be the most remarkable thing ever. And all humans can live happily ever after (or will they?). Before discussing the specifics of artificial neural nets, though, let us examine what makes real neural nets - brains - function the way they do. Perhaps the single most important concept in neural net research is the idea of connection strength.

Refer:
Neural-Networks Report
Neural-Networks-Ppt
neural-networks-ppt
Seminar Report on neural network and their applications

Neural Networks  [ppt]



Artificial Neural Networks (ANNs) are biologically inspired. Specifically, they borrow ideas from the manner in which the human brain works. The human brain is composed of special cells called neurons.  Estimates of the number of neurons in a human brain cover a wide range (up to 150 billion), and there are more than a hundred different kinds of neurons, separated into groups called networks. Each network contains several thousand neurons that are highly interconnected. Thus, the brain can be viewed as a collection of neural networks.

     
Today’s ANNs, whose application is referred to as neural computing, use a very limited set of concepts from biological neural systems. The goal is to simulate massive parallel processes that involve processing elements interconnected in a network architecture. The artificial neuron receives inputs analogous to the electrochemical impulses biological neurons receive from other neurons. The output of the artificial neuron corresponds to signals sent out from a biological neuron. These artificial signals can be changed, like the signals from the human brain. Neurons in an ANN receive information from other neurons or from external sources, transform or process the information, and pass it on to other neurons or as external outputs.
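To make this concrete, here is a minimal sketch of such an artificial neuron in Python: inputs are weighted by connection strengths, summed, and passed through an activation function. The weights, biases and input values are arbitrary example numbers.

# Minimal sketch of the artificial neuron just described: weighted inputs are
# summed (the "connection strengths" play the role of synapses) and passed
# through an activation function to produce the output signal.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))      # sigmoid activation

# Two neurons in one layer feeding a single output neuron
inputs = [0.5, -1.0, 0.25]
hidden = [
    neuron(inputs, [0.8, -0.4, 0.3], bias=0.1),
    neuron(inputs, [-0.2, 0.9, 0.5], bias=-0.3),
]
output = neuron(hidden, [1.2, -0.7], bias=0.05)
print(f"hidden activations: {hidden}, network output: {output:.3f}")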
Artificial Neural Networks