Monday, January 14, 2013

Blue Brain

Abstract
This is a modest review of a couple of papers devoted to an examination of neural specificity and invariance among microcircuits in neocortical columns and minicolumns, and the loci of experience-dependent plasticity and learning within and between these microcircuits. It focuses on somatosensory cortex, but the appendix also covers visual research, including some of the early work on these issues, from Stratton to Sperry, Hebb, Mountcastle, Hubel and Wiesel, and T. A. Woolsey and Van der Loos. It starts with a paper from Markram’s group on the invariant properties of the microcircuits, and conjectures on a segue from Hebb to Edelman’s Darwinian brain theories. Next I review Petersen’s paper on neurophysiological studies of experience-dependent plasticity in these systems. Some network theory has proved essential to research in these areas, as well as some nifty technical advances in neurophysiological stimulation and recording.

Introduction
The human brain is the most valuable creation of God. Man is called intelligent because of the brain. Today we are developed because we can think, which other animals cannot do. But we lose the knowledge held in a brain when the body is destroyed after death. That knowledge might have been used for the development of human society. What would happen if we could create a brain and upload the contents of a natural brain into it?
         “Blue Brain” is the name of the world’s first virtual brain: a machine that can function as a human brain. Today scientists are researching how to create an artificial brain that can think, respond, take decisions, and keep anything in memory. The main aim is to upload a human brain into a machine, so that a man can think and take decisions without any effort. After the death of the body, the virtual brain will act as the man. So even after the death of a person, we will not lose the knowledge, intelligence, personality, feelings and memories of that man, and they can still be used for the development of human society. No one has ever fully understood the complexity of the human brain; it is more complex than any circuitry in the world. So the question may arise: “Is it really possible to create a human brain?” The answer is yes, because whatever man has created, he has always followed nature. When man did not have a device called the computer, it was a big question for all, but today it is possible thanks to technology. Technology is growing faster than everything else. IBM is now researching how to create a virtual brain, called “Blue Brain”. If successful, this would be the first virtual brain in the world.


What is Virtual Brain?
A virtual brain is an artificial brain: not the natural brain itself, but something that can act like it. It can think like a brain, take decisions based on past experience, and respond as the natural brain can. This is possible by using a supercomputer with a huge amount of storage capacity and processing power, together with an interface between the human brain and this artificial one. Through this interface, the data stored in the natural brain can be uploaded into the computer, so the knowledge and intelligence of anyone can be kept and used forever, even after the death of the person.


Why we need virtual brain?
Today we are developed because of our intelligence. Intelligence is an inborn quality that cannot be created. Some people have this quality and can think to an extent that others cannot reach. Human society always has need of such intelligence and such intelligent brains. But that intelligence is lost along with the body after death. The virtual brain is a solution: the brain and its intelligence will stay alive even after death.
          We often face difficulties in remembering things such as people's names, birthdays, the spellings of words, proper grammar, important dates, and historical facts. In a busy life, everyone wants to be relaxed. Can we not use a machine to assist with all of this? A virtual brain may be the solution. What if we uploaded ourselves into a computer, and were simply aware of a computer, or even lived in a computer as a program?


How it is possible?
              First, it is helpful to describe the basic ways in which a person might be uploaded into a computer. Raymond Kurzweil recently provided an interesting paper on this topic. In it, he describes both invasive and noninvasive techniques. The most promising is the use of very small robots, or nanobots. These robots would be small enough to travel through our circulatory system. Traveling into the spine and brain, they would be able to monitor the activity and structure of our central nervous system. They would provide an interface with computers that is as close to our mind as possible while we still reside in our biological form. Nanobots could also carefully scan the structure of our brain, providing a complete readout of the connections between each neuron, and record the current state of the brain. This information, when entered into a computer, could then continue to function as us. All that is required is a computer with large enough storage space and processing power. But is the pattern and state of neuron connections in our brain truly all that makes up our conscious selves? Many people firmly believe that we possess a soul, while some very technical people believe that quantum forces contribute to our awareness.
                    For now, though, we have to think technically. Note, however, that we need not know how the brain actually functions in order to transfer it to a computer; we need only know the media and its contents. The actual mystery of how we achieved consciousness in the first place, or how we maintain it, is a separate discussion.


Some people think that humans telling machines what to do is totally backwards. Henry Markram, director of the Blue Brain Project, says we are ten years away from a functional artificial human brain. The Blue Brain Project was launched in 2005 and aims to reverse-engineer the mammalian brain from laboratory data.
Reconstructing the brain piece by piece and building a virtual brain in a supercomputer—these are some of the goals of the Blue Brain Project. The virtual brain will be an exceptional tool giving neuroscientists a new understanding of the brain and a better understanding of neurological diseases. The Blue Brain project began in 2005 with an agreement between the EPFL and IBM, which supplied the BlueGene/L supercomputer acquired by EPFL to build the virtual brain. The human brain is an immensely powerful, energy efficient, self-learning, self-repairing computer. If we could understand and mimic the way it works, we could revolutionize information technology, medicine and society. To do so we have to bring together everything we know and everything we can learn about the inner workings of the brain’s molecules, cells and circuits. With this goal in mind, the Blue Brain team has recently come together with 12 other European and international partners to propose the Human Brain Project (HBP), a candidate for funding under the EU’s FET Flagship program.
The computing power needed is considerable: each simulated neuron requires the equivalent of a laptop computer, and a model of the whole brain would have billions of neurons. Supercomputing technology is rapidly approaching a level where simulating the whole brain becomes a concrete possibility. IBM’s Blue Gene supercomputer allows a quantum leap in the level of detail at which the brain can be modelled. Henry Markram’s team has perfected a facility that can create realistic models of one of the brain’s essential building blocks. This process is entirely data-driven and executed essentially automatically on the supercomputer, and the generated models show behavior already observed in years of neuroscientific experiments. These models will be basic building blocks for larger-scale models leading towards a complete virtual brain. Models of the brain will revolutionize information technology, allowing us to design computers, robots, sensors and other devices far more powerful, more intelligent and more energy-efficient than any we know today. Brain simulation will help us understand the root causes of brain diseases, diagnose them early, develop new treatments, and reduce reliance on animal testing. The project will also throw new light on questions human beings have been asking for more than two and a half thousand years. What does it mean to perceive, to think, to remember, to learn, to know, to decide? What does it mean to be conscious? In summary, the Human Brain Project has the potential to revolutionize technology, medicine, neuroscience, and society.
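To get a feel for the scale involved, here is a back-of-envelope sketch. The neuron count, the per-neuron compute (roughly “one laptop” per neuron, as the text puts it), and the supercomputer peak are all assumed, illustrative figures, not project data:

```python
# Back-of-envelope estimate of the compute needed for a whole-brain
# simulation. All figures are rough assumptions for illustration only.
NEURONS = 86e9              # commonly cited estimate of neurons in a human brain
FLOPS_PER_NEURON = 1e9      # assume ~1 GFLOPS per neuron ("one laptop" each)
BLUE_GENE_L_FLOPS = 360e12  # approximate peak of the largest BlueGene/L (assumed)

total_flops = NEURONS * FLOPS_PER_NEURON
print(f"Total compute: {total_flops:.1e} FLOPS")
print(f"BlueGene/L-class machines needed: {total_flops / BLUE_GENE_L_FLOPS:,.0f}")
```

Even with these crude numbers, the gap of several orders of magnitude between one supercomputer and a whole-brain model shows why the project treats full-brain simulation as a long-term goal rather than an immediate one.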



Sunday, January 13, 2013

Wireless Chargers (Inductive charging)

Abstract:

Wireless battery charging, or wireless inductive charging as it is also called, is a method for transferring electrical energy from a charger to a device without the need for a physical wire connection.
Wireless battery charging has many advantages in terms of convenience because users simply need to place the device requiring power onto a mat or other surface to allow the wireless charging to take place.


What Is Wireless Charging?


Situations often occur in which it is inconvenient to bring along a regular battery charger for many popular electronic items, such as cell phones, laptop computers, and portable music devices. Solving this issue is what the concept of wireless charging strives to do. As many might guess from the very name, this type of technology allows myriad electronics to charge without having wires attached. Another aspect of the idea that is often convenient for many people is the fact that most wireless chargers are able to charge nearly any device, not just a specific kind. This means that only one charger usually is needed to charge a cell phone, MP3 player, laptop, or other small mechanism that runs on electricity.
Though wireless charging is convenient for many, the majority of people do not understand the concept, which usually involves inductive charging. The main power behind this kind of device is electromagnetic induction: rather than sending current down a wire like most wired products do, the charger creates a magnetic field. A transmitter coil in the charging surface generates this field, and a thin receiver coil in the device picks it up, so that there are two coils instead of the usual direct connection. The small gap between the two coils effectively forms an electrical transformer, allowing the device to obtain power without being plugged into an outlet.

How does wireless charging work?

To appreciate the practical difficulties in transmitting power without wires, it helps to know a little about how electricity works. When an electrical current flows down a conductor, it generates a magnetic field, orientated at right angles to the conductor.

By creating a coil, the magnetic field is amplified and if a second coil is placed within the magnetic field of the first, then an electric current will be generated in the second coil, a process known as induction.

However, because the strength of the magnetic field is proportional to the current running through the coil, and because induction over distance is a fairly inefficient transfer method, the two coils have to be placed in close proximity.

In an electric toothbrush, for instance, the two coils are less than 10mm apart. In order to increase the distance between the coils, both the size of the coils and the amount of current flowing through them have to be significantly increased, although because the magnetic fields radiate in all directions, efficiency decreases.
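The induction described above can be put into numbers with Faraday's law: the voltage induced in the receiver coil grows with the number of turns, the coil area, the field strength, and how fast the field oscillates. A minimal sketch, with assumed, illustrative component values (the coil size, field strength, and frequency are not from the text):

```python
import math

# Peak EMF induced in a receiver coil by a sinusoidal magnetic field,
# from Faraday's law: emf = N * dPhi/dt, with flux Phi = B * A.
# All component values below are illustrative assumptions.
N_TURNS = 50          # turns on the receiver coil
COIL_RADIUS = 0.02    # m (a 2 cm radius coil)
B_PEAK = 1e-3         # T, assumed peak field at the receiver
FREQ = 100e3          # Hz (inductive chargers typically run at ~100-200 kHz)

area = math.pi * COIL_RADIUS**2             # coil area in m^2
omega = 2 * math.pi * FREQ                  # angular frequency in rad/s
emf_peak = N_TURNS * B_PEAK * area * omega  # peak induced EMF in volts

print(f"Peak induced EMF: {emf_peak:.1f} V")
```

Note how the induced voltage scales linearly with frequency, which is one reason practical chargers operate at tens to hundreds of kilohertz rather than at mains frequency.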

Is increased resonance the answer?

One way to increase the efficiency and distance over which induction can occur is to use resonance. Every object has a frequency at which it will naturally vibrate, called its resonant frequency. Researchers at MIT discovered that if you enable the coils and fields to resonate at the same frequency, the efficiency of the induction increases, and they were able to demonstrate this principle by using resonating coils to power a light bulb over a distance of two meters.

With this sort of distance, the idea of being able to walk into a room and whatever gadgets you are carrying are immediately able to receive power from a transmitter buried in the wall or ceiling starts to gain some traction. Unfortunately, even though MIT demonstrated the principle nearly six years ago, the technology is still very much in the development stage.
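The shared frequency the MIT approach relies on is the resonant frequency of each coil's LC circuit, f = 1 / (2π√(LC)); transmitter and receiver are tuned to match. A quick sketch, with inductance and capacitance values assumed purely for illustration:

```python
import math

# Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C)).
# For efficient resonant transfer, transmitter and receiver coils are
# tuned to the same frequency. Component values are illustrative.
L = 24e-6   # coil inductance in henries (24 uH, assumed)
C = 100e-9  # tuning capacitance in farads (100 nF, assumed)

f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"Resonant frequency: {f_res / 1e3:.1f} kHz")
```

Tuning both sides to the same frequency like this is what lets the coupled coils exchange energy efficiently even when the raw inductive coupling between them is weak.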
Intel has also demonstrated resonant power transmission, but as can be seen the coil size needed for a light bulb is huge

Using larger induction coils is one way in which to increase transmission distance. In the MIT experiment, for instance, the coils were 60cm in diameter, but only about 45 per cent of the power was transmitted at two meters. With portable electronics, their size and the amount of free space within the casing is a major limiting factor.

An electric toothbrush is only used for a few minutes a day and spends the rest of the time being charged, so can have quite small coils. However, a smartphone has a very high capacity battery and using a standard charger, needs to achieve full charge in one or two hours.

Charging up vehicles

One area where the size of the coil doesn't really matter is in vehicles. Using specially built inductive roadways, trials have been run which enable an electric car or bus to receive power as it travels along the road. Wireless charging points built into bus stops and parking bays have also been successfully used to recharge on-board batteries, but it's still less efficient than physically plugging a cable in.

WiTricity is one company that markets wireless charging solutions for the automotive market. The company has also demonstrated its inductive resonance technology wirelessly powering a television as well as a number of mobile phones, is supplying its technology to OEMs and believes the first products should be on the market this year.
WiTricity's system enables electric vehicles to be charged without wires, delivering 3.3kW of power

There are already some products on the market, such as Duracell's Powermat, which doesn't use the resonance technique and so is much shorter range. In addition, devices such as mobile phones don't yet have induction coils built in and so have to be fitted with special cases containing the necessary circuitry.

However, if there's one sign that a technology is becoming more mainstream, it's when car manufacturers start to adopt it. Chrysler has announced that in 2013, its Dodge Dart car will have the option of a wireless charging bin. As devices to be charged will require special sleeves or cases, it's not clear if this is a bespoke Powermat solution or something else, but the option to do away with a wired cigarette lighter adapter is certainly a welcome move. The $200 price tag may be a bit much to swallow though!
Chrysler will be adding an optional wireless charging system to the Dodge Dart in 2013
It seems that the technology still has quite a way to go before it becomes an attractive proposition. Duracell has been refining the Powermat technology and has a vision where tables in bars or cafes have embedded wireless charging points. A lot of people leave their phone on the table when they are out socialising, so why not top up the battery while you're at it? However, as long as you need to add a case or sleeve to your device, the appeal of wireless charging is limited.

Ironically, some phones, such as the Samsung S3, already contain part of the technology needed for wireless charging, built into the battery itself. Near Field Communication (NFC), a relative of RFID, uses very similar principles: a coil in the phone's battery induces a current in the chip that you are trying to read, which then has enough power to transmit back the required information.

The history of wireless charging

Wireless charging isn’t a new concept. In fact, it’s been around since the 19th century, when physicist Nikola Tesla came up with the idea of wireless power transfer. Intel demoed it back in 2007, with the idea being that if you can do it safely and efficiently, it would work for the majority of the devices we use every day. A laptop that people could just keep using and never run out of juice, running directly off wireless power or charging wirelessly? It sounds like something straight out of a sci-fi movie, but if the goal is to eventually have a completely wireless experience, these are just some of the scenarios we could be looking at.
Wireless charging is seen by many as one of the biggest possible advancements we could have for personal computing in this century. Cables – messy, unwieldy, and with a predilection to getting lost at the most inconvenient times – could be a thing of the past in just the next couple of years. In addition to computers, wireless charging could make its way to the automotive industry with electric vehicles, making the charging process virtually automatic.

Two kinds of wireless charging

There are two kinds of wireless charging technologies (WCT): magnetic induction and resonance charging. Basically, the difference is distance: magnetic induction requires that the receiver be in direct contact with the transmitter, or charging device; resonance charging requires that the receiver merely be placed near the transmitter for charging.
Eventually, the technology is aiming towards Ultrabooks coming pre-built with WCT detection software, enabling users to merely place their smartphones or tablets in the vicinity of their Ultrabooks and charge away (via near field communication, or NFC). This would be a “BE-BY” configuration, whereby two devices don't have to be touching in order to exchange energy, as opposed to a “BE-ON” configuration.
As debuted at IDF 2012, the Ultrabook transmitter recharging configuration will actually take up very little space (21 cm², i.e. 7 cm x 3 cm x 5 mm) within the form factor. On the receiver side, we're talking even smaller (5.6 cm²), so there's definitely no chance of our smartphones, tablets, or mice getting bigger all of a sudden.
Intel and IDT
Integrated Device Technology (IDT) will be developing and delivering integrated transmitter and receiver chipsets for Intel’s Wireless Charging Technology based on resonance charging technology, targeted for deployment within Ultrabooks, PCs, smartphones, and the plethora of other standalone devices (like Smart Watches) out there on the market.  
Now, this isn’t necessarily limited to inductive charging or smartphones/tablets on a charging mat usage; Intel is working with IDT, vendors (smartphones, printers, cameras, and much more), OEMs, and other partners to make WCT a completely non-touch-based reality for the devices we use every single day.  Intel is definitely putting its money on wireless charging, and plans to build the technology into Ultrabooks by 2013, implementing transmitters into these machines with receivers built within a range of devices using Intel’s own chips. 
Ultrabooks and WCT
As detailed by Intel execs this past week at IDF 2012, the battery life of Ultrabooks will be greatly increased with Intel’s upcoming Haswell processors.   Battery life will be essentially doubled, with battery life of up to ten hours for Ultrabooks, even more (12 hours or more) in the case of convertible Ultrabooks. Ultrabooks with Haswell configurations will also feature wireless charging and NFC capabilities, making that move to no cords even more of a reality.

Wireless battery charging basics

Wireless battery charging uses an inductive or magnetic field between two objects which are typically coils to transfer the energy from one to another. The energy is transferred from the energy source to the receiver where it is typically used to charge the battery in the device.
This makes wireless charging, or inductive charging, ideal for use with many portable devices such as mobile phones and other wireless applications. However, it has also found widespread use in products such as electric toothbrushes, where cordless operation is needed and where exposed connections would be very unwise and short-lived.
The system is essentially a flat form of transformer - flat because this makes it easier to fit into the equipment in which it is to be used. Many wireless battery charging systems are used in consumer items where small form factors are essential.
Wireless battery charging concept
The primary side of the transformer is connected to the energy supply that will typically be a mains power source, and the secondary side will be within the equipment where the charge is required.
In many applications the wireless battery charging system will consist of two flat coils. The power source is often contained within a pad or mat on which the appliance to be charged is placed.

Wireless battery charging advantages / disadvantages

As with any system, there are both advantages and disadvantages to wireless battery charging systems.

Advantages:
  • Convenience - it simply requires the appliance needing charging to be placed onto a charging area.
  • Reduced wear of plugs and sockets - as there is no physical connection, there are no issues with connector wear, etc. Physically the system is more robust than one using connectors.
  • Resilience to dirt - some applications operate in highly contaminated environments. As there are no connectors, the system is considerably more resilient to contamination.
  • Application in medical environments - using wireless charging, no connectors are required that may harbour bacteria, etc. This makes the solution far more applicable for medical instruments that may need to be battery powered.

Disadvantages:
  • Added complexity - the system requires more complicated circuitry to transfer the power across a wireless interface.
  • Added cost - as the system is more complicated than a traditional wired system, a wireless battery charger will be more expensive.
  • Reduced efficiency - there are losses in the wireless battery charging system: resistive losses in the coils, stray coupling, etc. However, typical efficiency levels of between 85 and 90% are normally achieved.
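The efficiency figures above translate directly into extra energy drawn from the wall. A rough comparison, where the battery capacity and the wired-charger efficiency are assumed, illustrative values (only the 85% figure comes from the quoted 85-90% range):

```python
# Energy drawn from the wall to fully charge a phone battery,
# wired vs wireless. Battery capacity and wired efficiency are
# illustrative assumptions; 85% is the lower wireless bound above.
BATTERY_WH = 10.0    # Wh, a typical smartphone battery (assumed)
WIRED_EFF = 0.95     # assumed wired-charger efficiency
WIRELESS_EFF = 0.85  # lower bound of the quoted 85-90% range

wired_wh = BATTERY_WH / WIRED_EFF
wireless_wh = BATTERY_WH / WIRELESS_EFF

print(f"Wired:    {wired_wh:.2f} Wh from the wall")
print(f"Wireless: {wireless_wh:.2f} Wh from the wall")
print(f"Extra per full charge: {wireless_wh - wired_wh:.2f} Wh")
```

On these assumptions the penalty is around a watt-hour per charge, which is small per device but adds up across millions of chargers.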

Wireless charging has now become a mainstream technology. Initially it was a novelty, but its applications and advantages have become widely recognised. It is anticipated that wireless battery charging will become very widespread, if not the most common method.
With standardised interfaces and techniques, only a single wireless battery charger will be required to charge a variety of items; no longer will a whole myriad of chargers be needed. Reliability and convenience will also improve, as it is far easier to place the item to be charged on the charging mat than to use a small connector.
Although the efficiency of wireless battery charging is lower than that of direct connections, the added intelligence could reduce the end-of-charge current, thereby reducing overall power consumption, as many normal chargers are left connected even when they are not charging.
The obvious advantage of wireless charging is the ability to place electronics on a wireless charger device, rather than carrying a cell phone charger, laptop charger, or other type of charger everywhere in case the batteries run out. Another less-known benefit of wireless charging is that such chargers can be placed near water when necessary. Because all the parts are enclosed, with no wires sticking out, some electric razors and toothbrushes come with wireless chargers for the sake of safety. Additionally, the majority of wireless chargers can sense how much power each type of electronic device needs, so batteries are not typically overcharged.

One disadvantage of charging electronics wirelessly is the typically higher cost compared to wired chargers. Getting the most efficient wireless charging usually means spending a lot of money on the latest charger; older wireless chargers are frequently slower at charging. They also often generate more heat than wired chargers, which can be considered a danger, despite the somewhat smaller chance of electric shock with wireless charging devices.


Saturday, January 5, 2013

Virtual Surgery

ABSTRACT
Rapid change is under way on several fronts in medicine and surgery. Advances in computing power have enabled continued growth in virtual reality, visualization, and simulation technologies. The ideal learning opportunities afforded by simulated and virtual environments have prompted their exploration as learning modalities for surgical education and training. Ongoing improvements in this technology suggest an important future role for virtual reality and simulation in medicine.

Introduction
Rapid change in most segments of society is occurring as a result of increasingly sophisticated, affordable and ubiquitous computing power. One clear example of this change is the internet, which provides interactive and instantaneous access to information that was scarcely conceivable only a few years ago. The same is true in the medical field: advances in instrumentation, visualization and monitoring have enabled continual growth, and the information revolution has enabled fundamental changes in this field.
Of the many disciplines arising from this new information era, virtual reality holds the greatest promise. The term virtual reality was coined by Jaron Lanier, founder of VPL Research, in the late 1980s. Virtual reality is defined as a human-computer interface that simulates realistic environments while enabling participant interaction, as a 3D digital world that accurately models an actual environment, or simply as cyberspace.

Virtual reality is just reaching the threshold at which we can begin using simulators in medicine the way the aviation industry has been using them for the past 50 years: to avoid errors.

In surgery, the life of the patient is of utmost importance, and the surgeon cannot experiment on the patient's body. VR provides a good tool for practising on the various complications that arise during surgery.

WHAT IS VIRTUAL SURGERY?
          Virtual surgery, in general, is a virtual reality technique for simulating a surgical procedure, which helps surgeons improve surgery plans and practice the surgical process on 3D models. The simulated surgery results can be evaluated before the surgery is carried out on the real patient, helping the surgeon gain a clear picture of the expected outcome. If the surgeon finds errors, he can correct them by repeating the surgical procedure as many times as needed, finalising the parameters for good surgical results. The surgeon can view the anatomy from a wide range of angles. This process, which cannot be carried out on a real patient, helps the surgeon refine the incision and cutting, gain experience, and therefore improve surgical skills.
The virtual surgery is based on a patient-specific model, so when the real surgery takes place, the surgeon is already familiar with all the specific operations to be performed.

VIRTUAL REALITY APPLICATIONS IN SURGERY
          The highly visual and interactive nature of virtual surgery has proven useful in understanding complex 3D structures and for training in visuospatial tasks. Virtual reality applications in surgery can be subdivided as follows:
1. Training and Education.
2. Surgical Planning.
3. Image Guidance.
4. Tele-surgery.

1. TRAINING AND EDUCATION
          The similarities between pilots' and surgeons' responsibilities are striking: both must be ready to manage potentially life-threatening situations in dynamic, unpredictable environments. The long and successful use of flight simulation in air and space flight training has inspired the application of this technology to surgical training and education.
          Traditionally, textbook images or cadavers were used for training purposes, the former limiting one's perspective of anatomical structures to the 2D plane, and the latter limited in supply and generally allowing one-time use only. Today VR simulators are becoming the training method of choice in medical schools. Unlike textbook examples, VR simulators allow users to view the anatomy from a wide range of angles and “fly through” organs to examine bodies from the inside.
The experience can be highly interactive, allowing students to strip away the various layers of tissues and muscles to examine each organ separately. Unlike cadavers, VR models enable the user to perform a procedure countless times. Perhaps because of the number of complications resulting from the uncontrolled growth of laparoscopic procedures in the early 1990s, many groups have pursued simulation of minimally invasive and endoscopic procedures. Advances in tissue modeling, graphics and haptic instrumentation have enabled the development of open abdominal and hollow-tube anastomosis simulators. Initial validation studies using simulators have shown differences between experienced and novice surgeons, that training scores improve over time, and that simulated task performance is correlated with actual task performance.

Computer-based training has many potential advantages:
• It is interactive.
• An instructor's presence is not necessary, so students can practice in their free moments.
• Changes can be made to demonstrate variations in anatomy or disease state.
• Simulated positions and forces can be recorded and compared with established performance metrics for assessment and credentialing.
• Students can also try different techniques and look at tissues from perspectives that would be impossible during real operations.

2. SURGICAL PLANNING
          In traditional surgical planning, the surgeon calculates the various parameters and the procedure for surgery from his earlier experience and imagination. The surgeon does not have an exact idea of the result of the surgery before it has been performed, so the outcome depends mainly on human factors. This leads to many errors and even to the risk of losing the patient's life. The incorporation of virtual reality techniques helps reduce these errors and plan the surgery in the most reliable manner.

          Virtual reality technology can serve as a useful adjunct to traditional surgical planning techniques. Basic research in image processing and segmentation of computed tomography and magnetic resonance scans has enabled reliable 3D reconstruction of important anatomical structures. These 3D imaging data have been used to further understand complex anatomical relationships in a specific patient prior to surgery, and also to examine and display the microsurgical anatomy of various internal operations.

          3D reconstruction has proven particularly useful in planning stereotactic and minimally invasive neurosurgical procedures. Modeling of deformable facial tissues has enabled simulation of tissue changes and the postoperative outcome of craniofacial surgery. Other soft-tissue applications include planning liver resection on a 3D deformable liver model with the aid of a virtual laparoscopic tool.

3. IMAGE GUIDANCE
          The integration of advanced imaging technology, image processing and 3D graphical capabilities has led to great interest in image-guided and computer-aided surgery. The application of computational algorithms and VR visualization to diagnostic imaging, preoperative surgical
planning and intraoperative surgical navigation is referred to as Computer Aided Surgery. Navigation in surgery relies on stereotactic principles, based on the ability to locate a given point using a geometric reference. Most of the work done in this field has been within neurosurgery. It has also proved useful in robotic surgery, a new technique in which the surgeon remotely manipulates robotic tools inside the patient's body. An image-guided operating robot has been developed by Lavallee et al., and Shahidi et al. have described a microsurgical guidance system that allows navigation based on a 3D volumetric image data set. In one case, intraoperative mapping of 3D image overlays onto live video provided the surgeon with something like 'X-ray vision'. This has been used in conjunction with an open MRI scanner to allow precise, updated views of deforming brain tissue. Other researchers have focused on applications for orthopedic procedures. Improvements in sensor and imaging technology should eventually allow updates of the patient's position and of intraoperative shape changes in soft tissues within a reasonable time frame.
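Locating a point from a geometric reference ultimately reduces to rigid point registration: fiducial points identified in the image volume are matched to the same points touched on the patient, yielding a rotation and translation that carry image coordinates into patient coordinates. A minimal sketch of the standard SVD-based (Kabsch) least-squares solution follows; the fiducial data are illustrative, not the output of any particular navigation system:

```python
import numpy as np

def register_rigid(image_pts, patient_pts):
    """Find rotation R and translation t mapping image_pts onto patient_pts
    by the SVD-based (Kabsch) least-squares method."""
    ci = image_pts.mean(axis=0)                     # centroids of each point set
    cp = patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Four fiducials in image coordinates, and the same points as they would be
# located on the patient (here generated from a known pose for checking).
img = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
pat = img @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = register_rigid(img, pat)
target = np.array([3.0, 4.0, 5.0])    # any point planned in image space
print(R @ target + t)                 # its predicted location on the patient
```

The reflection guard (the `d` term) keeps the result a proper rotation; real navigation systems layer fiducial-error estimation on top of this core step.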

4. TELESURGERY
          Telesurgery allows surgeons to operate on people who are physically separated from them. This is usually done through a master-slave robot, with imaging supplied through video cameras configured to provide a stereoscopic view. The surgeon relies on a 3D virtual representation of the patient and benefits from the dexterity enhancement afforded by the robotic apparatus. A prototype telemanipulator has been used to successfully perform basic vascular and urologic procedures in swine. More advanced systems have been used to perform coronary anastomosis on ex vivo swine hearts and in humans undergoing endoscopic coronary artery bypass grafting.

 VIRTUAL SURGERY SIMULATION
1. 3D IMAGE SIMULATION

          The first step is to generate a 3D model of the part of the body that will undergo surgery. Simulating human tissue, be it tooth enamel, skin or blood vessels, often starts with a sample from a flesh-and-blood person; that is, we need a 3D model of the relevant part of the body. Using computer graphics, we first construct a reference model. Depending on the simulation needed, anatomical images can be derived from a series of the patient's Magnetic Resonance Images (MRI), Computed Tomography (CT) scans or video recordings, all of which are 2D images. These images are segmented using methods such as the 'SNAKE' active-contour algorithm. The final model is obtained by deforming the reference model under constraints
imposed by the segmentation results. The image is digitally mapped onto a polygonal mesh representing whatever part of the body or organ is being examined. Each vertex of the mesh is assigned attributes such as colour and reflectivity from the reference model.
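The assignment of reference-model attributes to mesh vertices can be pictured with a small data structure. The `Vertex` and `Mesh` classes and the reference lookup table below are hypothetical, purely to illustrate the idea:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    position: tuple       # (x, y, z) in model coordinates
    colour: tuple         # RGB taken from the reference model
    reflectivity: float   # surface property taken from the reference model

@dataclass
class Mesh:
    vertices: list = field(default_factory=list)
    triangles: list = field(default_factory=list)   # index triples into `vertices`

# Build a single triangle of "skin", copying its attributes from a
# (hypothetical) reference-model lookup table.
reference = {"skin": {"colour": (224, 172, 105), "reflectivity": 0.15}}

mesh = Mesh()
for pos in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]:
    attrs = reference["skin"]
    mesh.vertices.append(Vertex(pos, attrs["colour"], attrs["reflectivity"]))
mesh.triangles.append((0, 1, 2))

print(len(mesh.vertices), mesh.vertices[0].reflectivity)   # 3 0.15
```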

          For the user to interact with the graphics, there must be software algorithms that track the whereabouts of the virtual instrument and determine whether it has collided with a body part or anything else. We also need algorithms that determine how a body part looks and behaves when it is cut, that is, models of how various tissues respond when cut, prodded, punctured and so on. Here VR designers often represent tissue as polygonal meshes that react like an array of masses connected by springs and dampers. The parameters of this model can then be tweaked to match what a physician experiences during an actual procedure. To create graphics that move without flickering, collision detection and tissue deformation must be calculated at least 30 times per second.

          Advances in medical graphics allow ordinary medical scans of a patient's anatomy to be enhanced into virtual 3D views, a clear advantage for a surgeon preparing to do a complicated procedure. Scans from MRI and CT produce a series of thin slices of the anatomy divided into volume data points, or voxels; these slices are restacked and turned into 3D images by a computer. The 3D images are colour-enhanced to highlight, say, bone or blood vessels.
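The restacking step is simple to illustrate: the 2D slices become one 3D voxel array, which can then be thresholded to mark, for example, bone-like intensities for highlighting. The arrays below are synthetic stand-ins for real CT data:

```python
import numpy as np

# Synthetic stand-ins for CT slices: each is a 2-D array of intensities.
# (Real slices would come from a DICOM reader; the values are illustrative.)
n_slices, h, w = 20, 64, 64
slices = [np.random.default_rng(i).integers(0, 200, (h, w))
          for i in range(n_slices)]

# Restack the 2-D slices into a single 3-D voxel volume.
volume = np.stack(slices, axis=0)        # shape: (slices, rows, cols)

# Colour enhancement by intensity: flag voxels above a bone-like threshold
# so a renderer could highlight them.
BONE_THRESHOLD = 150
bone_mask = volume > BONE_THRESHOLD

print(volume.shape, bone_mask.sum() > 0)
```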

2. TOUCH SIMULATION
          The second step in the simulation of surgery is simulating haptics, the sensation of touch. Physicians rely a great deal on their sense of touch for everything from routine diagnosis to complex, life-saving surgical procedures. So haptics, the ability to simulate touch, goes a long way toward making virtual reality simulators more lifelike.
          It also adds a layer of technology that can stump a standard microprocessor. While the brain can be tricked into seeing seamless motion by flipping through 30 or so images per second, touch signals need to be refreshed up to once a millisecond. The precise rate at which a computer must update a haptic interface varies depending on the type of virtual surface encountered: soft objects require lower update rates than harder objects. A low update rate may not prevent a user's surgical instrument from sinking into the virtual flesh, but in soft tissue that sinking is exactly what is expected. If we want something to come to an abrupt
stop, as in the case of bone, it requires a higher update rate than bumping into something a little squishy like skin or liver.
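Why stiffness dictates the update rate can be shown with a toy experiment: an explicitly integrated virtual spring stays bounded only if the timestep is short relative to the contact stiffness, so a bone-like contact that explodes at 30 Hz behaves itself at 1000 Hz. The stiffness, mass and damping numbers below are purely illustrative:

```python
import math

def simulate(stiffness, rate_hz, seconds=1.0, mass=0.1, damping=0.5):
    """Integrate a probe pressed 1 mm into a virtual spring and report
    whether the simulation stays bounded at this update rate."""
    dt = 1.0 / rate_hz
    x, v = 0.001, 0.0                     # 1 mm initial penetration
    for _ in range(int(seconds / dt)):
        f = -stiffness * x - damping * v  # spring-damper restoring force
        v += (f / mass) * dt              # semi-implicit Euler step
        x += v * dt
        if not math.isfinite(x) or abs(x) > 1.0:
            return False                  # the simulation has exploded
    return True

SOFT, STIFF = 50.0, 50_000.0              # N/m: "liver" vs "bone" (illustrative)
for k in (SOFT, STIFF):
    for rate in (30, 1000):
        print(f"k={k:>8} at {rate:>5} Hz -> stable: {simulate(k, rate)}")
```

The soft contact is stable even at the 30 Hz graphics rate, while the stiff one is stable only at the kilohertz haptic rate, which is exactly the asymmetry described above.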
          But still, simulating squish is no easy task either. The number of collision points between a virtual squishy object and a virtual instrument is larger and more variable than between a virtual rigid object and an instrument. Most difficult to simulate are two floppy objects interacting with each other, such as the colon and a sigmoidoscope, the long bendable probe used to view the colon, because of the multiple collision points. In addition, the mechanics of such interactions are complicated, because each object may deform the other.
          For simulating the touch sensation, we have to calculate the forces applied to cut, prod and puncture the various tissues, and also how the tissues react or behave when cut, prodded or punctured with surgical instruments. First we have to make physical models of the various tissues. The major difficulty in modeling organs is their physical behavior, as they have all kinds of complexities: they are anisotropic, nonhomogeneous and nonlinear. In addition, a great deal more physical measurement of tissues will be needed to make realistic haptic maps of complicated parts of the body such as the abdomen.
          The physical model is made assuming that tissues are polygonal meshes that interact like an array of masses connected by springs and dampers. The parameter values are derived using complex nonlinear equations, and the reaction forces are calculated from them.
          In coming years, VR designers hope to gain a better understanding of the true mechanical behavior of the various tissues and organs in the body. If
the haptic device is to give a realistic impression of, say, pressing the skin on a patient's arm, the mechanical contributions of the skin, the fatty tissue beneath, the muscle and even the bone must be summed up. The equations to solve such a complex problem are known, but so far the calculations cannot be made fast enough to update a display at 30 Hz, let alone update a haptic interface at 500-1000 Hz.
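One common simplification of this summing, assumed here rather than taken from the text, treats the stacked layers as springs in series: the effective stiffness is the reciprocal of the summed compliances, and the stiffest layer (bone) barely deflects. The stiffness values below are purely illustrative:

```python
# Effective stiffness of stacked tissue layers modeled as springs in series.
# (A simplification; the stiffness values in N/m are purely illustrative.)
layers = {"skin": 300.0, "fat": 120.0, "muscle": 800.0, "bone": 1.0e6}

k_eff = 1.0 / sum(1.0 / k for k in layers.values())
force = 2.0                               # newtons applied by the probe tip

print(f"effective stiffness: {k_eff:.1f} N/m")
for name, k in layers.items():
    # In series every layer carries the full force; deflection = F / k,
    # so the softest layer dominates the overall "give" under the finger.
    print(f"{name:>6} deflects {1000 * force / k:.3f} mm")
```

Note that the combined stack is softer than its softest layer, which is why a probe pressed on an arm feels compliant even though bone lies underneath.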

 WHAT IS A VIRTUAL SURGERY SIMULATOR?
          The VR simulator basically consists of a powerful PC, which runs the software, and an interface, a haptic interface, for the user to interact with the virtual environment. Usually the haptic interface works on a force feedback loop.
          The force feedback systems are haptic interfaces that output forces reflecting the input forces and position information obtained from the participant. These devices come in the form of gloves, pens, joysticks and exoskeletons.
The figure (5.1) shows a haptic feedback loop and how the human sense of touch interacts with a VR system. A human hand moves the end effector of a haptic device, shown here holding a haemostat, causing the device to relay its position via sensors to a computer running a VR simulation.
The computer determines what force should oppose that collision and relays the force information to actuators or brakes or both, which push back against the end effector. In the left-hand loop, forces on the end effector are detected and relayed to the user's brain. The brain, for example, commands the muscles to contract in order to balance or overcome the force at the end effector.
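One iteration of that sensor-to-actuator loop can be sketched as follows. Here `read_position` and `command_force` are stand-ins for whatever driver calls a real device exposes, and a stiff virtual wall is the simplest possible collision model:

```python
K_WALL = 500.0            # virtual wall stiffness, N/m (illustrative)
WALL_X = 0.05             # wall located 5 cm along the device axis

def read_position():
    """Stand-in for the driver call that reads the end-effector position."""
    return 0.052          # metres; 2 mm past the wall in this example

def command_force(newtons):
    """Stand-in for the driver call that drives the actuators."""
    pass                  # a real driver would write to the device here

def haptic_step():
    x = read_position()                  # sensors -> computer
    penetration = x - WALL_X             # collision check against the wall
    force = -K_WALL * penetration if penetration > 0 else 0.0
    command_force(force)                 # computer -> actuators push back
    return force

force = haptic_step()
print(f"opposing force: {force:.2f} N")  # -1.00 N for 2 mm of penetration
```

In a real simulator this step repeats at roughly 1 kHz; the stiffer the wall constant, the more solid the contact feels, up to the stability limit set by the update rate.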
          In medical applications, it is important that haptic devices convey the entire spectrum of textures, from rigid to elastic to fluid materials. It is also essential that force feedback occurs in real time to convey a sense of realism.
          The rest of the system consists mostly of off-the-shelf components. The haptic device's driver card plugs into a PC, typically a 500 MHz machine equipped with a standard graphics card and a regular colour monitor. The software includes a database of graphical and haptic information representing the surgical site. The graphics, including the deformation of virtual objects, are calculated separately from the haptic feedback, because the latter must be updated much more frequently.

 PHANTOM DESKTOP 3D TOUCH SYSTEM- A HAPTIC INTERFACE
SensAble Technologies, a manufacturer of force-feedback interface devices, has developed the Phantom Desktop 3D Touch System, which supports a workspace of 6 x 5 x 5 inches. About the size of a desk lamp, the device resembles a robotic arm, has either 3 or 6 degrees of freedom, and carries sensors for relaying the arm's position to the PC. The system incorporates position sensing with 6 degrees of freedom and force feedback with 3 degrees of freedom. A stylus with a range of motion that approximates the lower arm pivoting at the user's wrist enables the user to feel the point of the stylus in all axes and to track its orientation, including pitch, roll and yaw. A number of companies are incorporating haptic interfaces into VR systems to extend or enhance interactive functionality.

          The Phantom haptic device has been incorporated into a desktop display by ReachIn Technologies AB. Developed for a range of medical simulation and dental training applications, the system combines a stereo visual display, a haptic interface and a 6-degree-of-freedom positioner. A software package aptly named GHOST translates characteristics such as elasticity and roughness into commands for the arm, and the arm's actuators in turn produce the forces needed to simulate the virtual environment. The user interacts with the virtual world using one hand for navigation and control and the other hand to touch and feel the virtual object. A semitransparent mirror creates an interface where graphics and haptics are collocated; the result is that the user can see and feel the object in the same place. Among the medical procedures that can be simulated are catheter insertion, needle injection, suturing and surgical operations.

 IMPORTANCE OF VIRTUAL REALITY IN SURGICAL FIELD
          A recent report released by the Institute of Medicine in Washington DC estimates that medical errors may cause 100,000 patient deaths each year in the US alone. Proponents of virtual reality believe that incorporating this technology into medical training will bring this grim statistic down.

The main advantages of virtual reality in surgery are:

• Intelligent computer backup minimizes the number of medical ‘mistakes’.
• More effective use of minimal-access surgical techniques, which reduce the length of hospital stays and the risk of postoperative complications.
• Better training in anatomy and surgical skill, with reduced need for cadavers.

CONCLUSION
          Medical virtual reality has come a long way in the past 10 years as a result of advances in computer imaging, software, hardware and display devices. Commercialization of VR systems will depend on proving that they are cost effective and can improve the quality of care. One current limitation of VR implementation is the shortfall in the realism of the simulations; the main impediment to realistic simulators is the cost and processing power of available hardware. Another factor hindering the progress and acceptability of VR applications is the need to improve human-computer interfaces, which can involve heavy head-mounted displays or bulky VR gloves that impede movement. There is also the problem of time delays in the simulator's response to the user's movements. Conflicts between sensory information can result in simulator sickness, with side effects such as eyestrain, nausea, loss of balance and disorientation. Commercialization of VR systems must also address certain legal and regulatory issues.

          Despite these concerns, the benefits of VR systems in medicine have clearly been established in several areas, including improved training, better access to services, and increased cost-effectiveness and accuracy in performing certain conventional surgical procedures.