Abstract
People express their mental states, including emotions, thoughts, and desires, all the time through facial expressions, vocal nuances and gestures. This is true even when they are interacting with machines. Our mental states shape the decisions that we make, govern how we communicate with others, and affect our performance. The ability to attribute mental states to others from their behavior and to use that knowledge to guide our own actions and predict those of others is known as theory of mind or mind-reading.
Existing human-computer interfaces are mind-blind: oblivious to the user’s mental states and intentions. A computer may wait indefinitely for input from a user who is no longer there, or decide to do irrelevant tasks while a user is frantically working towards an imminent deadline. As a result, existing computer technologies often frustrate the user, have little persuasive power and cannot initiate interactions with the user. Even when they do take the initiative, like the now-retired Microsoft Office paperclip assistant, they are often misguided and irrelevant, and simply frustrate the user. With the increasing complexity of computer technologies and the ubiquity of mobile and wearable devices, there is a need for machines that are aware of the user’s mental state and that respond to it adaptively.
What is mind reading?
A computational model of mind-reading
Drawing inspiration from psychology, computer vision and machine learning, the team in the Computer Laboratory at the University of Cambridge has developed mind-reading machines — computers that implement a computational model of mind-reading to infer mental states of people from their facial signals. The goal is to enhance human-computer interaction through empathic responses, to improve the productivity of the user and to enable applications to initiate interactions with and on behalf of the user, without waiting for explicit input from that user. There are difficult challenges:
Using a digital video camera, the mind-reading computer system analyzes a person’s facial expressions in real time and infers that person’s underlying mental state, such as whether he or she is agreeing or disagreeing, interested or bored, thinking or confused.
Prior knowledge of how particular mental states are expressed in the face is combined with analysis of facial expressions and head gestures occurring in real time. The model represents these at different granularities, starting with raw face and head movements and combining them over time and space to build a clearer picture of the mental state being expressed. Software from Nevenvision identifies 24 feature points on the face and tracks them in real time. Movement, shape and color are then analyzed to identify gestures such as a smile or raised eyebrows. Combinations of these gestures occurring over time indicate mental states; for example, a head nod combined with a smile and raised eyebrows might indicate interest. The relationship between the observable head and facial displays and the corresponding hidden mental states over time is modeled using Dynamic Bayesian Networks.
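As a rough illustration of that last step, the sketch below runs an HMM-style forward filter, the simplest form of Dynamic Bayesian Network, over a stream of detected gestures. The states, gesture labels and probabilities are invented for illustration and are not the Cambridge model's actual structure or parameters.

```python
# Minimal sketch of the idea, not the Cambridge system: an HMM-style forward
# filter (the simplest Dynamic Bayesian Network) updates a belief over hidden
# mental states as head/facial gestures are observed frame by frame.
# All states, gestures and probabilities below are illustrative assumptions.
import numpy as np

states = ["interested", "bored", "confused"]
gestures = ["head_nod", "smile", "eyebrow_raise", "head_shake", "neutral"]

# P(next state | current state): mental states tend to persist between frames.
transition = np.array([
    [0.90, 0.05, 0.05],
    [0.05, 0.90, 0.05],
    [0.05, 0.05, 0.90],
])

# P(gesture | state): e.g. nods and smiles are more likely when interested.
emission = np.array([
    # nod   smile  brow   shake  neutral
    [0.30, 0.30, 0.20, 0.05, 0.15],   # interested
    [0.05, 0.05, 0.05, 0.15, 0.70],   # bored
    [0.10, 0.05, 0.25, 0.30, 0.30],   # confused
])

belief = np.full(len(states), 1.0 / len(states))  # uniform prior

def update(belief, gesture):
    """One filtering step: predict with the transition model, then weight
    each state by how well it explains the observed gesture."""
    g = gestures.index(gesture)
    predicted = transition.T @ belief
    weighted = predicted * emission[:, g]
    return weighted / weighted.sum()

# A head nod, a smile and raised eyebrows in sequence.
for g in ["head_nod", "smile", "eyebrow_raise"]:
    belief = update(belief, g)

print(dict(zip(states, belief.round(3))))  # "interested" should dominate
```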
Why mind reading?
The mind-reading computer system presents information about your mental state as easily as a keyboard and mouse present text and commands. Imagine a future where we are surrounded with mobile phones, cars and online services that can read our minds and react to our moods. How would that change our use of technology and our lives? We are working with a major car manufacturer to implement this system in cars to detect driver mental states such as drowsiness, distraction and anger.
Current projects in Cambridge are considering further inputs such as body posture and gestures to improve the inference. We can then use the same models to control the animation of cartoon avatars. We are also looking at the use of mind-reading to support on-line shopping and learning systems.
The mind-reading computer system may also be used to monitor and suggest improvements in human-human interaction. The Affective Computing Group at the MIT Media Laboratory is developing an emotional-social intelligence prosthesis that explores new technologies to augment and improve people’s social interactions and communication skills.
How does it work?
Another approach to mind reading measures the volume and oxygen level of the blood around the subject's brain, using a technology called functional near-infrared spectroscopy (fNIRS).
The user wears a sort of futuristic headband that sends near-infrared light into the tissues of the head, where it is absorbed by active, blood-filled tissues. The headband then measures how much light was not absorbed, letting the computer gauge the metabolic demands that the brain is making.
The results are often compared with those of fMRI scans, but they can be gathered with lightweight, non-invasive equipment.
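To make the physics concrete, here is a minimal sketch of the modified Beer-Lambert calculation that fNIRS devices typically use to turn "light that was not absorbed" into a change in blood oxygenation. The wavelengths, extinction coefficients and path-length factor are placeholder assumptions, not the Tufts system's actual values.

```python
# Illustrative sketch only (not the Tufts system): the modified Beer-Lambert
# law is the standard way fNIRS converts detected light intensity into a
# relative change in oxygenated and deoxygenated blood. All coefficients
# below are placeholders.
import numpy as np

def delta_absorbance(intensity_now, intensity_baseline):
    """Change in optical density: less light detected means more absorption."""
    return -np.log10(intensity_now / intensity_baseline)

def oxygenation_change(dA_wl1, dA_wl2, source_detector_cm=3.0, dpf=6.0):
    """Solve the two-wavelength modified Beer-Lambert equations for changes
    in oxy- and deoxy-hemoglobin concentration (arbitrary placeholder units)."""
    # Extinction coefficients [HbO2, Hb] at the two wavelengths (placeholders).
    E = np.array([[1.5, 3.8],    # wavelength 1 (~760 nm)
                  [2.5, 1.8]])   # wavelength 2 (~850 nm)
    path = source_detector_cm * dpf          # effective optical path length
    dA = np.array([dA_wl1, dA_wl2]) / path
    d_hbo2, d_hb = np.linalg.solve(E, dA)
    return d_hbo2, d_hb

# Example: detected light dropped by 5% and 8% at the two wavelengths.
dA1 = delta_absorbance(0.95, 1.0)
dA2 = delta_absorbance(0.92, 1.0)
print(oxygenation_change(dA1, dA2))
```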
Wearing the fNIRS sensor, experimental subjects were asked to count the number of squares on a rotating onscreen cube and to perform other tasks. The subjects were then asked to rate the difficulty of the tasks, and their ratings agreed with the work intensity detected by the fNIRS system up to 83 percent of the time.
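The 83 percent figure is simply an agreement rate between what the subjects reported and what the system inferred. A toy version of that comparison, with made-up ratings rather than study data, looks like this:

```python
# Toy illustration of the reported comparison: how often the workload level
# inferred from fNIRS matches the difficulty level the subject reported.
# The two lists below are placeholder ratings, not data from the study.
subject_ratings = [1, 2, 3, 3, 2, 1, 3, 2, 1, 2, 3, 1]
fnirs_estimates = [1, 2, 3, 2, 2, 1, 3, 2, 1, 3, 3, 1]

matches = sum(a == b for a, b in zip(subject_ratings, fnirs_estimates))
print(f"agreement: {100 * matches / len(subject_ratings):.0f}%")  # ~83%
```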
"We don't know how specific we can be about identifying users' different emotional states," cautioned Sergio Fantini, a biomedical engineering professor at Tufts. "However, the particular area of the brain where the blood-flow change occurs should provide indications of the brain's metabolic changes and by extension workload, which could be a proxy for emotions like frustration."
"Measuring mental workload, frustration and distraction is typically limited to qualitatively observing computer users or to administering surveys after completion of a task, potentially missing valuable insight into the users' changing experiences.
NASA has developed a computer program that can read silently spoken words by analyzing nerve signals in the mouth and throat.
Preliminary results show that using button-sized sensors, which attach under the chin and on the side of the Adam's apple, it is possible to pick up and recognize nerve signals and patterns from the tongue and vocal cords that correspond to specific words.
"Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement," says Chuck Jorgensen, a neuroengineer at NASA's Ames Research Center in Moffett Field, California, in charge of the research. Just the slightest movement in the voice box and tongue is all it needs to work, he says.
Web search
For the first test of the sensors, scientists trained the software program to recognize six words - including "go", "left" and "right" - and 10 numbers. Participants hooked up to the sensors silently said the words to themselves, and the software correctly picked up the signals 92 percent of the time.
Then researchers put the letters of the alphabet into a matrix with each column and row labeled with a single-digit number. In that way, each letter was represented by a unique pair of number coordinates. These were used to silently spell "NASA" into a web search engine using the program.
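A hedged sketch of that coordinate-spelling scheme follows. The exact grid NASA used is not described here, so the 5 by 6 layout below is an assumption, but it shows how each letter becomes a pair of silently "spoken" digits.

```python
# Sketch of the coordinate-spelling idea: letters arranged in a grid whose
# rows and columns carry single-digit labels, so each letter is encoded as a
# pair of digits. The 5x6 layout and padding characters are assumptions.
import string

ROWS, COLS = 5, 6
letters = string.ascii_uppercase + "...."            # pad to fill the 5x6 grid
grid = {letters[r * COLS + c]: (r + 1, c + 1)        # 1-based digit labels
        for r in range(ROWS) for c in range(COLS)}

def encode(word):
    """Turn a word into the digit pairs a subvocal recognizer would pick up."""
    return [grid[ch] for ch in word.upper()]

print(encode("NASA"))   # [(3, 2), (1, 1), (4, 1), (1, 1)] with this layout
```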
"This proved we could browse the web without touching a keyboard”.
Advantages and uses
Mind Controlled Wheelchair
This prototype mind-controlled wheelchair, developed at the University of Electro-Communications in Japan, lets you feel like half Professor X and half Stephen Hawking, except with the theoretical physics skills of the former and the telekinetic skills of the latter.
A little different from the Brain-Computer Typing machine, this one works by mapping the brain waves produced when you think about moving left, right, forward or back, and then assigning each pattern to the corresponding wheelchair command.
The result is that you can move the wheelchair solely with the power of your mind. This device doesn't give you MIND BULLETS (apologies to Tenacious D), but it does allow people who can't use other wheelchairs to get around more easily.
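In outline, the control loop is a simple lookup from the classified intention to motor commands. The sketch below illustrates that idea only: the brain-wave classifier is stubbed out, and the labels and wheel speeds are assumptions rather than the UEC design.

```python
# Hedged sketch of the control idea (not the UEC system): the output of a
# brain-wave classifier over imagined directions is mapped to wheelchair
# motor commands. Labels and speeds are illustrative assumptions.
from typing import Dict, Tuple

# Imagined-movement class -> (left wheel speed, right wheel speed)
COMMANDS: Dict[str, Tuple[float, float]] = {
    "forward": (0.5, 0.5),
    "back":    (-0.3, -0.3),
    "left":    (-0.2, 0.2),   # turn in place
    "right":   (0.2, -0.2),
    "idle":    (0.0, 0.0),
}

def classify_eeg(window) -> str:
    """Placeholder for the real brain-wave classifier; returns a class label."""
    return "forward"  # stub

def control_step(eeg_window):
    """One loop iteration: classify the latest EEG window, look up a command."""
    intent = classify_eeg(eeg_window)
    return COMMANDS.get(intent, COMMANDS["idle"])

print(control_step(eeg_window=None))  # -> (0.5, 0.5): drive forward
```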
The sensors have already been used to do simple web searches and may one day help space-walking astronauts and people who cannot talk. The system could send commands to rovers on other planets, help injured astronauts control machines, or aid disabled people.
In everyday life, they could even be used to communicate on the sly: people could use them on crowded buses without being overheard.
Such findings also raise issues about the application of these tools for screening suspected terrorists, as well as for predicting future dangerousness more generally. We are closer than ever to the crime-prediction technology of Minority Report.
One day, computers may be able to recognize the smallest units in the English language: the 40-odd basic sounds (or phonemes) out of which all words and verbalized thoughts can be constructed. Such skills could be put to many practical uses. The pilot of a high-speed plane or spacecraft, for instance, could call up vital flight information on an all-purpose cockpit display by thought alone. There would be no need to search for the right dials or switches on a crowded instrument panel.
Disadvantages and problems
Tapping Brains for Future Crimes
Researchers from the Max Planck Institute for Human Cognitive and Brain Sciences, along with scientists from London and Tokyo, asked subjects to secretly decide in advance whether to add or subtract two numbers they would later be shown. Using computer algorithms and functional magnetic resonance imaging, or fMRI, the scientists were able to determine with 70 percent accuracy what the participants' intentions were, even before they were shown the numbers. The popular press tends to over-dramatize such advances in mind reading. fMRI results have to account for heart rate, respiration, motion and a number of other factors that can all cause variance in the signal. Also, individual brains differ, so scientists need to study a subject's patterns before they can train a computer to identify those patterns or make predictions.
While the details of this particular study are not yet published, the subjects' limited options of either adding or subtracting the numbers means the computer already had a 50/50 chance of guessing correctly even without fMRI readings. The researchers indisputably made physiological findings that are significant for future experiments, but we're still a long way from mind reading.
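The arithmetic behind that caveat is easy to make concrete: accuracy on a two-choice task has to be judged against the 50 percent chance baseline, and how impressive 70 percent is depends on the number of trials, which is not reported here. The sketch below uses a hypothetical trial count purely for illustration.

```python
# Illustrative arithmetic only: with a 50/50 task, how surprising is 70%
# accuracy? The trial count is a made-up assumption; the point is that
# accuracy must always be judged against the chance baseline.
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """Probability of doing at least this well by guessing alone."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

trials = 40                       # hypothetical number of decisions decoded
successes = round(0.70 * trials)  # 70% reported accuracy
print(f"p-value vs. chance: {binomial_p_value(successes, trials):.3f}")
```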
Still, the more we learn about how the brain operates, the more predictable human beings seem to become. In the Dec. 19, 2006, issue of The Economist, an article questioned the scientific validity of the notion of free will: Individuals with particular congenital genetic characteristics are predisposed, if not predestined, to violence.
Studies have shown that genes and organic factors like frontal lobe impairments, low serotonin levels and dopamine receptors are highly correlated with criminal behavior. Studies of twins show that heredity is a major factor in criminal conduct. While no one gene may make you a criminal, a mixture of biological factors, exacerbated by environmental conditions, may well do so.
Looking at scientific advances like these, legal scholars are beginning to question the foundational principles of our criminal justice system.
For example, University of Florida law professor Christopher Slobogin, who is visiting at Stanford this year, has set forth a compelling case for putting prevention before retribution in criminal justice.
It's a tempting thought. If there is no such thing as free will, then a system that punishes transgressive behavior as a matter of moral condemnation does not make a lot of sense. It's compelling to contemplate a system that manages and reduces the risk of criminal behavior in the first place.
Despite results like the Max Planck Institute study, neuroscience and bioscience are not yet at a point where we can reliably predict human behavior. To me, that's the most powerful objection to a preventative justice system -- if we aren't particularly good at predicting future behavior, we risk criminalizing the innocent.
We aren't particularly good at rehabilitation, either, so even if we were sufficiently accurate in identifying future offenders, we wouldn't really know what to do with them.
Nor is society ready to deal with the ethical and practical problems posed by a system that classifies and categorizes people based on oxygen flow, genetics and environmental factors that are correlated as much with poverty as with future criminality.
In time, neuroscience may produce reliable behavior predictions. But until then, we should take the lessons of science fiction to heart when deciding how to use new predictive techniques.
The preliminary tests of NASA's subvocal sensors may also have been successful only because of the short length of the words, and the tests need to be repeated on many different people to confirm that the sensors work for everyone.
The initial success "doesn't mean it will scale up", one speech-recognition expert told New Scientist. "Small-vocabulary, isolated word recognition is a quite different problem than conversational speech, not just in scale but in kind."
Conclusion
Tufts University researchers have begun a three-year research project which, if successful, will allow computers to respond to the brain activity of their users. Users wear futuristic-looking headbands that shine light on their foreheads, and then perform a series of increasingly difficult tasks while the device reads which parts of the brain are absorbing the light. That information is then transferred to the computer, which can adjust its interface and functions to each individual.
One professor used the following example of a real world use: "If it knew which air traffic controllers were overloaded, the next incoming plane could be assigned to another controller."
Hence, if accuracy can be pushed towards 100 percent, these computers may find applications in many fields of electronics where there is very little time to react.