Sprinkling of neural dust opens door to electroceuticals

UC Berkeley engineers have built the first dust-sized, wireless sensors that can be implanted in the body, bringing closer the day when a Fitbit-like device could monitor internal nerves, muscles or organs in real time.

Wireless, batteryless implantable sensors could improve brain control of prosthetics, avoiding wires that go through the skull. (UC Berkeley video by Roxanne Makasdjian and Stephen McNally)

Because these batteryless sensors could also be used to stimulate nerves and muscles, the technology opens the door to “electroceuticals” to treat disorders such as epilepsy, to stimulate the immune system or to tamp down inflammation.

The so-called neural dust, which the team implanted in the muscles and peripheral nerves of rats, is unique in that ultrasound is used both to power and read out the measurements. Ultrasound technology is already well-developed for hospital use, and ultrasound vibrations can penetrate nearly anywhere in the body, unlike radio waves, the researchers say.

“I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader,” said Michel Maharbiz, an associate professor of electrical engineering and computer sciences and one of the study’s two main authors. “Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.”

The sensor, 3 millimeters long and 1×1 millimeters in cross section, attached to a nerve fiber in a rat. Once implanted, the batteryless sensor is powered and the data read out by ultrasound. (Ryan Neely photo)

Maharbiz, neuroscientist Jose Carmena, a professor of electrical engineering and computer sciences and a member of the Helen Wills Neuroscience Institute, and their colleagues will report their findings in the August 3 issue of the journal Neuron.

The sensors, which the researchers have already shrunk to a 1 millimeter cube – about the size of a large grain of sand – contain a piezoelectric crystal that converts ultrasound vibrations from outside the body into electricity to power a tiny, on-board transistor that is in contact with a nerve or muscle fiber. A voltage spike in the fiber alters the circuit and the vibration of the crystal, which changes the echo detected by the ultrasound receiver, typically the same device that generates the vibrations. The slight change, called backscatter, allows them to determine the voltage.

Motes sprinkled throughout the body

In their experiment, the UC Berkeley team powered up the passive sensors every 100 microseconds with six 540-nanosecond ultrasound pulses, which gave them a continual, real-time readout. They coated the first-generation motes – 3 millimeters long, 1 millimeter high and 4/5 millimeter thick – with surgical-grade epoxy, but they are currently building motes from biocompatible thin films which would potentially last in the body without degradation for a decade or more.
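A back-of-envelope check of the interrogation scheme described above (a sketch based only on the quoted numbers, not the team’s actual code) shows what those pulse timings imply: a 10 kHz real-time readout with the transducer active only a few percent of the time.

```python
# Back-of-envelope numbers for the ultrasound interrogation scheme.
PULSE_NS = 540           # duration of one ultrasound pulse, from the article
PULSES_PER_BURST = 6     # pulses sent each time the mote is pinged
PERIOD_US = 100          # time between pings

readout_rate_hz = 1e6 / PERIOD_US                 # pings per second
burst_us = PULSES_PER_BURST * PULSE_NS / 1000     # length of one burst
duty_cycle = burst_us / PERIOD_US                 # fraction of time transmitting

print(readout_rate_hz)             # 10000.0 -> a 10 kHz real-time readout
print(round(duty_cycle * 100, 2))  # 3.24 -> percent of the time the transducer fires
```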

The sensor mote contains a piezoelectric crystal (silver cube) plus a simple electronic circuit that responds to the voltage across two electrodes to alter the backscatter from ultrasound pulses produced by a transducer outside the body. The voltage across the electrodes can be determined by analyzing the ultrasound backscatter. (Ryan Neely photo)

While the experiments so far have involved the peripheral nervous system and muscles, the neural dust motes could work equally well in the central nervous system and brain to control prosthetics, the researchers say. Today’s implantable electrodes degrade within 1 to 2 years, and all connect to wires that pass through holes in the skull. Wireless sensors – dozens to a hundred – could be sealed in, avoiding infection and unwanted movement of the electrodes.

“The original goal of the neural dust project was to imagine the next generation of brain-machine interfaces, and to make it a viable clinical technology,” said neuroscience graduate student Ryan Neely. “If a paraplegic wants to control a computer or a robotic arm, you would just implant this electrode in the brain and it would last essentially a lifetime.”

In a paper published online in 2013, the researchers estimated that they could shrink the sensors down to a cube 50 microns on a side – about 2 thousandths of an inch, or half the width of a human hair. At that size, the motes could nestle up to just a few nerve axons and continually record their electrical activity.
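Those size comparisons check out arithmetically. A quick unit check (assuming a typical 100-micron human-hair diameter, a figure not taken from the paper):

```python
# Unit check on the 50-micron target size quoted above.
UM_PER_INCH = 25400.0   # microns in one inch
HAIR_UM = 100.0         # assumed typical human-hair diameter

side_um = 50.0
side_mils = side_um / UM_PER_INCH * 1000   # thousandths of an inch

print(round(side_mils, 1))   # ~2.0 thousandths of an inch
print(side_um / HAIR_UM)     # 0.5 -> half the width of a hair
```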

“The beauty is that now, the sensors are small enough to have a good application in the peripheral nervous system, for bladder control or appetite suppression, for example,” Carmena said. “The technology is not really there yet to get to the 50-micron target size, which we would need for the brain and central nervous system. Once it’s clinically proven, however, neural dust will just replace wire electrodes. This time, once you close up the brain, you’re done.”

The team is working now to miniaturize the device further, find more biocompatible materials and improve the surface transceiver that sends and receives the ultrasound, ideally using beam-steering technology to focus the sound waves on individual motes. They are now building little backpacks for rats to hold the ultrasound transceiver that will record data from implanted motes.

Diagram showing the components of the sensor. The entire device is covered in a biocompatible gel.

They’re also working to expand the motes’ ability to detect non-electrical signals, such as oxygen or hormone levels.

“The vision is to implant these neural dust motes anywhere in the body, and have a patch over the implanted site send ultrasonic waves to wake up and receive necessary information from the motes for the desired therapy you want,” said Dongjin Seo, a graduate student in electrical engineering and computer sciences. “Eventually you would use multiple implants and one patch that would ping each implant individually, or all simultaneously.”

Ultrasound vs radio

Maharbiz and Carmena conceived of the idea of neural dust about five years ago, but attempts to power an implantable device and read out the data using radio waves were disappointing. Radio attenuates very quickly with distance in tissue, so communicating with devices deep in the body would be difficult without using potentially damaging high-intensity radiation.

A sensor implanted on a peripheral nerve is powered and interrogated by an ultrasound transducer. The backscatter signal carries information about the voltage across the sensor’s two electrodes. The ‘dust’ mote was pinged every 100 microseconds with six 540-nanosecond ultrasound pulses.

Maharbiz hit on the idea of ultrasound, and in 2013 published a paper with Carmena, Seo and their colleagues describing how such a system might work. “Our first study demonstrated that the fundamental physics of ultrasound allowed for very, very small implants that could record and communicate neural data,” said Maharbiz. He and his students have now created that system.

“Ultrasound is much more efficient when you are targeting devices that are on the millimeter scale or smaller and that are embedded deep in the body,” Seo said. “You can get a lot of power into it and a lot more efficient transfer of energy and communication when using ultrasound as opposed to electromagnetic waves, which have been the go-to method for wirelessly transmitting power to miniature implants.”

“Now that you have a reliable, minimally invasive neural pickup in your body, the technology could become the driver for a whole gamut of applications, things that today don’t even exist,” Carmena said.

Other co-authors of the Neuron paper are graduate student Konlin Shen, undergraduate Utkarsh Singhal and UC Berkeley professors Elad Alon and Jan Rabaey. The work was supported by the Defense Advanced Research Projects Agency of the Department of Defense.

RELATED INFORMATION

Mail Online: The real-life Matrix: MIT researchers reveal interface that can allow a computer to plug into the brain

  • System could deliver optical signals and drugs directly into the brain
  • Could lead to devices for treatment of conditions such as Parkinson’s

It has been the holy grail of science fiction – an interface that allows us to plug our brain into a computer.

Now, researchers at MIT have revealed new fibers, less than the width of a hair, that could make it a reality.

They say their system could deliver optical signals and drugs directly into the brain, along with electrical readouts to continuously monitor the effects of the various inputs.

Christina Tringides, a senior at MIT and member of the research team, holds a sample of the multifunction fiber that could deliver optical signals and drugs directly into the brain, along with electrical readouts to continuously monitor the effects of the various inputs.

HOW IT WORKS

The new fibers are made of polymers that closely resemble the characteristics of neural tissues.

The multifunction fiber could deliver optical signals and drugs directly into the brain, along with electrical readouts to continuously monitor the effects of the various inputs.

Combining the different channels could enable precision mapping of neural activity, and ultimately treatment of neurological disorders, neither of which would be possible with single-function neural probes.

‘We’re building neural interfaces that will interact with tissues in a more organic way than devices that have been used previously,’ said MIT’s Polina Anikeeva, an assistant professor of materials science and engineering.

The human brain’s complexity makes it extremely challenging to study not only because of its sheer size, but also because of the variety of signaling methods it uses simultaneously.

Conventional neural probes are designed to record a single type of signaling, limiting the information that can be derived from the brain at any point in time.

Now researchers at MIT may have found a way to change that.

By producing complex fibers that could be less than the width of a hair, they have created a system that could deliver optical signals and drugs directly into the brain, along with simultaneous electrical readout to continuously monitor the effects of the various inputs.

The new technology is described in a paper in the journal Nature Biotechnology.

The new fibers are made of polymers that closely resemble the characteristics of neural tissues, Anikeeva says, allowing them to stay in the body much longer without harming the delicate tissues around them.

To do that, her team made use of novel fiber-fabrication technology pioneered by MIT professor of materials science Yoel Fink and his team, for use in photonics and other applications.

The result, Anikeeva explains, is the fabrication of polymer fibers ‘that are soft and flexible and look more like natural nerves.’

Devices currently used for neural recording and stimulation, she says, are made of metals, semiconductors, and glass, and can damage nearby tissues during ordinary movement.

‘It’s a big problem in neural prosthetics,’ Anikeeva says.

‘They are so stiff, so sharp — when you take a step and the brain moves with respect to the device, you end up scrambling the tissue.’

The key to the technology is making a larger-scale version, called a preform, of the desired arrangement of channels within the fiber: optical waveguides to carry light, hollow tubes to carry drugs, and conductive electrodes to carry electrical signals.

These polymer templates, which can have dimensions on the scale of inches, are then heated until they become soft, and drawn into a thin fiber, while retaining the exact arrangement of features within them.

A single draw of the fiber reduces the cross-section of the material 200-fold, and the process can be repeated, making the fibers thinner each time and approaching nanometer scale.
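Those proportions can be sanity-checked in a few lines (an illustrative sketch of the scaling arithmetic, not of the fabrication process itself):

```python
# Scaling of the thermal drawing process described above: each draw
# shrinks every cross-sectional feature roughly 200-fold.
UM_PER_INCH = 25400.0
REDUCTION = 200.0

feature_um = 1.0 * UM_PER_INCH   # a 1-inch feature in the preform
for draw in (1, 2):
    feature_um /= REDUCTION
    print(draw, feature_um)
# draw 1: 127.0 microns -> features that were inches across are now microns
# draw 2: 0.635 microns -> 635 nm, approaching nanometer scale
```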

During this process, Anikeeva says, ‘Features that used to be inches across are now microns.’

Combining the different channels in a single fiber, she adds, could enable precision mapping of neural activity, and ultimately treatment of neurological disorders, neither of which would be possible with single-function neural probes.

For example, light could be transmitted through the optical channels to enable optogenetic neural stimulation, the effects of which could then be monitored with embedded electrodes.

At the same time, one or more drugs could be injected into the brain through the hollow channels, while electrical signals in the neurons are recorded to determine, in real time, exactly what effect the drugs are having.

MIT researchers discuss their novel implantable device that can deliver optical signals and drugs to the brain, without harming the brain tissue.

The system can be tailored for a specific research or therapeutic application by creating the exact combination of channels needed for that task. ‘You can have a really broad palette of devices,’ Anikeeva says.

While a single preform a few inches long can produce hundreds of feet of fiber, the materials must be carefully selected so they all soften at the same temperature.

The fibers could ultimately be used for precision mapping of the responses of different regions of the brain or spinal cord, Anikeeva says, and ultimately may also lead to long-lasting devices for treatment of conditions such as Parkinson’s disease.

John Rogers, a professor of materials science and engineering and of chemistry at the University of Illinois at Urbana-Champaign who was not involved in this research, says, ‘These authors describe a fascinating, diverse collection of multifunctional fibers, tailored for insertion into the brain where they can stimulate and record neural behaviors through electrical, optical, and fluidic means.

The results significantly expand the toolkit of techniques that will be essential to our development of a basic understanding of brain function.’

Read more: http://www.dailymail.co.uk/sciencetech/article-2927410/The-real-life-Matrix-MIT-researchers-reveal-interface-allow-computer-plug-brain.html

Neuro-Technology: Pentagon’s DARPA Continues To Push “Black Box” Brain Chip Implant

Pentagon wants to “help” soldiers and seniors by implanting devices to trigger memories

The Defense Advanced Research Projects Agency (DARPA), the research arm of the military, is continuing to develop implantable brain chips, according to documents newly posted as part of the agency’s increased “transparency” policy.

The agency is seeking to develop a portable, wireless device that “must incorporate implantable probes” to record and stimulate brain activity – in effect, a memory triggering ‘black box’ device.

The process would entail placing wires inside the brain, and under the scalp, with electrical impulses fired up through a transmitter placed under the skin of the chest area.

Bloomberg first picked up the story last week, and since then several tech blogs have jumped on board, describing the technological push as part of a project to help injured soldiers and part of an initiative set up by the Obama administration to find treatments for brain disorders, such as Alzheimer’s.

In reality, this project has been ongoing for years, decades in fact. And given that the Pentagon war machine is spear-heading it, with $70m of funding, one must seriously question why the DoD suddenly gives a damn about war wounded vets, never mind everyday Americans with brain disorders.

The documents state that rather than aid general memory loss, such a device would enable the ability to recover “task-based motor skills” like driving cars, operating machinery, tying shoe laces or even flying planes. It would also help recover memory loss surrounding traumatic events – according to the documents.

Memory loss surrounding trauma occurs for a reason, so that the individual can, at some point, slowly work back toward living a normal life. One has to wonder, from this description, whether DARPA’s brain implant would merely facilitate “patching up” soldiers and sending them back out to duty, as if they were defective robots.

Indeed, that is the kind of transhumanism project that DARPA revels in. Just last year it was revealed that a DARPA team has constructed a machine that functions like a human brain and would enable robots to think independently and act autonomously.

There have also long been reports of DARPA seeking to develop technology that enables military masters to literally control the brains of soldiers and make them want to fight. A 2008 report for the US military detailed this initiative, along with possible weaponry including “Pharmacological landmines” that release chemicals to incapacitate enemy soldiers and torture techniques that involve delivering electronic pulses into the brains of terror suspects.

The report, titled “Emerging Cognitive Neuroscience and Related Technologies”, detailed by Wired and the London Guardian, was commissioned by the Defense Intelligence Agency, the intelligence wing of the Department of Defense. It contains scientific research into the workings of the human mind and suggestions for the development of new war fighting technologies based upon the findings.

In a section focusing on mind control, the report states:

If we can alter the brain, why not control it? […] One potential use involves making soldiers want to fight. Conversely, how can we disrupt the enemy’s motivation to fight? […] How can we make people trust us more? What if we could help the brain to remove fear or pain? Is there a way to make the enemy obey our commands?

It concludes that “drugs can be utilized to achieve abnormal, diseased, or disordered psychology” and also suggests that scanners able to read the intentions or memories of soldiers could be developed.

The report clearly does not rule out the use of such mind scanning technology on civilians as it suggests that “In situations where it is important to win the hearts and minds of the local populace, it would be useful to know if they understand the information being given them.”

It also suggests that the technology will one day have applications in counter-terrorism and crime-fighting and “might be good enough to help identify people at a checkpoint or counter who are afraid or anxious.”

The notion of “recording” brain activity is also something that DARPA has long sought to master. The concept may seem completely outlandish, yet it has been the central focus of DARPA activities for some time with projects such as LifeLog, which seeks to gain a multimedia, digital record of everywhere a person goes and everything they see, hear, read, say and touch.

Wired Magazine has reported:

On the surface, the project seems like the latest in a long line of DARPA’s “blue sky” research efforts, most of which never make it out of the lab. But DARPA is currently asking businesses and universities for research proposals to begin moving LifeLog forward.

“What national security experts and civil libertarians want to know is, why would the Defense Department want to do such a thing?” the article asks. The answer lies in the stated goal of the US military – “Total Spectrum Dominance”.

Furthermore, assertions that the neuro technology would not be in any way dominant over a person’s capacity to think do not tally with DARPA’s Brain Machine Interfaces enterprise, a $24 million project reported on in the August 5, 2003, Boston Globe.

The project is developing technology that “promises to directly read thoughts from a living brain – and even instill thoughts as well… It does not take much imagination to see in this the makings of a matrix-like cyberpunk dystopia: chips that impose false memories, machines that scan for wayward thoughts, cognitively augmented government security forces that impose a ruthless order on a recalcitrant population,” the Globe reported.

Government funded advances in neurotechnology which also focus on developing the ability to essentially read people’s minds should also set alarm bells ringing.

It is also well documented that the military and the federal government have been dabbling in mind control and manipulation experimentation for decades.

Brain implants are a very scary proposition, however, and selling such a thing to veterans, and especially to the wider American populace, may be a harder task than selling them a pill to pop. Which is why some, including one former DARPA director and now a Google executive, have also been developing devices such as edible chips and e-tattoos.

Transhumanism is trendy!

Steve Watson is the London based writer and editor for Alex Jones’ Infowars.com, and Prisonplanet.com. He has a Masters Degree in International Relations from the School of Politics at The University of Nottingham, and a Bachelor Of Arts Degree in Literature and Creative Writing from Nottingham Trent University.

Articles by: Steve Watson

Related content:

The $1.3B Quest to Build a Supercomputer Replica of a Human Brain

  • By Jonathon Keats
  • 05.14.13

Even by the standards of the TED conference, Henry Markram’s 2009 TEDGlobal talk was a mind-bender. He took the stage of the Oxford Playhouse, clad in the requisite dress shirt and blue jeans, and announced a plan that—if it panned out—would deliver a fully sentient hologram within a decade. He dedicated himself to wiping out all mental disorders and creating a self-aware artificial intelligence. And the South African–born neuroscientist pronounced that he would accomplish all this through an insanely ambitious attempt to build a complete model of a human brain—from synapses to hemispheres—and simulate it on a supercomputer. Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up.

In the four years since Markram’s speech, he hasn’t backed off a nanometer. The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety—from the molecular level all the way to the mystery of consciousness—is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that’s done, once you’ve built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own.

The way Markram sees it, technology has finally caught up with the dream of AI: Computers are finally growing sophisticated enough to tackle the massive data problem that is the human brain. But not everyone is so optimistic. “There are too many things we don’t yet know,” says Caltech professor Christof Koch, chief scientific officer at one of neuroscience’s biggest data producers, the Allen Institute for Brain Science in Seattle. “The roundworm has exactly 302 neurons, and we still have no frigging idea how this animal works.” Yet over the past couple of decades, Markram’s sheer persistence has garnered the respect of people like Nobel Prize–winning neuroscientist Torsten Wiesel and Sun Microsystems cofounder Andy Bechtolsheim. He has impressed leading figures in biology, neuroscience, and computing, who believe his initiative is important even if they consider some of his ultimate goals unrealistic.

Markram has earned that support on the strength of his work at the Swiss Federal Institute of Technology in Lausanne, where he and a group of 15 postdocs have been taking a first stab at realizing his grand vision—simulating the behavior of a million-neuron portion of the rat neocortex. They’ve broken new ground on everything from the expression of individual rat genes to the organizing principles of the animal’s brain. And the team has not only published some of that data in peer-reviewed journals but also integrated it into a cohesive model so it can be simulated on an IBM Blue Gene supercomputer.

The big question is whether these methods can scale. There’s no guarantee that Markram will be able to build out the rest of the rat brain, let alone the vastly more complex human brain. And if he can, nobody knows whether even the most faithful model will behave like a real brain—that if you build it, it will think. For all his bravado, Markram can’t answer that question. “But the only way you can find out is by building it,” he says, “and just building a brain is an incredible biological discovery process.” This is too big a job for just one lab, so Markram envisions an estimated 6,000 researchers around the world funneling data into his model. His role will be that of prophet, the sort of futurist who presents worthy goals too speculative for most scientists to countenance and then backs them up with a master plan that makes the nearly impossible appear perfectly plausible. Neuroscientists can spend a whole career on a single cell or molecule. Markram will grant them the opportunity and encouragement to band together and pursue the big questions.

And now Markram has funding almost as outsized as his ideas. On January 28, 2013, the European Commission—the governing body of the European Union—awarded him 1 billion euros ($1.3 billion). For decades, neuroscientists and computer scientists have debated whether a computer brain could ever be endowed with the intelligence of a human. It’s not a hypothetical debate anymore. Markram is building it. Will he replicate consciousness? The EU has bet $1.3 billion on it.

Ancient Egyptian surgeons believed that the brain was the “marrow of the skull” (in the graphic wording of a 3,500-year-old papyrus). About 1,500 years later, Aristotle decreed that the brain was a radiator to cool the heart’s “heat and seething.” While neuroscience has come a long way since then, the amount that we know about the brain is still minuscule compared to what we don’t know.

Over the past century, brain research has made tremendous strides, but it’s all atomized and highly specific—there’s still no unified theory that explains the whole. We know that the brain is electric, an intricately connected network, and that electrical signals are modulated by chemicals. In sufficient quantity, certain combinations of chemicals (called neurotransmitters) cause a neuron to fire an electrical signal down a long pathway called an axon. At the end of the axon is a synapse, a meeting point with another neuron. The electrical spike causes neurotransmitters to be released at the synapse, where they attach to receptors in the neighboring neuron, altering its voltage by opening or closing ion channels. At the simplest level, comparisons to a computer are helpful. The synapses are roughly equivalent to the logic gates in a circuit, and axons are the wires. The combination of inputs determines an output. Memories are stored by altering the wiring. Behavior is correlated with the pattern of firing.
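The firing mechanism sketched above can be made concrete with a toy “leaky integrate-and-fire” neuron, a standard textbook abstraction rather than anything from the projects described here: inputs accumulate voltage, and crossing a threshold fires a spike down the axon.

```python
# Toy leaky integrate-and-fire neuron illustrating the mechanism above:
# synaptic inputs accumulate voltage; crossing a threshold fires a spike.
def step(v, inputs, threshold=1.0, leak=0.9):
    """Advance the membrane voltage one time step; return (new_v, fired)."""
    v = v * leak + sum(inputs)   # neurotransmitter inputs alter the voltage
    if v >= threshold:
        return 0.0, True         # spike, then reset toward resting potential
    return v, False

v = 0.0
spikes = []
for t in range(5):
    v, fired = step(v, [0.4])    # a constant weak input each step
    spikes.append(fired)
print(spikes)  # the neuron integrates for a few steps, then fires once
```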

Yet when scientists study these systems more closely, such reductionism looks nearly as rudimentary as the Egyptian notions about skull marrow. There are dozens of different neurotransmitters (dopamine and serotonin, to name two) plus as many neuroreceptors to receive them. There are more than 350 types of ion channel, the synaptic plumbing that determines whether a neuron will fire. At its most fine-grained, at the level of molecular biology, neuroscience attempts to describe and predict the effect of neurotransmitters one ion channel at a time. At the opposite end of the scale is functional magnetic resonance imaging, the favorite tool of behavioral neuroscience. Scans can roughly track which parts of the brain are active while watching a ball game or having an orgasm, albeit only by monitoring blood flow through the gray matter: the brain again viewed as a radiator.

Two large efforts—the Allen Brain Atlas and the National Institutes of Health-funded Human Connectome Project—are working at levels in between these two extremes, attempting to get closer to that unified theory that explains the whole. The Allen Brain Atlas is mapping the correlation between specific genes and specific structures and regions in both human and mouse brains. The Human Connectome Project is using noninvasive imaging techniques that show where wires are bundled and how those bundles are connected in human brains.

To add to the brain-mapping mix, President Obama in April announced the launch of an initiative called Brain (commonly referred to as the Brain Activity Map), which he hopes Congress will make possible with a $3 billion NIH budget. (To start, Obama is pledging $100 million of his 2014 budget.) Unlike the static Human Connectome Project, the proposed Brain Activity Map would show circuits firing in real time. At present this is feasible, writes Brain Activity Map participant Ralph Greenspan, “in the little fruit fly Drosophila.”

Even scaled up to human dimensions, such a map would chart only a web of activity, leaving out much of what is known of brain function at a molecular and functional level. For Markram, the American plan is just grist for his billion-euro mill. “The Brain Activity Map and other projects are focused on generating more data,” he writes. “The Human Brain Project is about data integration.” In other words, from his exalted perspective, the NIH and President Obama are just a bunch of postdocs ready to work for him.

Mind-boggling! Science creates computer that can decode your thoughts and put them into words

  • Technology could offer lifeline for stroke victims and people hit by degenerative diseases
  • In the study, a computer analyzed brain activity and reproduced words that people were hearing 

By Tamara Cohen
05:49 GMT, 1 February 2012

It sounds like the stuff of science fiction dreams – or nightmares.

Scientists believe they have found a way to read our minds, using a computer program that can decode brain activity and put it into words.

They say it could offer a lifeline to those whose speech has been affected by stroke or degenerative disease, but many will be concerned about the implications of a technique that can eavesdrop on thoughts and reproduce them.

Scientific breakthrough: An X-ray CT scan of the head of one of the volunteers, showing electrodes distributed over the brain's temporal lobe, where sounds are processed

Weird science: Scientists believe the technique, shown here, could also be used to read and report what they were thinking of saying next

Neuroscientists at the University of California Berkeley put electrodes inside the skulls of brain surgery patients to monitor information from their temporal lobe, which is involved in the processing of speech and images.

As the patient listened to someone speaking, a computer program analysed how the brain processed and reproduced the words they had heard.


The scientists believe the technique could also be used to read and report what they were thinking of saying next.

In the journal PLoS Biology, they write that it takes attempts at mind reading to ‘a whole new level’.

 

Brain workings: Researchers tested 15 people who were already undergoing brain surgery to treat epilepsy or brain tumours


 

Words with scientists: The top graphic shows a spectrogram of six isolated real words (deep, jazz, cause) and pseudo-words (fook, ors, nim). At bottom, spectrograms of the same speech segments as reconstructed from the electrode recordings

Robert Knight, professor of psychology and neuroscience, added: ‘This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig’s [motor neurone] disease and can’t speak.

‘If you could eventually reconstruct imagined conversations from brain activity, thousands could benefit.’

 

The researchers tested 15 people who were already undergoing brain surgery to treat epilepsy or brain tumours.

They agreed to have up to 256 electrodes put on to the brain surface, as they listened to men and women saying individual words including nouns, verbs and names.

 
 

Testing: As a subject listened to someone speaking, a computer program analysed how the brain processed and reproduced the words they had heard

Breakthrough: The ability to scan the brain and read thoughts could offer a lifeline to those whose speech has been affected by a stroke or degenerative disease


A computer program analysed the activity from the electrodes and, at the first attempt, reproduced the word the patient had heard or something very similar to it.

 
 

Co-author Brian Pasley said there is already mounting evidence that ‘perception and imagery may be pretty similar in the brain’.

Therefore with more work, brain recordings could allow scientists to ‘synthesise the actual sound a person is thinking, or just write out the words with a type of interface device.’

Their study also shows in sharp relief how the auditory system breaks down sound into its individual frequencies – a range of around 1 to 8,000 Hertz for human speech.
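The frequency breakdown described here, with speech energy spread between roughly 1 and 8,000 Hertz, is what a spectrogram captures. As a rough illustration only (this is not the study's code; the 16 kHz sample rate, window size and toy sine-wave "signal" are invented for the sketch), a minimal short-time Fourier transform in Python:

```python
import numpy as np

def spectrogram(signal, rate, window=256, hop=128):
    """Break a signal into per-window frequency magnitudes (a crude STFT)."""
    win = np.hanning(window)
    frames = [signal[i:i + window] * win
              for i in range(0, len(signal) - window, hop)]
    # rfft gives magnitudes for frequencies from 0 up to rate/2
    mags = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(window, d=1.0 / rate)
    return freqs, mags

# A toy "speech" signal: a 440 Hz tone sampled at 16 kHz for one second
rate = 16000
t = np.arange(rate) / rate
freqs, mags = spectrogram(np.sin(2 * np.pi * 440 * t), rate)
peak = freqs[np.argmax(mags[0])]   # strongest frequency in the first window
```

Note that with a 16 kHz sample rate the analysis tops out at 8 kHz, the upper edge of the speech range the article mentions; the decoding problem the researchers solved is mapping brain activity back to representations like `mags`.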

Pasley told ABC News: ‘This study mainly focused on lower-level acoustic characteristics of speech. But I think there’s a lot more happening in these brain areas than acoustic analysis’.

He added: ‘We sort of take it for granted, the ability to understand speech. But your brain is doing amazing computations to accomplish this feat.’

 
 

Analyzing words: This graphic breaks down the three ways the brain hears spoken words and processes sounds


This information does not change inside the brain but can be accurately mapped and the original sound decoded by a computer. British expert Professor Jan Schnupp, from Oxford University, who was not involved in the study, said it was ‘quite remarkable’.

‘Neuroscientists have of course long believed that the brain essentially works by translating aspects of the external world, such as spoken words, into patterns of electrical activity’, he said.

‘But proving that this is true by showing that it is possible to translate these activity patterns back into the original sound (or at least a fair approximation of it) is nevertheless a great step forward, and it paves the way to rapid progress toward biomedical applications.’

He played down fears it could lead to a range of ‘mind reading’ devices, as the technique can only, at the moment, be used on patients willing to have surgery.

Non-invasive brain scans are not powerful enough to read this level of information so it will remain limited to ‘small numbers of willing patients’.

He added: ‘Perhaps luckily for all those of us who value the privacy of their own thoughts, we can rest assured that our skulls will remain an impenetrable barrier for any would-be technological mind hacker for any foreseeable future.’


The missing link between us and the future

In the early 1990s, the IT industry got very excited about virtual reality, the idea that you could use some sort of headset display to wander around in a 3d computer-generated world. We quickly realised there are zillions of variations on this idea, and after the one that became current computer gaming (3d worlds on a 2d monitor) the biggest of the rest was augmented reality, where data and images could be superimposed on the field of view.

Now, we are seeing apps on phones and pads that claim to be augmented reality, showing where the nearest tube station is, for example. To a point I guess they are, but only in so far as they let you hold up a display in front of you and see images relevant to the location and direction. They hardly amount to a head-up display, and fall a long way short of the kind of superimposition we've been used to in sci-fi since Robocop or Terminator. It is clear that we really need a proper head-up display, one that doesn't require you to take a gadget out and hold it up in front of you.

There are some head-up displays out there. Some make overlay displays in a small area of your field of view, often using small projectors and mirrors. Some use visors. However, the video-visor-based displays are opaque. They are fine for watching TV or playing games while seated, but not much use for wandering around.

This will change in the next 18 months to two years, when semi-transparent visors will begin to appear. The few years after that will undoubtedly see their rapid development, eventually bringing a full hi-res 3d overlay capability. And that will surely be a major disruptive technology. Just as we are getting used to various smart phones, pads, e-book readers and 3d TVs, they could all be absorbed into a general-purpose head-up display that can be used for pretty much anything.

It is hard to overstate the potential of this kind of interface once it reaches good enough quality. It allows anything from TV, games, or the web, to be blended with any real world scene or activity. This will transform how we shop, work and socialise, how we design and use buildings, and even how we use art or display ourselves. Each of these examples could easily fill a book.  The whole of the world wide web was enabled by the convergence of just the computing and telecoms industries. The high quality video visor will enable convergence of the real world with the whole of the web, media, and virtual worlds, not just two industry sectors. Augmented reality will be a huge part of that, but even virtual reality and the zillions of variants can then start to be explored too.

In short, the semi-transparent video visor is the missing link. It is the biggest bottleneck now stopping the future arriving. Everything till we get that is a sideshow.

Man vs Machine

 


Politically planned violations of human rights go on in all EU nations, directed at increasing state power and reducing human influence. In Sweden, the FOI (Swedish Defence Research Agency) has for decades been developing remote-control systems for our neurological functions, via bio-chips injected during health care. In FOI's annual report, they describe the project as monitoring and changing the cognitive functions of people throughout their life span, i.e. thoughts, perception and common sense.

 

 Sweden’s most dangerous criminal organization. The FOI ruins both democracy and human rights by connecting the human brain to supercomputers.

The EU-Commission’s Ethical Council chaired by the Swedish Professor Goran Hermerén, in 2005 delivered a 30-page document in protest to the EU-Commission. They declared that this technology was a threat to both democracy and human autonomy in all EU-nations:

 

“Brain-computer interface, or direct brain control: the technologies involved are communication technologies: they take information from the brain and externalize it… Freedom of researchers may conflict with the obligation to safeguard the health of research subjects… The freedom to use implants in one's body might collide with potential negative social effects… How far can such implants be a threat to human autonomy when they are implanted in our brains?… How far should we be subject to the control of such devices by other people using these devices?… The use of implants in order to have a remote control over the will of people should be strictly prohibited… To what extent will this technology be misused by the military?”

 

FOI Director Jan-Olof Lind, chief of the institution founded on the raping of humans. He should be held legally responsible for crimes against human rights.

This planned population project does not only remove human rights, but transforms us via behavioral manipulation with invasive brain technology. FOI wrote in its program: “We have unique tools and methodologies for the modeling of human behavior… The goal is to design systems able to exploit human cognitive potential (i.e. the ability to perceive, understand and organize information) throughout the course of a person's life time… Regardless that the consequences of this for people are a strong physical burden to bear, it includes as well a risk of serious injury.”

The systems function via two-way radio communication, implants and supercomputers. The EU board wrote: “How far should we let implants get ‘under our skins’?… Indeed, individuals are dispossessed of their own bodies and thereby of their own autonomy. The body ends up being under others' control. Individuals are being modified, via various electronic devices, under-skin chips and smart tags, to such an extent that they are increasingly turned into networked individuals… Does a human being cease to be such a ‘being’ in cases where some parts of his or her body – particularly the brain – are substituted and/or supplemented by implants?”

 

Prime Minister Fredrik Reinfeldt has accepted the use of humans in experimental research and behavioral manipulation. He has also supported FOI's declaration of abuse of humans in the name of science.

This is a cause of both madness and anti-social trends in our societies. The brain project has been developed in secrecy for decades; the mentally ill have been put to sleep and implanted with electrodes in their brains, and hospitals are injecting brain chips into unwitting patients on a large scale.

 

Another view of the military conquest of the brain. The Department of Defense has taken over the Karolinska Institute's neuroscience, and FOI managers direct the operations. A similar relationship exists for other parts of the Karolinska Institute, which is similarly controlled.

The threat to human rights, freedom and a civilized society could not be more serious or totalitarian. It does not only place a person behind an iron curtain, but creates a brain barrier for one's own thoughts and personality. The project utilizes many forms of research: biological experiments, neuroscience, social engineering, personality modification etc. This is a techno-political agenda for the future that goes a step further than any traditional dictatorship ever has. It intends to transform us into biological machinery and exploited guinea pigs. Forty years ago, the Swedish state report Choosing Future by Alva Myrdal noted that people would have small chances of protecting their rights against behavior technology. The quote below is from the official state report, SOU 1972:59:

“Research into brain function and behavior is designed primarily to clarify the nature and extent of the changes that can be achieved with the different methods and thus provide information on new opportunities to alleviate human suffering, and new risks of control and modification of behavior against people's will.”

Several leading professors have suggested that antisocial trends have been spread through these military brain systems. The EU Ethical Council asked what danger the military constituted. In fact, the military is the central factor in the game over the human brain. Fredrik Reinfeldt had, even before he became Prime Minister, made up his mind to stand by FOI's abuse. He did the same in 2007, after forming the new government, when he reaffirmed his opinion on this matter.

 

I, Eric R Naeslund, have written this paper and formulated the accusation. I have also supported a network including journalists, organizations and brain activists in a joint project to bring the issue to media attention and everybody's knowledge.

 

 

For the last 40 years, and with practically no media attention, there has been an ongoing debate within the state regarding the brain technology. A former Director General of the Data Inspection Board, Stina Wahlstrom, took up the subject in relation to human rights in the annual book 1989-1990. She wrote: “Obviously, research must include the same ethical values that are generally the basis of law in our society… It is necessary to limit the research and this restriction is needed in a democratic society… People's integrity has been violated, which often means unwittingly or unwillingly being forced to participate in a research project. Legislation for such coercion does not belong in a democratic society.” That was written 20 years ago, yet the technology has been developed so far that the 12 members of the European Ethical Council stated: “…when these implants are within our own brains.” This is a threat intended to include all of us in experimental research and behavioral manipulation.


A Prime Minister who accepts the secret brain project has also launched a battle against people's fundamental freedom and human rights.
In Fredrik Reinfeldt's state, it is important to repeat terms like ‘justice’ and ‘the open society’ as indoctrinated concepts, to hide the reality of building even higher walls of coercion and censorship than any previous despot has done. Metaphorically impenetrable barbed-wire fences are being steadily created in more people's brains, to replace freedom with control. The state's ravaging has FOI as its spearhead. Several times they declare not only that they accept the destruction of humans, but also that certain research is actually based on causing harm, suffering and trauma in people. FOI's takeover of key components of the Karolinska Institute has facilitated the professors' and researchers' projects. A neuro-professor stated in a speech that they couldn't avoid creating disease and death among those they misused. A scary reality that is not uncommon.

In order to describe the subject from an international perspective: the New York Times has had the courage to challenge the U.S. government's covert brain project. Accusing the Pentagon and the CIA of perpetrating the same systems of persecution as the defence departments within the EU, it published three political editorials and 50 articles, and demanded better knowledge and a public debate. The first editorial was published in 1967 under the heading Push Button People. It warned of the possibility of enslaving the brain and wrote that it was likely some nations had plans to suppress their citizens by brain technology. The second came in 1970 under the newly formed term Brain Wave. It argued that we had to update the word ‘brain washing’ to ‘brain waving’, and assumed that Orwell's 1984 had expired and a new and worse danger was at hand: that every newborn child's first experience of life would be neurosurgery, to be implanted with a transmitter and for their lifetime have emotions and reason controlled by the state. The third editorial was published in August 1977, after the New York Times had over the summer published 30 revealing articles on the CIA's brain projects. Under the heading Control CIA Not Behavior, it stated that no one knows how many were injured or killed, and it demanded legal action against those involved and financial compensation for victims.

It's a bigger nightmare here and now, since the experimental program has developed into a permanent state operation. Senator John Glenn spent his final three years in the Senate (1994-1997) trying to regulate the abuses. In his closing speech in January 1997 he called the question one of the most important of our time. Here we stand at one of mankind's crucial crossroads in relation to individual freedom vs. unlimited state power to reduce man to governmental components. Who wants to live his life with chips and manipulated perceptions? No one, of course! The EU Council wrote that it wanted to give people power against the introduction of systems that reduce freedom and autonomy. This topic, more so than any other issue, is reshaping the future, the human brain and life. It must obviously come up for debate in both parliament and the media. We all have a responsibility to contribute; journalists, social activists and of course those members of parliament who are opposed to brain chips, behavior control, human experimentation and undemocratic ideas must make themselves heard too.

Anyone who wants more information concerning the issue – there is extensive information material in both English and Swedish – may contact:

braintexts@hotmail.com

Grid-Based Computing to Fight Neurological Disease


ScienceDaily (Apr. 11, 2012) — Grid computing, long used by physicists and astronomers to crunch masses of data quickly and efficiently, is making the leap into the world of biomedicine. Supported by EU-funding, researchers have networked hundreds of computers to help find treatments for neurological diseases such as Alzheimer’s. They are calling their system the ‘Google for brain imaging.’



Through the Neugrid project, the pan-European grid computing infrastructure has opened up new channels of research into degenerative neurological disorders and other illnesses, while also holding the promise of quicker and more accurate clinical diagnoses of individual patients.

The infrastructure, set up with the support of EUR 2.8 million in funding from the European Commission, was developed over three years by researchers in seven countries. Their aim, primarily, was to give neuroscientists the ability to quickly and efficiently analyse ‘Magnetic resonance imaging’ (MRI) scans of the brains of patients suffering from Alzheimer’s disease. But their work has also helped open the door to the use of grid computing for research into other neurological disorders, and many other areas of medicine.

‘Neugrid was launched to address a very real need. Neurology departments in most hospitals do not have quick and easy access to sophisticated MRI analysis resources. They would have to send researchers to other labs every time they needed to process a scan. So we thought, why not bring the resources to the researchers rather than sending the researchers to the resources,’ explains Giovanni Frisoni, a neurologist and the deputy scientific director of IRCCS Fatebenefratelli, the Italian National Centre for Alzheimer’s and Mental Diseases, in Brescia.

Five years’ work in two weeks

The Neugrid team, led by David Manset from MaatG in France and Richard McClatchey from the University of the West of England in Bristol, laid the foundations for the grid infrastructure, starting with five distributed nodes of 100 cores (CPUs) each, interconnected with grid middleware and accessible via the internet with an easy-to-use web browser interface. To test the infrastructure, the team used datasets of images from the Alzheimer’s Disease Neuroimaging Initiative in the United States, the largest public database of MRI scans of patients with Alzheimer’s disease and a lesser condition termed ‘Mild cognitive impairment’.

‘In Neugrid we have been able to complete the largest computational challenge ever attempted in neuroscience: we extracted 6,500 MRI scans of patients with different degrees of cognitive impairment and analysed them in two weeks,’ Dr. Frisoni, the lead researcher on the project, says, ‘on an ordinary computer it would have taken five years!’.
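As a back-of-the-envelope check on those figures (assuming the five distributed nodes of 100 cores each described above, and taking "five years" as roughly 260 weeks), the quoted numbers imply a speedup of about 130x, or roughly a quarter of ideal linear scaling across 500 cores:

```python
# Quoted Neugrid figures: 6,500 scans analysed in two weeks on the grid,
# versus an estimated five years on one ordinary computer.
serial_weeks = 5 * 52                 # ~260 weeks of serial computation
grid_weeks = 2
speedup = serial_weeks / grid_weeks   # observed speedup over one machine
cores = 5 * 100                       # five nodes of 100 CPUs each
efficiency = speedup / cores          # fraction of ideal linear scaling
```

The gap between the 130x observed and the 500x ideal is typical of grid workloads, where scheduling, data transfer and per-scan overheads eat into raw core count.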

Though Alzheimer’s disease affects about half of all people aged 85 and older, its causes and progression remain poorly understood. Worldwide more than 35 million people suffer from Alzheimer’s, a figure that is projected to rise to over 115 million by 2050 as the world’s population ages.

Patients with early symptoms have difficulty recalling the names of people and places, remembering recent events and solving simple maths problems. As the brain degenerates, patients in advanced stages of the disease lose mental and physical functions and require round-the-clock care.

The analysis of MRI scans conducted as part of the Neugrid project should help researchers gain important insights into some of the big questions surrounding the disease such as which areas of the brain deteriorate first, what changes occur in the brain that can be identified as biomarkers for the disease and what sort of drugs might work to slow or prevent progression.

Neugrid built on research conducted by two prior EU-funded projects: Mammogrid, which set up a grid infrastructure to analyse mammography data, and AddNeuroMed, which sought biomarkers for Alzheimer’s. The team are now continuing their work in a series of follow-up projects.

An expanded grid and a new paradigm

Neugrid for You (N4U), a direct continuation of Neugrid, will build upon the grid infrastructure, integrating it with ‘High performance computing’ (HPC) and cloud computing resources. Using EUR 3.5 million in European Commission funding, it will also expand the user services, algorithm pipelines and datasets to establish a virtual laboratory for neuroscientists.

‘In Neugrid we built the grid infrastructure, addressing technical challenges such as the interoperability of core computing resources and ensuring the scalability of the architecture. In N4U we will focus on the user-facing side of the infrastructure, particularly the services and tools available to researchers,’ Dr. Frisoni says. ‘We want to try to make using the infrastructure for research as simple and easy as possible,’ he continues, ‘the learning curve should not be much more difficult than learning to use an iPhone!’

N4U will also expand the grid infrastructure from the initial five computing clusters through connections with CPU nodes at new sites, including 2,500 CPUs recently added in Paris in collaboration with the French Alternative Energies and Atomic Energy Commission (CEA), and in partnership with ‘Enabling grids for e-science Biomed VO’, a biomedical virtual organisation.

Another follow-up initiative, outGRID, will federate the Neugrid infrastructure, linking it with similar grid computing resources set up in the United States by the Laboratory of Neuro Imaging at the University of California, Los Angeles, and the CBRAIN brain imaging research platform developed by McGill University in Montreal, Canada. A workshop was recently held at the International Telecommunication Union, an agency of the United Nations, to foster this effort.

Dr. Frisoni is also the scientific coordinator of the DECIDE project, which will work on developing clinical diagnostic tools for doctors built upon the Neugrid grid infrastructure. ‘There are a couple of important differences between using brain imaging datasets for research and for diagnosis,’ he explains. ‘Researchers compare many images to many others, whereas doctors are interested in comparing images from a single patient against a wider set of data to help diagnose a disease. On top of that, datasets used by researchers are anonymous, whereas images from a single patient are not and protecting patient data becomes an issue.’

The DECIDE project will address these questions in order to use the grid infrastructure to help doctors treat patients. Though the main focus of all these new projects is on using grid computing for neuroscience, Dr. Frisoni emphasises that the same infrastructure, architecture and technology could be used to enable new research — and new, more efficient diagnostic tools — in other fields of medicine. ‘We are helping to lay the foundations for a new paradigm in grid-enabled medical research,’ he says.

Neugrid received research funding under the European Union’s Seventh Framework Programme (FP7).

Developing a human brain-in-a-chip for a hybrid brain

BBC News

 Tuesday, 11 March 2008, 10:32 GMT 

Chemical brain controls nanobots
By Jonathan Fildes
Science and technology reporter, BBC News

Artificial brain
The researchers have already built larger ‘brains’

A tiny chemical “brain” which could one day act as a remote control for swarms of nano-machines has been invented.

The molecular device – just two billionths of a metre across – was able to control eight of the microscopic machines simultaneously in a test.

Writing in Proceedings of the National Academy of Sciences, scientists say it could also be used to boost the processing power of future computers.

Many experts have high hopes for nano-machines in treating disease.

“If [in the future] you want to remotely operate on a tumour you might want to send some molecular machines there,” explained Dr Anirban Bandyopadhyay of the International Center for Young Scientists, Tsukuba, Japan.

“But you cannot just put them into the blood and [expect them] to go to the right place.”

Dr Bandyopadhyay believes his device may offer a solution. One day they may be able to guide the nanobots through the body and control their functions, he said.

“That kind of device simply did not exist; this is the first time we have created a nano-brain,” he told BBC News.

Computer brain

The machine is made from 17 molecules of the chemical duroquinone. Each one is known as a “logic device”.

How nanotechnology is building the future from the bottom up

They each resemble a ring with four protruding spokes that can be independently rotated to represent four different states.

One duroquinone molecule sits at the centre of a ring formed by the remaining 16. All are connected by chemical bonds, known as hydrogen bonds.

The state of the control molecule at the centre is switched by a scanning tunnelling microscope (STM).

These large machines are a standard part of the nanotechnologist’s tool kit, and allow the viewing and manipulation of atomic surfaces.

Using the STM, the researchers showed they could change the central molecule’s state and simultaneously switch the states of the surrounding 16.

“We instruct only one molecule and it simultaneously and logically instructs 16 others at a time,” said Dr Bandyopadhyay.

The configuration allows four billion different possible combinations of outcome.
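The "four billion" figure follows directly from the arithmetic: 16 surrounding molecules, each settable to one of four rotational states, give 4^16 joint configurations:

```python
# 16 outer duroquinone molecules, each with 4 independently rotatable
# spoke positions, yield 4**16 possible joint configurations.
states_per_molecule = 4
molecules = 16
combinations = states_per_molecule ** molecules
print(combinations)  # 4294967296, i.e. roughly four billion
```

Equivalently, since each molecule encodes two bits (four states), the ring as a whole spans the same state space as a 32-bit word.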

The two nanometre diameter structure was inspired by the parallel communication of glial cells inside a human brain, according to the team.

Robot control

To test the control unit, the researchers simulated docking eight existing nano-machines to the structure, creating a “nano-factory” or a kind of “chemical swiss army knife”.

Nano dust (SPL)

Scientists believe nano-machines could have medical applications

The attached devices, created by other research groups, included the “world’s tiniest elevator”, a molecular platform that can be raised or lowered on command.

The device is about two and a half nanometres (billionths of a metre) high, and the lift moves less than one nanometre up and down.

All eight machines simultaneously responded to a single instruction in the simulation.

“We have clear cut evidence that we can control those machines,” said Dr Bandyopadhyay.

This “one-to-many” communication and the device’s ability to act as a central control unit also raises the possibility of using the device in future computers, he said.

Machines built using devices such as this would be able to process 16 bits of information simultaneously.

Current silicon Central Processing Units (CPUs) can only carry out one instruction at a time, albeit millions of times per second.

The researchers say they have already built faster machines, capable of 256 simultaneous operations, and have designed one capable of 1024.

However, according to Professor Andrew Adamatzky of the University of the West of England (UWE), making a workable computer would be very difficult at the moment.

“As with other implementations of unconventional computers the application is very limited, because they operate [it] using scanning tunnel microscopy,” he said.

But, he said, the work is promising.

“I am sure with time such molecular CPUs can be integrated in molecular robots, so they will simply interact with other molecular parts autonomously.”

Revolution in Artificial Intelligence


 

Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

ScienceDaily (Apr. 2, 2012) — As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who set out the basis for digital computing in the 1930s to anticipate the electronic age, they still quest after a machine as adaptable and intelligent as the human brain.



Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing’s work to its next logical step. She is translating her 1993 discovery of what she has dubbed “Super-Turing” computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in the current issue of Neural Computation.

“This model is inspired by the brain,” she says. “It is a mathematical formulation of the brain’s neural networks with their adaptive abilities.” The authors show that when the model is installed in an environment offering constant sensory stimuli like the real world, and when all stimulus-response pairs are considered over the machine’s lifetime, the Super-Turing model yields an exponentially greater repertoire of behaviors than the classical computer or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

“Each time a Super-Turing machine gets input it literally becomes a different machine,” Siegelmann says. “You don’t want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you’d like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you’d like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That’s what this model can offer.”
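Siegelmann’s remark that every input “literally” makes a different machine can be illustrated with a toy plastic unit whose single real-valued weight is nudged by each stimulus it sees. The class, update rule, and learning rate here are invented for illustration; this is a sketch of input-driven plasticity, not her formal Super-Turing model.

```python
# Toy illustration: a unit whose internal state (its "program") is
# rewritten by every stimulus, so no two inputs are ever processed
# by quite the same machine. Invented for illustration only.

class AdaptiveUnit:
    def __init__(self, weight=0.5, rate=0.1):
        self.weight = weight   # real-valued state defining the machine
        self.rate = rate       # step size of the plastic update

    def step(self, stimulus):
        response = self.weight * stimulus
        # The stimulus itself reshapes the machine: the next
        # computation is performed by a different system.
        self.weight += self.rate * (stimulus - response)
        return response

unit = AdaptiveUnit()
weights = []
for s in [1.0, 1.0, 1.0]:      # even identical stimuli keep changing it
    unit.step(s)
    weights.append(unit.weight)
print(weights)   # the weight drifts with every input
```

A fixed-program (Turing-style) machine would map the three identical stimuli to three identical responses; here each pass leaves the unit permanently altered.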

Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they’ve been told what to expect and how to respond, Siegelmann says. But they can’t take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks.

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called “adaptive inference.” In 1993, Siegelmann, then at Rutgers, showed independently in her doctoral thesis that a very different kind of computation, vastly different from the “calculating computer” model and more like Turing’s prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly after.

“I was young enough to be curious, wanting to understand why the Turing model looked really strong,” she recalls. “I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations.”

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is written aleph-zero (ℵ0), which is also the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that Super-Turing computation has 2^ℵ0 possible behaviors. “If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe,” she explains.
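The comparison in that quote is easy to verify: 2^300 does exceed the commonly cited order-of-magnitude estimate of about 10^80 atoms in the observable universe. Python’s arbitrary-precision integers make the check explicit.

```python
# Verifying the article's arithmetic: 2**300 vs ~10**80 atoms
# in the observable universe (standard order-of-magnitude estimate).

behaviors = 2 ** 300
atoms_estimate = 10 ** 80

print(len(str(behaviors)))          # 2**300 has 91 decimal digits
print(behaviors > atoms_estimate)   # True: it dwarfs the atom count
```

Since 2^300 ≈ 2 × 10^90, it exceeds the atom estimate by roughly ten orders of magnitude.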

The new Super-Turing machine will not only be flexible and adaptable but economical. This means that when presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence, Siegelmann says.

“If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain,” she adds.

Siegelmann and two colleagues recently were notified that they will receive a grant to make the first ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.