The real-life Mind-Matrix…

Mail Online: The real-life Matrix: MIT researchers reveal interface that can allow a computer to plug into the brain 


  • System could deliver optical signals and drugs directly into the brain
  • Could lead to devices for treatment of conditions such as Parkinson’s

It has been the holy grail of science fiction – an interface that allows us to plug our brain into a computer.

Now, researchers at MIT have revealed new fibres, less than the width of a hair, that could make it a reality.

They say their system could deliver optical signals and drugs directly into the brain, along with electrical readouts to continuously monitor the effects of the various inputs.

Christina Tringides, a senior at MIT and member of the research team, holds a sample of the multifunction fiber that could deliver optical signals and drugs directly into the brain, along with electrical readouts to continuously monitor the effects of the various inputs.




‘We’re building neural interfaces that will interact with tissues in a more organic way than devices that have been used previously,’ said MIT’s Polina Anikeeva, an assistant professor of materials science and engineering.

The human brain’s complexity makes it extremely challenging to study not only because of its sheer size, but also because of the variety of signaling methods it uses simultaneously.

Conventional neural probes are designed to record a single type of signaling, limiting the information that can be derived from the brain at any point in time.

Now researchers at MIT may have found a way to change that.

By producing complex fibers that could be less than the width of a hair, they have created a system that could deliver optical signals and drugs directly into the brain, along with simultaneous electrical readout to continuously monitor the effects of the various inputs.

The new technology is described in a paper in the journal Nature Biotechnology.

The new fibers are made of polymers that closely resemble the characteristics of neural tissues, Anikeeva says, allowing them to stay in the body much longer without harming the delicate tissues around them.

To do that, her team made use of novel fiber-fabrication technology pioneered by MIT professor of materials science Yoel Fink and his team, for use in photonics and other applications.

The result, Anikeeva explains, is the fabrication of polymer fibers ‘that are soft and flexible and look more like natural nerves.’

Devices currently used for neural recording and stimulation, she says, are made of metals, semiconductors, and glass, and can damage nearby tissues during ordinary movement.

‘It’s a big problem in neural prosthetics,’ Anikeeva says.


‘They are so stiff, so sharp — when you take a step and the brain moves with respect to the device, you end up scrambling the tissue.’

The key to the technology is making a larger-scale version, called a preform, of the desired arrangement of channels within the fiber: optical waveguides to carry light, hollow tubes to carry drugs, and conductive electrodes to carry electrical signals.

These polymer templates, which can have dimensions on the scale of inches, are then heated until they become soft, and drawn into a thin fiber, while retaining the exact arrangement of features within them.

A single draw of the fiber reduces the cross-section of the material 200-fold, and the process can be repeated, making the fibers thinner each time and approaching nanometer scale.

During this process, Anikeeva says, ‘Features that used to be inches across are now microns.’
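To get a feel for those numbers, the repeated-draw arithmetic can be sketched as below. The 200-fold reduction per draw is the figure quoted above; the one-inch starting feature size is only an illustrative assumption.

```python
# Sketch of the scale reduction in the thermal draw process described above.
# The 200-fold reduction per draw comes from the article; the one-inch
# preform feature size is an illustrative assumption.

def drawn_width(preform_width_m, draws, reduction=200.0):
    """Width of a preform feature after a number of successive draws."""
    return preform_width_m / (reduction ** draws)

preform = 0.0254  # a 1-inch feature, in meters
for n in range(1, 4):
    print(f"after {n} draw(s): {drawn_width(preform, n):.2e} m")
```

One draw takes inch-scale features to roughly 127 microns; a second draw already reaches sub-micron scale, consistent with the article's "approaching nanometer scale".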

Combining the different channels in a single fiber, she adds, could enable precision mapping of neural activity, and ultimately treatment of neurological disorders, that would not be possible with single-function neural probes.

For example, light could be transmitted through the optical channels to enable optogenetic neural stimulation, the effects of which could then be monitored with embedded electrodes.


At the same time, one or more drugs could be injected into the brain through the hollow channels, while electrical signals in the neurons are recorded to determine, in real time, exactly what effect the drugs are having.


The system can be tailored for a specific research or therapeutic application by creating the exact combination of channels needed for that task. ‘You can have a really broad palette of devices,’ Anikeeva says.

While a single preform a few inches long can produce hundreds of feet of fiber, the materials must be carefully selected so they all soften at the same temperature.

The fibers could ultimately be used for precision mapping of the responses of different regions of the brain or spinal cord, Anikeeva says, and ultimately may also lead to long-lasting devices for treatment of conditions such as Parkinson’s disease.

John Rogers, a professor of materials science and engineering and of chemistry at the University of Illinois at Urbana-Champaign who was not involved in this research, says, ‘These authors describe a fascinating, diverse collection of multifunctional fibers, tailored for insertion into the brain where they can stimulate and record neural behaviors through electrical, optical, and fluidic means.

The results significantly expand the toolkit of techniques that will be essential to our development of a basic understanding of brain function.’

Read more:


The $1.3B Quest to Build a Supercomputer Replica of a Human Brain

  • By Jonathon Keats
  • 05.14.13

Even by the standards of the TED conference, Henry Markram’s 2009 TEDGlobal talk was a mind-bender. He took the stage of the Oxford Playhouse, clad in the requisite dress shirt and blue jeans, and announced a plan that—if it panned out—would deliver a fully sentient hologram within a decade. He dedicated himself to wiping out all mental disorders and creating a self-aware artificial intelligence. And the South African–born neuroscientist pronounced that he would accomplish all this through an insanely ambitious attempt to build a complete model of a human brain—from synapses to hemispheres—and simulate it on a supercomputer. Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up.

In the four years since Markram’s speech, he hasn’t backed off a nanometer. The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety—from the molecular level all the way to the mystery of consciousness—is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that’s done, once you’ve built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own.
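The sheer scale of those numbers is worth pausing on. A back-of-envelope estimate using the counts quoted above, and an assumed (purely illustrative) record size per connection, hints at the data problem Markram is taking on:

```python
# Back-of-envelope storage estimate for the connectivity figures quoted
# above. The bytes-per-connection record size is an assumption made for
# illustration, not a figure from the Human Brain Project.

NEURONS = 86e9            # neurons in the human brain (from the article)
CONNECTIONS = 100e12      # connections between them (from the article)
BYTES_PER_CONNECTION = 8  # assumed: a compact (target id, weight) record

total_bytes = CONNECTIONS * BYTES_PER_CONNECTION
print(f"connections per neuron: ~{CONNECTIONS / NEURONS:,.0f}")
print(f"wiring list alone: ~{total_bytes / 1e15:.1f} petabytes")
```

Even before simulating a single spike, merely listing the wiring under these assumptions runs to the better part of a petabyte.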

The way Markram sees it, technology has finally caught up with the dream of AI: Computers are finally growing sophisticated enough to tackle the massive data problem that is the human brain. But not everyone is so optimistic. “There are too many things we don’t yet know,” says Caltech professor Christof Koch, chief scientific officer at one of neuroscience’s biggest data producers, the Allen Institute for Brain Science in Seattle. “The roundworm has exactly 302 neurons, and we still have no frigging idea how this animal works.” Yet over the past couple of decades, Markram’s sheer persistence has garnered the respect of people like Nobel Prize–winning neuroscientist Torsten Wiesel and Sun Microsystems cofounder Andy Bechtolsheim. He has impressed leading figures in biology, neuroscience, and computing, who believe his initiative is important even if they consider some of his ultimate goals unrealistic.

Markram has earned that support on the strength of his work at the Swiss Federal Institute of Technology in Lausanne, where he and a group of 15 postdocs have been taking a first stab at realizing his grand vision—simulating the behavior of a million-neuron portion of the rat neocortex. They’ve broken new ground on everything from the expression of individual rat genes to the organizing principles of the animal’s brain. And the team has not only published some of that data in peer-reviewed journals but also integrated it into a cohesive model so it can be simulated on an IBM Blue Gene supercomputer.

The big question is whether these methods can scale. There’s no guarantee that Markram will be able to build out the rest of the rat brain, let alone the vastly more complex human brain. And if he can, nobody knows whether even the most faithful model will behave like a real brain—that if you build it, it will think. For all his bravado, Markram can’t answer that question. “But the only way you can find out is by building it,” he says, “and just building a brain is an incredible biological discovery process.” This is too big a job for just one lab, so Markram envisions an estimated 6,000 researchers around the world funneling data into his model. His role will be that of prophet, the sort of futurist who presents worthy goals too speculative for most scientists to countenance and then backs them up with a master plan that makes the nearly impossible appear perfectly plausible. Neuroscientists can spend a whole career on a single cell or molecule. Markram will grant them the opportunity and encouragement to band together and pursue the big questions.

And now Markram has funding almost as outsized as his ideas. On January 28, 2013, the European Commission—the governing body of the European Union—awarded him 1 billion euros ($1.3 billion). For decades, neuroscientists and computer scientists have debated whether a computer brain could ever be endowed with the intelligence of a human. It’s not a hypothetical debate anymore. Markram is building it. Will he replicate consciousness? The EU has bet $1.3 billion on it.

Ancient Egyptian surgeons believed that the brain was the “marrow of the skull” (in the graphic wording of a 3,500-year-old papyrus). About 1,500 years later, Aristotle decreed that the brain was a radiator to cool the heart’s “heat and seething.” While neuroscience has come a long way since then, the amount that we know about the brain is still minuscule compared to what we don’t know.

Over the past century, brain research has made tremendous strides, but it’s all atomized and highly specific—there’s still no unified theory that explains the whole. We know that the brain is electric, an intricately connected network, and that electrical signals are modulated by chemicals. In sufficient quantity, certain combinations of chemicals (called neurotransmitters) cause a neuron to fire an electrical signal down a long pathway called an axon. At the end of the axon is a synapse, a meeting point with another neuron. The electrical spike causes neurotransmitters to be released at the synapse, where they attach to receptors in the neighboring neuron, altering its voltage by opening or closing ion channels. At the simplest level, comparisons to a computer are helpful. The synapses are roughly equivalent to the logic gates in a circuit, and axons are the wires. The combination of inputs determines an output. Memories are stored by altering the wiring. Behavior is correlated with the pattern of firing.
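The "synapses as logic gates" analogy above can be made concrete with a classic threshold-neuron sketch: weighted inputs are summed, and the neuron fires when the total crosses a threshold. "Altering the wiring" corresponds to changing the weights. The weights and threshold here are invented for illustration.

```python
# Minimal threshold-neuron model of the analogy above: inputs arrive
# through weighted synapses, and the neuron fires when the weighted sum
# reaches a threshold. Weights and threshold are illustrative values.

def neuron(inputs, weights, threshold):
    """Fire (True) when the weighted input sum reaches the threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# With these weights, the neuron behaves like an AND gate:
# both inputs must be active for it to fire.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # True
print(neuron([1, 0], [0.6, 0.6], 1.0))  # False
```

As the article goes on to say, this level of description is nearly as rudimentary as "skull marrow"; real neurons involve hundreds of ion-channel types that such a sketch ignores.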

Yet when scientists study these systems more closely, such reductionism looks nearly as rudimentary as the Egyptian notions about skull marrow. There are dozens of different neurotransmitters (dopamine and serotonin, to name two) plus as many neuroreceptors to receive them. There are more than 350 types of ion channel, the synaptic plumbing that determines whether a neuron will fire. At its most fine-grained, at the level of molecular biology, neuroscience attempts to describe and predict the effect of neurotransmitters one ion channel at a time. At the opposite end of the scale is functional magnetic resonance imaging, the favorite tool of behavioral neuroscience. Scans can roughly track which parts of the brain are active while watching a ball game or having an orgasm, albeit only by monitoring blood flow through the gray matter: the brain again viewed as a radiator.

Two large efforts—the Allen Brain Atlas and the National Institutes of Health-funded Human Connectome Project—are working at levels in between these two extremes, attempting to get closer to that unified theory that explains the whole. The Allen Brain Atlas is mapping the correlation between specific genes and specific structures and regions in both human and mouse brains. The Human Connectome Project is using noninvasive imaging techniques that show where wires are bundled and how those bundles are connected in human brains.

To add to the brain-mapping mix, President Obama in April announced the launch of an initiative called Brain (commonly referred to as the Brain Activity Map), which he hopes Congress will make possible with a $3 billion NIH budget. (To start, Obama is pledging $100 million of his 2014 budget.) Unlike the static Human Connectome Project, the proposed Brain Activity Map would show circuits firing in real time. At present this is feasible, writes Brain Activity Map participant Ralph Greenspan, “in the little fruit fly Drosophila.”

Even scaled up to human dimensions, such a map would chart only a web of activity, leaving out much of what is known of brain function at a molecular and functional level. For Markram, the American plan is just grist for his billion-euro mill. “The Brain Activity Map and other projects are focused on generating more data,” he writes. “The Human Brain Project is about data integration.” In other words, from his exalted perspective, the NIH and President Obama are just a bunch of postdocs ready to work for him.

When brain implants arrive, will we still be “us”?



By | November 19, 2012, 11:38 AM PST

 Brain Storm

What happens when non-biological implants in our bodies — along the lines of cochlear implants to improve hearing in the deaf — include brain-related devices that might enhance our memories? Will we still be “us”? Will we be more of a cyborg than we were if, say, we had another type of implant? And for those who believe we would not be, at what point do we lose our selves to a more machine-like incarnation? When do we stop being human?

This is all pretty heavy, mind-bending stuff. But while such thoughts might seem like the domain of science fiction, when considering the trends toward smaller and more powerful computer chips and wearable computing, such theoretical musings might be relevant in innovation theory. And Ray Kurzweil, the often controversial author and futurist, has some opinions on this scenario in his new book How to Create a Mind: The Secret of Human Thought Revealed. (Andrew Nusca, SmartPlanet’s editor, recently reported on his talk at the Techonomy conference.) Kurzweil argues that even without any non-biological implants, our physical selves are always changing.

In a very short excerpt on Slate, he draws a simple and compelling (if not exactly parallel) comparison:

We naturally undergo a gradual replacement process. Most of our cells in our body are continually being replaced. (You just replaced 100 million of them in the course of reading the last sentence.) Cells in the inner lining of the small intestine turn over in about a week. The life span of white blood cells range from a few days to a few months, depending on the type. Neurons persist, but their organelles and their constituent molecules turn over within a month.  So you are completely replaced in a matter of months. Are you the same person you were a few months ago?

He argues that as our gadgets become smaller, they could eventually become part of our physical selves just as widely accepted health equipment, inserted into our bodies surgically, is today. Plus, he adds, we are increasingly “outsourcing” more of our information and even our memories — in terms of our precious photos, videos, recordings, and even thoughts, in terms of our writings and other materials — to the cloud, versus storing them in our brains.

It’s possible to clearly imagine such disparate trends emerging and converging in some way. But critics suggest that businesses might be wise to also consider the possible pitfalls of future internal, brain-enhancing machinery as they research and develop it. As Publishers Weekly wrote, in How to Create a Mind, Kurzweil can be “uncritically optimistic about the possibilities of our technologies.” Yet perhaps that’s the strength of his ideas: they can be seen as scene-building narratives that focus on a positive prediction of tomorrow. As Kirkus Reviews pointed out, Kurzweil’s new book can be understood as (italics mine) “a fascinating exercise in futurology.” And, it seems clear, a conversation starter.


What kind of privacy and security measures are needed when a machine can read your mind?


In recent decades, the convergence of information technology, biotechnology and neuroscience has produced entirely new fields of research, which are developing new, previously unknown products and services.

Nanotechnology’s opportunities for computer-brain integration have even given rise to an entirely new field of civil-military research: developing communication between computers and human minds, called synthetic or artificial telepathy.

Understanding how the human brain works is not only leading to innovations in medicine, but also providing new models for energy-efficient, fault-tolerant and adaptive computing technologies.

Research on artificial neural networks (signal-processing systems) and on evolutionary and genetic algorithms means that it is now possible to construct self-learning computers that program themselves, among other things to read the human brain’s memories, feelings and knowledge.

Bioelectronics and miniaturized signal-processing systems placed in the brain may map the brain’s functional architecture and, through the spoken language, work out what the signals mean.

The aim is to create a computer model of the brain that should ultimately answer the questions: what is a person? What is a conscience? What is responsibility? Where do norms and values arise? None of these questions can be answered without copying the brain’s functional architecture.

The Research Council’s ethics committee wrote the following on the medical ethics of nanotechnology in 2004:

Pluses and minuses of nanotechnology:

+ It is good to be able to deliver medicine into the brain across the blood-brain barrier.
+ It is good to insert electrodes into the brain to give sight to the blind or to control a prosthetic hand.
+ It is good to use nanotechnology to stem terrorism against innocent people.
+ It is good for those who can afford it to use nanotechnology for their own health and prosperity.

– It is not good when particles enter the body through the lungs and stress the heart and other organs.
– It is not good if the technology is used to read or influence other people’s thoughts, feelings and intentions.
– It is not good if the same technology is used to control and manage innocent people.
– It is not good for the poor, who do not have access to the advanced technology.


Is it ethical for researchers to retain parts of uploaded minds (copies of a biological consciousness) after the person who was copied is deceased?

Cognitive psychology is the scientific approach that studies the mechanisms underlying human thought processes. Its main areas of work include memory, perception, knowledge representation, language, problem solving, decision making, awareness and intelligence.


In his time, Charles Darwin collected a wealth of material to describe the diversity of species, publishing his great work On the Origin of Species (the theory of evolution) in 1859.

Just as Darwin amassed his material, human neurons and nervous systems are now being recorded bit by bit in order to simulate the human brain and nervous system in computer models. As computers develop enough power, researchers will be able to simulate a human brain in real time.

There are already injectable bioelectronics and multimedia technologies that “hang out” with people for years in order to clone their feelings, memories and knowledge. According to Swedish and European professors, the protection against illegal recording and exploitation of people is not sufficient.

Ethical aspects of so-called ICT (Information and Communication Technologies) implants in the human body have been discussed for several years at the European level by The European Group on Ethics in Science and New Technologies, under the guidance of, among others, Professor Goran Hermerén. One of its recommendations is that the dangers of ICT implants should be debated in the EU member states. This has, in any event, not happened in Sweden.

By using the new technology to read and copy human neurons and nervous systems, computers can learn ontologies and, later, “artificial intelligence”: an intelligence that has no ethical foundations or values.

“Artificial intelligence” is a research area that aims to develop computer-based applications that behave and act in a manner that is indistinguishable from human behavior.

The next step in computer development is computers and software that imitate humans. With their artificial intelligence, these computers will be able to threaten man’s integrity, identity, autonomy and spirituality.

Listen to Anders Holst of the Swedish Institute of Computer Science (SICS) in the SRS radio interview on AI and on simulating brains on computers.

Years of recordings of people, made with the new brain chips and broadband technology, piece by piece visualize a person’s own self, which is then copied to newer, more powerful computers.

In a radio program, Asa Wikfors, associate professor of theoretical philosophy, Lars Bergstrom, professor emeritus of philosophy, and Martin Ingvar, professor of neurophysiology, talk about the mind, brain implants and how our view of the self may change in the future.

According to many sources, some of the research that uses brain implants (ICT) to clone the human brain is conducted criminally, without informed consent. This is probably because an ethics application could never be approved for a life-long computerized study of brain implants in which the consequences for the individual outweigh the benefits of the research.

Illegal computer cloning could lead to unprecedented physical, psychological and legal consequences for individuals and society. It also drives the researchers involved to do everything in their power to keep the technology by which ICT implants read and copy people’s thoughts from being disclosed.

Nanoscience and biological implants can lead to serious problems if the technology is used in ways that violate people’s privacy. It is almost impossible to detect electronic components once they are incorporated in nanoscale particles. Businesses and governments will be able to use this new technology to find out things about people in a whole new way. Nanotechnology will therefore also require new laws and regulations, just as the development of computers contributed to the enactment of laws such as the Personal Data Act.

Swedish professors also ask how the unauthorized use of nanotechnology can be prevented and controlled even where legislation exists. Traceability, or rather the scarcity of it, is a perennial topic in the debate on ethics, risk and safety. Another recurring theme is surveillance: how nanotechnology can be used for monitoring purposes where the individual or group is unaware of the surveillance and unable to find out whether they are being monitored or not.

According to the EU, the government and its ethics advisers have a responsibility to inform and educate the public in this new area of research. This has not been done, even though the government was aware of the technologies as early as 2003.

Some of today’s important scientific breakthroughs in nanotechnology, bioelectronics and information technology are not published, because established academic, financial and political centers of power want to preserve their interests and protect unethical research on humans; opportunities that disclosure would reveal are thereby lost. The research and its implications are also misrepresented in relation to the judiciary and traditional medical diagnostics, and it violates all human rights conventions.

If Sweden and Europe, through their political gatekeepers, stopped favoring confidential, unethical civil-military research on the civilian population in the development of software and network technologies for medical and military surveillance, research could instead publish its progress and the insights of the new paradigm.


In this way, Sweden could use this progress to solve many of its current political problems and carry out pioneering international work for the benefit of all mankind.

We want this website to create awareness that many of the new technologies described here have been developed on the civilian population in Sweden and the rest of the world, without their consent and/or knowledge, and this for many years.

Mindtech cooperates with the media and the Swedish Church to try to advance the ethical debate that the EU research council and Professor Goran Hermerén initiated on this topic back in 2004, an ethical debate that has since been suppressed by the research community and its representatives.

Do you know someone who is “multimedia online” but does not dare talk about it?

A person who alleges that a paradigm shift in computer-brain integration and multimedia technology is already here can easily find that no one believes them.

We are aware that portions of the information here may sound like pure science fiction, but it is already reality.

By: Magnus Olsson

See also:


CNN: Your Behavior Will Be Controlled by a Brain Chip!


“Smart phone will be implanted”

Paul Joseph Watson
October 9, 2012

A new CNN article predicts that within 25 years people will have embedded microchips within their brain that will allow their behavior to be controlled by a third party.

The story, entitled Smartphone of the future will be in your brain, offers a semi-satirical look at transhumanism and the idea of humans becoming part cyborg by having communications devices implanted in their body.

Predicting first the widespread popularity of wearable smartphones, already in production by Google, the article goes on to forecast how humans will communicate by the end of the century.

“Technology takes a huge leap in 25 years. Microchip can be installed directly in the user’s brain. Apple, along with a handful of companies, makes these chips. Thoughts connect instantly when people dial to “call” each other. But there’s one downside: “Advertisements” can occasionally control the user’s behavior because of an impossible-to-resolve glitch. If a user encounters this glitch — a 1 in a billion probability — every piece of data that his brain delivers is uploaded to companies’ servers so that they may “serve customers better.”

The tone of the CNN piece is somewhat sophomoric, but the notion that humans will eventually merge with machines as the realization of the technological singularity arrives is one shared by virtually all top futurists.

Indeed, people like inventor and futurist Ray Kurzweil don’t think we’ll have to wait 25 years to see smartphones implanted in the brain. He sees this coming to pass within just 20 years.

In his 1999 book The Age of Spiritual Machines, Kurzweil successfully predicted the arrival of the iPad, Kindle, iTunes, YouTube and on-demand services like Netflix.

By 2019, Kurzweil forecasts that wearable smartphones will be all the rage, and that by 2029, computers and cellphones will be implanted in people’s eyes and ears, creating a “human underclass” that is viewed as backwards and unproductive because it refuses to acquiesce to the singularity.

Although the CNN piece doesn’t even foresee implantable brain chips until the end of the century, Kurzweil’s predictions go far beyond this. According to him, by 2099, the entire planet is run by artificially intelligent computer systems which are smarter than the entire human race combined – similar to the Skynet system fictionalized in the Terminator franchise.

Humans who have resisted altering themselves by becoming part-cyborg will be ostracized from society.

“Even among those human intelligences still using carbon-based neurons, there is ubiquitous use of neural implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do,” writes Kurzweil.

Kurzweil’s forecasts are echoed by Sun Microsystems’ Bill Joy, who in a 2000 Wired Magazine article entitled Why The Future Doesn’t Need Us, predicted that technological advancements in robotics would render most humans obsolete.

As a result the elite, “may simply decide to exterminate the mass of humanity,” wrote Joy.


Paul Joseph Watson is the editor and writer for Prison. He is the author of Order Out Of Chaos. Watson is also a regular fill-in host for The Alex Jones Show and Infowars Nightly News.

IBM gets close to mimicking a human brain!

Man vs. machine

Computer chip from IBM gets close to mimicking a human brain

By Jordan Robertson Friday, August 19, 2011


Computers, like humans, can learn. But when Google tries to fill in your search box based only on a few keystrokes, or your iPhone predicts words as you type a text message, it’s only a narrow mimicry of what the human brain is capable of.

The challenge in training a machine to behave like a human brain is technological and physiological, testing the limits of computer science and neuroscience. But IBM researchers say they’ve made a key step toward combining the two worlds.


The company announced it has built two prototype chips that it says process data more like the way humans digest information than the chips that now power PCs and supercomputers.

The chips represent a milestone in a six-year project that has involved 100 researchers and $41 million in funding from the government’s Defense Advanced Research Projects Agency, or DARPA. IBM has also committed an undisclosed amount of money.

The prototypes offer further evidence of the growing importance of “parallel processing,” or computers doing multiple tasks simultaneously. That is important for rendering graphics and crunching large amounts of data.
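To make the idea of “parallel processing” concrete, here is a minimal, purely illustrative Python sketch (not IBM code; the chunked data and the squaring workload are invented for the example) showing the same data-crunching job done serially and then dispatched across worker threads:

```python
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    # stand-in for per-chunk work (e.g. summing squared sensor readings)
    return sum(x * x for x in chunk)

# eight chunks of data to process
data = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]

# serial: one chunk after another
serial = [crunch(c) for c in data]

# parallel: all chunks dispatched at once to a pool of workers
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(crunch, data))

assert serial == parallel  # same results, computed concurrently
```

The point is only the shape of the computation: independent pieces of work that can run side by side, which is what makes graphics rendering and large-scale data crunching fast.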

The uses of the IBM chips so far are prosaic, such as steering a simulated car through a maze, or playing Pong. It may be a decade or longer before the chips make their way out of the lab and into actual products.


But what’s important is not what the chips are doing, but how they’re doing it, said Giulio Tononi, a professor of psychiatry at the University of Wisconsin at Madison who worked with IBM on the project.

The chips’ ability to adapt to types of information that they weren’t specifically programmed to expect is a key feature.

“There’s a lot of work to do still, but the most important thing is usually the first step,” Tononi said in an interview. “And this is not one step; it’s a few steps.”

Technologists have long imagined computers that learn like humans. Your iPhone or Google’s servers can be programmed to predict certain behavior based on past events. But the techniques being explored by IBM and other companies and university research labs around “cognitive computing” could lead to chips that are better able to adapt to unexpected information.

IBM’s interest in the chips lies in their ability to potentially help process real-world signals, such as temperature or sound or motion, and make sense of them for computers.


IBM, based in Armonk, N.Y., is a leader in a movement to link physical infrastructure, such as power plants or traffic lights, and information technology, such as servers and software that help regulate their functions. Such projects can be made more efficient with tools to monitor the myriad analog signals present in those environments.

Dharmendra Modha, project leader for IBM Research, said the new chips have parts that behave like digital “neurons” and “synapses” that make them different from other chips. Each “core,” or processing engine, has computing, communication and memory functions.

“You have to throw out virtually everything we know about how these chips are designed,” he said. “The key, key, key difference really is the memory and the processor are very closely brought together. There’s a massive, massive amount of parallelism.”
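IBM has not published the chip’s internals here, but the flavor of a digital “neuron” with its “synapses” and memory held right next to the computation can be sketched with a textbook leaky integrate-and-fire model. Everything in this Python snippet (class name, weights, thresholds) is invented for illustration:

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron. The synaptic weights and
    membrane potential (the 'memory') live in the same object as the
    update rule, loosely echoing the co-located memory/processor design
    Modha describes above."""

    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # synaptic strengths
        self.potential = 0.0        # membrane potential
        self.threshold = threshold
        self.leak = leak

    def step(self, inputs):
        # leak the stored potential, then integrate the weighted input spikes
        self.potential = self.potential * self.leak + sum(
            w * s for w, s in zip(self.weights, inputs))
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # emit a spike
        return 0

neuron = LIFNeuron([0.4, 0.3, 0.5])
# drive the neuron with the same input pattern for three time steps
spikes = [neuron.step([1, 1, 0]) for _ in range(3)]
```

Unlike a conventional program, nothing here is fetched from a distant memory bank each cycle; state and computation sit together, which is the parallelism Modha is pointing at.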

The project is part of the same research that led to IBM’s announcement in 2009 that it had simulated a cat’s cerebral cortex, the thinking part of the brain, using a massive supercomputer. Using progressively bigger supercomputers, IBM previously had simulated 40 percent of a mouse’s brain in 2006, a rat’s full brain in 2007, and 1 percent of a human’s cerebral cortex in 2009.

A computer with the power of a human brain is not yet near. But Modha said the latest development is an important step.

“It really changes the perspective from ‘What if?’ to ‘What now?’” Modha said. “Today we proved it was possible. There have been many skeptics, and there will be more, but this completes in a certain sense our first round of innovation.”

– Associated Press

How to Use Light to Control the Brain


Stephen Dougherty, Scientific American
01 April 2012

In the film Amélie, the main character is a young eccentric woman who attempts to change the lives of those around her for the better. One day Amélie finds an old rusty tin box of childhood mementos in her apartment, hidden by a boy decades earlier. After tracking down Bretodeau, the owner, she lures him to a phone booth where he discovers the box. Upon opening the box and seeing a few marbles, a sudden flash of vivid images comes flooding into his mind. The next thing you know, Bretodeau is transported to a time when he was in the schoolyard scrambling to stuff his pockets with hundreds of marbles while a teacher yelled at him to hurry up.

We have all experienced this: a seemingly insignificant trigger, a scent, a song, or an old photograph transports us to another time and place. Now a group of neuroscientists has investigated the fascinating question: can a few neurons trigger a full memory?

In a new study, published in Nature, a group of researchers from MIT showed for the first time that it is possible to activate a memory on demand, by stimulating only a few neurons with light, using a technique known as optogenetics. Optogenetics is a powerful technology that enables researchers to control genetically modified neurons with a brief pulse of light.

To artificially turn on a memory, researchers first set out to identify the neurons that are activated when a mouse is making a new memory. To accomplish this, they focused on a part of the brain called the hippocampus, known for its role in learning and memory, especially for discriminating places. Then they inserted a gene that codes for a light-sensitive protein into hippocampal neurons, enabling them to use light to control the neurons.

With the light-sensitive proteins in place, the researchers gave the mouse a new memory. They put the animal in an environment where it received a mild foot shock, eliciting the normal fear behavior in mice: freezing in place. The mouse learned to associate a particular environment with the shock.

Next, the researchers attempted to answer the big question: Could they artificially activate the fear memory? They directed light on the hippocampus, activating a portion of the neurons involved in the memory, and the animals showed a clear freezing response. Stimulating the neurons appears to have triggered the entire memory.

The researchers performed several key tests to confirm that it was really the original memory recalled. They tested mice with the same light-sensitive protein but without the shock; they tested mice without the light-sensitive protein; and they tested mice in a different environment not associated with fear. None of these tests yielded the freezing response, reinforcing the conclusion that the pulse of light indeed activated the old fear memory.
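The logic of the experiment, tag the cells active during learning, then reactivate just those cells, can be captured in a toy simulation. To be clear, this is not how the research was done (it involved genetics and laser hardware, not code), and every name and number below is invented for illustration:

```python
import random

# toy model: a "hippocampus" of 100 neurons; a memory (engram) is the
# set of neurons that fired while the memory was being formed
hippocampus = list(range(100))

def form_memory(context):
    # in the real experiment, the active cells are tagged with a
    # light-sensitive protein; here the context just seeds which fire
    random.seed(context)
    return set(random.sample(hippocampus, 10))

def light_pulse(stimulated, engram):
    # "freezing" occurs only if the stimulated cells overlap the engram
    return len(stimulated & engram) > 0

fear_engram = form_memory("shock_chamber")

# stimulating the tagged cells recalls the memory -> freezing
assert light_pulse(fear_engram, fear_engram)

# control condition: cells outside the fear engram produce no freezing,
# mirroring the study's negative controls
other_cells = form_memory("neutral_chamber") - fear_engram
assert not light_pulse(other_cells, fear_engram)
```

The controls in the study play the same role as the second assertion: light alone, or light on the wrong cells, should not produce the behavior.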

In 2010, optogenetics was named the scientific Method of the Year by the journal Nature Methods. The technology was introduced in 2004 by a research group at Stanford University led by Karl Deisseroth, a collaborator on this research. The critical advantage that optogenetics provides over traditional neuroscience techniques, like electrical stimulation or chemical agents, is speed and precision. Electrical stimulation and chemicals can only be used to alter neural activity in nonspecific ways and without precise timing. Light stimulation enables control over a small subset of neurons on a millisecond time scale.

Over the last several years, optogenetics has provided powerful insights into the neural underpinnings of brain disorders like depression, Parkinson’s disease, anxiety, and schizophrenia. Now, in the context of memory research, this study shows that it is possible to artificially stimulate a few neurons to activate an old memory, controlling an animal’s behavior without any sensory input. This is significant because it provides a new approach to understanding how complex memories are formed in the first place.

Lest ye worry about implanted memories and mind control, this technology is still a long way from reaching any human brains. Nevertheless, the first small steps towards the clinical application of optogenetics have already begun. A group at Brown University, for example, is working on a wireless optical electrode that can deliver light to neurons in the human brain. Who knows: someday, instead of new technology enabling us to erase memories à la Eternal Sunshine of the Spotless Mind, we may actually undergo memory enhancement therapy with a brief session under the lights.

This article was first published on Scientific American. © 2012 Scientific American. Follow Scientific American on Twitter @SciAm and @SciamBlogs for the latest in science, health and technology news.