Scientists to build ‘human brain’: Supercomputer will simulate the entire mind and will help fight against brain diseases


  • The ‘brain’ will take 12 years to build
  • It will feature thousands of three-dimensional images built around a semi-circular ‘cockpit’

PUBLISHED: 18:27 GMT, 15 April 2012 | UPDATED: 19:14 GMT, 15 April 2012 

The human brain’s power could rival any machine. And now scientists are trying to build one using the world’s most powerful computer.

It is intended to combine all the information so far uncovered about its mysterious workings – and replicate them on a screen, right down to the level of individual cells and molecules.

If it works, it could revolutionise our understanding of devastating neurological diseases such as Alzheimer’s and Parkinson’s, and even shed light on how we think and make decisions.

 
Ambitious: Scientists are hoping to build a computer that will simulate the entire human brain

Leading the project is Professor Henry Markram, based in Switzerland, who will be working with scientists from across Europe, including the Wellcome Trust Sanger Institute in Cambridge.

They hope to complete it within 12 years. He said: ‘The complexity of the brain, with its billions of interconnected neurons, makes it hard for neuroscientists to truly understand how it works.

‘Simulating it will make it much easier – allowing them to manipulate and measure any aspect of the brain.’

Housed at a facility in Düsseldorf in Germany, the ‘brain’ will feature thousands of three-dimensional images built around a semi-circular ‘cockpit’ so scientists can virtually ‘fly’ around different areas and watch how they communicate with each other.

It aims to integrate all the neuroscience research being carried out all over the world – an estimated 60,000 scientific papers every year – into one platform.

The project has received some funding from the EU and has been shortlisted for a 1 billion euro (£825million) EU grant which will be decided next month.

When complete it could be used to test new drugs, potentially cutting the time needed to license them compared with lengthy human trials, and pave the way for more intelligent robots and computers.

There are inevitably concerns about the consequences of this ‘manipulation’ and creating computers which can think for themselves. In Germany the media have dubbed the researchers ‘Team Frankenstein’.

 
Graphic: The various areas of the human brain (Corbis)

But Prof Markram said: ‘This will, when successful, help two billion people annually who suffer from some type of brain impairment.

‘This is one of the three grand challenges for humanity. We need to understand earth, space and the brain. We need to understand what makes us human.’

Over the past 15 years his team have painstakingly studied the brain and managed to produce a computer simulation of a cortical column – one of the small building blocks of a mammal’s brain.
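
To give a flavour of what simulating a brain ‘right down to the level of individual cells’ involves, below is a minimal leaky integrate-and-fire neuron, a textbook simplification. It is a generic sketch for illustration only, far cruder than the team’s detailed cortical-column models, and every parameter value in it is an assumption.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a generic textbook sketch,
# NOT the Blue Brain team's far more detailed cell models. All parameter
# values below are illustrative assumptions.
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-65e-3, resistance=1e8):
    """Integrate membrane voltage over time; emit a spike at threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leak pulls v back toward rest; input current pushes it up.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_thresh:            # threshold crossed: fire and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 0.3 nA input for 200 ms yields a regular spike train.
spikes = simulate_lif(np.full(2000, 0.3e-9))
print(f"{len(spikes)} spikes in 200 ms")
```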

They have also simulated part of a rat’s brain using a computer. But the human brain is a totally different proposition.

High energy consumption: The computer will require the output of a nuclear power station like Sellafield, pictured here

Read more: http://www.dailymail.co.uk/sciencetech/article-2130124/Scientists-build-human-brain-Supercomputer-simulate-mind-exactly-help-fight-brain-diseases.html

Revolution in Artificial Intelligence


 

Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

ScienceDaily (Apr. 2, 2012) — As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who set out the basis for digital computing in the 1930s and so anticipated the electronic age, they still quest after a machine as adaptable and intelligent as the human brain.



Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing’s work to its next logical step. She is translating her 1993 discovery of what she has dubbed “Super-Turing” computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in the current issue of Neural Computation.

“This model is inspired by the brain,” she says. “It is a mathematical formulation of the brain’s neural networks with their adaptive abilities.” The authors show that when the model is installed in an environment offering constant sensory stimuli like the real world, and when all stimulus-response pairs are considered over the machine’s lifetime, the Super-Turing model yields an exponentially greater repertoire of behaviors than the classical computer or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

“Each time a Super-Turing machine gets input it literally becomes a different machine,” Siegelmann says. “You don’t want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you’d like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you’d like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That’s what this model can offer.”
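
Her published construction is based on analog recurrent neural networks with real-valued weights. As a loose, hypothetical illustration of the idea that every input ‘literally’ changes the machine, the sketch below runs a tiny recurrent network whose weights receive a Hebbian-style nudge on each input, so the input-to-output mapping itself drifts as stimuli arrive; the learning rule and all constants are assumptions, not her model.

```python
# Toy illustration of a machine that changes with every input: a small
# recurrent network whose weights are nudged (Hebbian-style) at each step.
# An assumed sketch only, NOT Siegelmann's published Super-Turing model.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # recurrent units
W = rng.normal(scale=0.3, size=(N, N))   # real-valued recurrent weights
state = np.zeros(N)
ETA = 0.01                               # adaptation rate

def step(x):
    """Process one scalar input and adapt the weights in the same step."""
    global state, W
    state = np.tanh(W @ state + x)                  # recurrent state update
    W = 0.999 * (W + ETA * np.outer(state, state))  # the weights drift too
    return state

# The same stimulus presented twice draws different responses, because the
# first presentation altered both the state and the weights.
first = step(1.0).copy()
second = step(1.0)
print("response drift:", np.linalg.norm(second - first))
```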

Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they’ve been told what to expect and how to respond, Siegelmann says. But they can’t take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks.

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called “adaptive inference.” In 1993, Siegelmann, then at Rutgers, showed independently in her doctoral thesis that a very different kind of computation, vastly different from the “calculating computer” model and more like Turing’s prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly after.

“I was young enough to be curious, wanting to understand why the Turing model looked really strong,” she recalls. “I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations.”

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is represented by the notation aleph-zero (ℵ₀), which also counts the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that Super-Turing computation has 2^ℵ₀ possible behaviors. “If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe,” she explains.
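
As a quick sanity check on those figures (my arithmetic, not the article’s), the two comparisons can be written out:

```latex
% Countable vs. uncountable behavior repertoires
|\mathbb{N}| = \aleph_0
  \quad \text{(classical Turing machines: countably many behaviors)}
\left| 2^{\mathbb{N}} \right| = 2^{\aleph_0} > \aleph_0
  \quad \text{(Super-Turing: uncountably many)}

% The finite analogy quoted above
2^{300} = (2^{10})^{30} \approx (1.02 \times 10^{3})^{30}
        \approx 2 \times 10^{90} > 10^{80}
  \quad \text{(estimated atoms in the observable universe)}
```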

The new Super-Turing machine will not only be flexible and adaptable but economical. This means that when presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence, Siegelmann says.

“If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain,” she adds.

Siegelmann and two colleagues recently were notified that they will receive a grant to make the first ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.

Efficiency in Multi-Core Chips

New Bandwidth Management Techniques Boost Operating Efficiency in Multi-Core Chips

ScienceDaily (May 25, 2011) — Researchers from North Carolina State University have developed two new techniques to help maximize the performance of multi-core computer chips by allowing them to retrieve data more efficiently, which boosts chip performance by 10 to 40 percent.


To do this, the new techniques allow multi-core chips to deal with two things more efficiently: allocating bandwidth and “prefetching” data.

Multi-core chips are supposed to make our computers run faster. Each core on a chip is its own central processing unit, or computer brain. However, there are things that can slow these cores. For example, each core needs to retrieve data from memory that is not stored on its chip. There is a limited pathway — or bandwidth — these cores can use to retrieve that off-chip data. As chips have incorporated more and more cores, the bandwidth has become increasingly congested — slowing down system performance.

One of the ways to expedite core performance is called prefetching. Each chip has its own small memory component, called a cache. In prefetching, the cache predicts what data a core will need in the future and retrieves that data from off-chip memory before the core needs it. Ideally, this improves the core’s performance. But, if the cache’s prediction is inaccurate, it unnecessarily clogs the bandwidth while retrieving the wrong data. This actually slows the chip’s overall performance.
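
A rough way to see why a bad predictor hurts (a toy model with assumed numbers, not the hardware studied in the paper): a simple “next-line” prefetcher fetches block N+1 whenever block N is accessed, and every prefetched block that is never used has still consumed off-chip bandwidth.

```python
# Toy next-line prefetcher: each access to block N also fetches block N+1.
# Wrong guesses still consume off-chip bandwidth. Illustrative assumption,
# not the prefetchers modeled in the paper.
import random

def run_trace(trace):
    cache, prefetched = set(), set()
    fetches = useful = 0
    for block in trace:
        if block not in cache:
            cache.add(block)
            fetches += 1                 # demand fetch: uses bandwidth
        elif block in prefetched:
            useful += 1                  # the prefetcher guessed right
            prefetched.discard(block)
        if block + 1 not in cache:
            cache.add(block + 1)
            prefetched.add(block + 1)
            fetches += 1                 # prefetch: also uses bandwidth
    return fetches, useful, len(prefetched)   # leftovers were wasted

random.seed(1)
traces = {
    "streaming": list(range(100)),                   # predictable accesses
    "scattered": random.sample(range(10_000), 100),  # unpredictable accesses
}
for name, trace in traces.items():
    fetches, useful, wasted = run_trace(trace)
    print(f"{name}: fetches={fetches}, useful={useful}, wasted={wasted}")
```

On the streaming trace nearly every prefetch is used; on the scattered trace nearly all of them waste bandwidth, which is the kind of situation the criteria below are designed to detect.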

“The first technique relies on criteria we developed to determine how much bandwidth should be allotted to each core on a chip,” says Dr. Yan Solihin, associate professor of electrical and computer engineering at NC State and co-author of a paper describing the research. Some cores require more off-chip data than others. The researchers use easily-collected data from the hardware counters on each chip to determine which cores need more bandwidth. “By better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance,” Solihin says.
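
A minimal sketch of the general idea, assuming a simple proportional rule (the paper’s actual allocation criteria are more sophisticated): read each core’s hardware counters, treat its cache-miss count as its off-chip demand, and split the shared bandwidth accordingly.

```python
# Illustrative bandwidth partitioning: each core gets a share of off-chip
# bandwidth proportional to its measured demand (cache misses per interval).
# The proportional rule and the numbers are assumptions, not the paper's.

def partition_bandwidth(total_gbps, misses_per_core):
    total = sum(misses_per_core.values())
    return {core: total_gbps * m / total for core, m in misses_per_core.items()}

# Hypothetical per-core counter readings from the last sampling interval:
counters = {"core0": 120_000, "core1": 45_000, "core2": 310_000, "core3": 25_000}
for core, gbps in partition_bandwidth(25.6, counters).items():  # 25.6 GB/s bus
    print(f"{core}: {gbps:.1f} GB/s")
```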

“The second technique relies on a set of criteria we developed for determining when prefetching will boost performance and should be utilized,” Solihin says, “as well as when prefetching would slow things down and should be avoided.” These criteria also use data from each chip’s hardware counters. The prefetching criteria would allow manufacturers to make multi-core chips that operate more efficiently, because each of the individual cores would automatically turn prefetching on or off as needed.
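
The per-core toggle might look something like the sketch below, where prefetching stays on only while its measured accuracy justifies the bandwidth it consumes; the thresholds and the decision rule here are assumptions for illustration, not the paper’s criteria.

```python
# Illustrative per-core prefetch on/off decision driven by hardware counters.
# Thresholds and the rule itself are assumptions, not the paper's criteria.

def should_prefetch(useful_prefetches, total_prefetches, bus_utilization,
                    min_accuracy=0.5, max_utilization=0.8):
    """Keep prefetching while guesses mostly hit, or while the bus is idle
    enough that wrong guesses cost little."""
    if total_prefetches == 0:
        return True                      # no evidence yet: leave it on
    accuracy = useful_prefetches / total_prefetches
    return accuracy >= min_accuracy or bus_utilization < max_utilization

print(should_prefetch(900, 1000, bus_utilization=0.9))  # accurate: keep on
print(should_prefetch(100, 1000, bus_utilization=0.9))  # wasteful + congested: off
```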

Utilizing both sets of criteria, the researchers were able to boost multi-core chip performance by 40 percent, compared to multi-core chips that do not prefetch data, and by 10 percent over multi-core chips that always prefetch data.

The paper, “Studying the Impact of Hardware Prefetching and Bandwidth Partitioning in Chip-Multiprocessors,” will be presented June 9 at the International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS) in San Jose, Calif. The paper was co-authored by Dr. Fang Liu, a former Ph.D. student at NC State. The research was supported, in part, by the National Science Foundation.

NC State’s Department of Electrical and Computer Engineering is part of the university’s College of Engineering.
