The $1.3B Quest to Build a Supercomputer Replica of a Human Brain
Even by the standards of the TED conference, Henry Markram’s 2009 TEDGlobal talk was a mind-bender. He took the stage of the Oxford Playhouse, clad in the requisite dress shirt and blue jeans, and announced a plan that—if it panned out—would deliver a fully sentient hologram within a decade. He dedicated himself to wiping out all mental disorders and creating a self-aware artificial intelligence. And the South African–born neuroscientist pronounced that he would accomplish all this through an insanely ambitious attempt to build a complete model of a human brain—from synapses to hemispheres—and simulate it on a supercomputer. Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up.
In the four years since Markram’s speech, he hasn’t backed off a nanometer. The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety—from the molecular level all the way to the mystery of consciousness—is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that’s done, once you’ve built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own.
The way Markram sees it, technology has finally caught up with the dream of AI: Computers are finally growing sophisticated enough to tackle the massive data problem that is the human brain. But not everyone is so optimistic. “There are too many things we don’t yet know,” says Caltech professor Christof Koch, chief scientific officer at one of neuroscience’s biggest data producers, the Allen Institute for Brain Science in Seattle. “The roundworm has exactly 302 neurons, and we still have no frigging idea how this animal works.” Yet over the past couple of decades, Markram’s sheer persistence has garnered the respect of people like Nobel Prize–winning neuroscientist Torsten Wiesel and Sun Microsystems cofounder Andy Bechtolsheim. He has impressed leading figures in biology, neuroscience, and computing, who believe his initiative is important even if they consider some of his ultimate goals unrealistic.
Markram has earned that support on the strength of his work at the Swiss Federal Institute of Technology in Lausanne, where he and a group of 15 postdocs have been taking a first stab at realizing his grand vision—simulating the behavior of a million-neuron portion of the rat neocortex. They’ve broken new ground on everything from the expression of individual rat genes to the organizing principles of the animal’s brain. And the team has not only published some of that data in peer-reviewed journals but also integrated it into a cohesive model so it can be simulated on an IBM Blue Gene supercomputer.
The big question is whether these methods can scale. There’s no guarantee that Markram will be able to build out the rest of the rat brain, let alone the vastly more complex human brain. And if he can, nobody knows whether even the most faithful model will behave like a real brain—that if you build it, it will think. For all his bravado, Markram can’t answer that question. “But the only way you can find out is by building it,” he says, “and just building a brain is an incredible biological discovery process.” This is too big a job for just one lab, so Markram envisions an estimated 6,000 researchers around the world funneling data into his model. His role will be that of prophet, the sort of futurist who presents worthy goals too speculative for most scientists to countenance and then backs them up with a master plan that makes the nearly impossible appear perfectly plausible. Neuroscientists can spend a whole career on a single cell or molecule. Markram will grant them the opportunity and encouragement to band together and pursue the big questions.
And now Markram has funding almost as outsized as his ideas. On January 28, 2013, the European Commission—the governing body of the European Union—awarded him 1 billion euros ($1.3 billion). For decades, neuroscientists and computer scientists have debated whether a computer brain could ever be endowed with the intelligence of a human. It’s not a hypothetical debate anymore. Markram is building it. Will he replicate consciousness? The EU has bet $1.3 billion on it.
Ancient Egyptian surgeons believed that the brain was the “marrow of the skull” (in the graphic wording of a 3,500-year-old papyrus). About 1,500 years later, Aristotle decreed that the brain was a radiator to cool the heart’s “heat and seething.” While neuroscience has come a long way since then, the amount that we know about the brain is still minuscule compared to what we don’t know.
Over the past century, brain research has made tremendous strides, but it’s all atomized and highly specific—there’s still no unified theory that explains the whole. We know that the brain is electric, an intricately connected network, and that electrical signals are modulated by chemicals. In sufficient quantity, certain combinations of chemicals (called neurotransmitters) cause a neuron to fire an electrical signal down a long pathway called an axon. At the end of the axon is a synapse, a meeting point with another neuron. The electrical spike causes neurotransmitters to be released at the synapse, where they attach to receptors in the neighboring neuron, altering its voltage by opening or closing ion channels. At the simplest level, comparisons to a computer are helpful. The synapses are roughly equivalent to the logic gates in a circuit, and axons are the wires. The combination of inputs determines an output. Memories are stored by altering the wiring. Behavior is correlated with the pattern of firing.
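The circuit analogy above can be made concrete with a toy threshold neuron, a deliberately crude illustration of the "logic gate" picture in the text, not the Human Brain Project's model (the function name and weights here are hypothetical):

```python
# Toy threshold neuron: sums weighted synaptic inputs and "fires"
# when the accumulated voltage crosses a threshold. Positive weights
# play the role of excitatory synapses, negative weights inhibitory ones.

def neuron_output(inputs, weights, threshold=1.0):
    """Return 1 (spike) if the weighted input sum reaches threshold, else 0."""
    voltage = sum(x * w for x, w in zip(inputs, weights))
    return 1 if voltage >= threshold else 0

# Two active excitatory inputs (0.6 + 0.7) overcome an inactive
# inhibitory one, so the neuron fires.
spike = neuron_output(inputs=[1, 1, 0], weights=[0.6, 0.7, -0.5])
print(spike)  # prints 1
```

Real neurons, of course, are exactly what the next paragraph says they are: far messier than this, with hundreds of channel types modulating that simple sum.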
Yet when scientists study these systems more closely, such reductionism looks nearly as rudimentary as the Egyptian notions about skull marrow. There are dozens of different neurotransmitters (dopamine and serotonin, to name two) plus as many neuroreceptors to receive them. There are more than 350 types of ion channel, the synaptic plumbing that determines whether a neuron will fire. At its most fine-grained, at the level of molecular biology, neuroscience attempts to describe and predict the effect of neurotransmitters one ion channel at a time. At the opposite end of the scale is functional magnetic resonance imaging, the favorite tool of behavioral neuroscience. Scans can roughly track which parts of the brain are active while watching a ball game or having an orgasm, albeit only by monitoring blood flow through the gray matter: the brain again viewed as a radiator.
Two large efforts—the Allen Brain Atlas and the National Institutes of Health-funded Human Connectome Project—are working at levels in between these two extremes, attempting to get closer to that unified theory that explains the whole. The Allen Brain Atlas is mapping the correlation between specific genes and specific structures and regions in both human and mouse brains. The Human Connectome Project is using noninvasive imaging techniques that show where wires are bundled and how those bundles are connected in human brains.
To add to the brain-mapping mix, President Obama in April announced the launch of an initiative called Brain (commonly referred to as the Brain Activity Map), which he hopes Congress will make possible with a $3 billion NIH budget. (To start, Obama is pledging $100 million of his 2014 budget.) Unlike the static Human Connectome Project, the proposed Brain Activity Map would show circuits firing in real time. At present this is feasible, writes Brain Activity Map participant Ralph Greenspan, “in the little fruit fly Drosophila.”
Even scaled up to human dimensions, such a map would chart only a web of activity, leaving out much of what is known of brain function at a molecular and functional level. For Markram, the American plan is just grist for his billion-euro mill. “The Brain Activity Map and other projects are focused on generating more data,” he writes. “The Human Brain Project is about data integration.” In other words, from his exalted perspective, the NIH and President Obama are just a bunch of postdocs ready to work for him.
Scientists: Humans and machines will merge in the future!
LONDON, England (CNN) — A group of experts from around the world will hold a first-of-its-kind conference on global catastrophic risks.
They will discuss what should be done to prevent these risks from becoming realities that could lead to the end of human life on Earth as we know it.
Speakers at the four-day event at Oxford University in Britain will talk about topics including nuclear terrorism and what to do if a large asteroid were to be on a collision course with our planet.
On the final day of the Global Catastrophic Risk Conference, experts will focus on what could be the unintended consequences of new technologies, such as superintelligent machines that, if ill-conceived, might cause the demise of Homo sapiens.
“Any entity which is radically smarter than human beings would also be very powerful,” said Dr. Nick Bostrom, director of Oxford’s Future of Humanity Institute, host of the symposium. “If we get something wrong, you could imagine the consequences would involve the extinction of the human species.”
Bostrom is a philosopher and a leading thinker of transhumanism, a movement that advocates not only the study of the potential threats and promises that future technologies could pose to human life but also the ways in which emergent technologies could be used to make the very act of living better.
“We want to preserve the best of what it is to be human and maybe even amplify that,” Bostrom said.
Transhumanists, according to Bostrom, anticipate an era in which biotechnology, molecular nanotechnologies, artificial intelligence and other new types of cognitive tools will be used to amplify our intellectual capacity, improve our physical capabilities and even enhance our emotional well-being.
The end result would be a new form of “posthuman” life with beings that possess qualities and skills so exceedingly advanced they no longer can be classified simply as humans.
“We will begin to use science and technology not just to manage the world around us but to manage our own human biology as well,” Bostrom said. “The changes will be faster and more profound than the very, very slow changes that would occur over tens of thousands of years as a result of natural selection and biological evolution.”
Bostrom declined to predict an exact time frame when this revolutionary biotechnological metamorphosis might occur. “Maybe it will take eight years or 200 years,” he said. “It is very hard to predict.”
Other experts are already getting ready for what they say could be a radical transformation of the human race in as little as two decades.
“This will happen faster than people realize,” said Dr. Ray Kurzweil, an inventor and futurist who calculates technology trends using what he calls the law of accelerating returns, a mathematical concept that measures the exponential growth of technological evolution.
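At bottom, the law of accelerating returns is an exponential-growth claim: capability doubles on a roughly fixed schedule. A minimal sketch of that arithmetic (the 18-month doubling period is an illustrative assumption, not Kurzweil's exact figure):

```python
# Relative capability after `years` of steady exponential growth,
# with capability doubling every `doubling_period` years.

def capability(years, doubling_period=1.5):
    """Return capability relative to today under constant doubling."""
    return 2 ** (years / doubling_period)

# Under an assumed 18-month doubling period, three decades of growth
# compounds to 2**20 -- about a million-fold improvement.
print(capability(30))  # prints 1048576.0
```

The point of the sketch is that modest-sounding doubling periods compound into the enormous multipliers behind predictions like those below.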
In the 1980s, Kurzweil predicted that a tiny handheld device would be invented early in the 21st century, allowing blind people to read documents anywhere, at any time; this year, such a device was publicly unveiled. He also anticipated the explosive growth of the Internet in the 1990s.
Now, Kurzweil is predicting the arrival of something called the Singularity, which he defines in his book on the subject as “the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots.”
“There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality,” he writes.
The Singularity will approach at an accelerating rate as human-created technologies become exponentially smaller and increasingly powerful and as fields such as biology and medicine are understood more and more in terms of information processes that can be simulated with computers.
By the 2030s, Kurzweil said, humans will become more non-biological than biological, capable of uploading our minds onto the Internet, living in various virtual worlds and even avoiding aging and evading death.
In the 2040s, Kurzweil predicts that non-biological intelligence will be billions of times better than the biological intelligence humans have today, possibly rendering our present brains obsolete.
“Our brains are a million times slower than electronics,” Kurzweil said. “We will increasingly become software entities if you go out enough decades.”
This movement towards the merger of man and machine, according to Kurzweil, is already starting to happen and is most visible in the field of biotechnology.
As scientists gain deeper insights into the genetic processes that underlie life, they are able to effectively reprogram human biology through new gene therapies, medications capable of turning enzymes on or off, and RNA interference, or gene silencing.
“Biology and health and medicine used to be hit or miss,” Kurzweil said. “It wasn’t based on any coherent theory about how it works.”
The emerging biotechnology revolution will lead to at least a thousand new drugs that could do anything from slowing the aging process to reversing the onset of diseases like heart disease and cancer, Kurzweil said.
By 2020, Kurzweil predicts a second revolution in the area of nanotechnology. According to his calculations, it is already showing signs of exponential growth as scientists begin to test first generation nanobots that can cure Type 1 diabetes in rats or heal spinal cord injuries in mice.
One scientist is developing something called a respirocyte, a robotic red blood cell that, if injected into the bloodstream, would allow humans to do an Olympic sprint for 15 minutes without taking a breath or sit at the bottom of a swimming pool for hours at a time.
Other researchers are developing nanoparticles that can locate tumors and one day even eradicate them.
And some Parkinson’s patients now have pea-sized computers implanted in their brains that replace neurons destroyed by the disease; new software can be downloaded to the mini computers from outside the human body.
“Nanotechnology will not just be used to reprogram but to transcend biology and go beyond its limitations by merging with non-biological systems,” Kurzweil said. “If we rebuild biological systems with nanotechnology, we can go beyond its limits.”
The final revolution leading to the advent of the Singularity will be the creation of artificial intelligence, or superintelligence, which, according to Kurzweil, could be capable of solving many of our biggest threats, like environmental destruction, poverty and disease.
“A more intelligent process will inherently outcompete one that is less intelligent, making intelligence the most powerful force in the universe,” Kurzweil writes.
Yet the invention of so many high-powered technologies and the possibility of merging these new technologies with humans may pose both peril and promise for the future of mankind.
“I think there are grave dangers,” Kurzweil said. “Technology has always been a double-edged sword.”