2050 – and immortality is within our grasp

 David Smith, technology correspondent

Britain’s leading thinker on the future offers an extraordinary vision of life in the next 45 years

Image: cross-section of the human brain. Supercomputers could render the wetware of the human brain redundant. Photograph: Gregor Schuster/Getty Images

Aeroplanes will be too afraid to crash, yoghurts will wish you good morning before being eaten and human consciousness will be stored on supercomputers, promising immortality for all – though it will help to be rich.

These fantastic claims are not made by a science fiction writer or a crystal ball-gazing lunatic. They are the deadly earnest predictions of Ian Pearson, head of the futurology unit at BT.

‘If you draw the timelines, realistically by 2050 we would expect to be able to download your mind into a machine, so when you die it’s not a major career problem,’ Pearson told The Observer. ‘If you’re rich enough then by 2050 it’s feasible. If you’re poor you’ll probably have to wait until 2075 or 2080 when it’s routine. We are very serious about it. That’s how fast this technology is moving: 45 years is a hell of a long time in IT.’

Pearson, 44, has formed his mind-boggling vision of the future after graduating in applied mathematics and theoretical physics, spending four years working in missile design and the past 20 years working in optical networks, broadband network evolution and cybernetics in BT’s laboratories. He admits his prophecies are both ‘very exciting’ and ‘very scary’.

He believes that today’s youngsters may never have to die, and points to the rapid advances in computing power demonstrated last week, when Sony released the first details of its PlayStation 3. It is 35 times more powerful than previous games consoles. ‘The new PlayStation is 1 per cent as powerful as a human brain,’ he said. ‘It is into supercomputer status compared to 10 years ago. PlayStation 5 will probably be as powerful as the human brain.’

The world’s fastest computer, IBM’s BlueGene, can perform 70.72 trillion calculations per second (teraflops) and is accelerating all the time. But anyone who believes in the uniqueness of consciousness or the soul will find Pearson’s next suggestion hard to swallow. ‘We’re already looking at how you might structure a computer that could possibly become conscious. There are quite a lot of us now who believe it’s entirely feasible.

‘We don’t know how to do it yet but we’ve begun looking in the same directions, for example at the techniques we think that consciousness is based on: information comes in from the outside world but also from other parts of your brain and each part processes it on an internal sensing basis. Consciousness is just another sense, effectively, and that’s what we’re trying to design in a computer. Not everyone agrees, but it’s my conclusion that it is possible to make a conscious computer with superhuman levels of intelligence before 2020.’

He continued: ‘It would definitely have emotions – that’s one of the primary reasons for doing it. If I’m on an aeroplane I want the computer to be more terrified of crashing than I am so it does everything to stay in the air until it’s supposed to be on the ground.

‘You can also start automating an awful lot of jobs. Instead of phoning up a call centre and getting a machine that says, “Type 1 for this and 2 for that and 3 for the other,” if you had machine personalities you could have any number of call staff, so you can be dealt with without ever waiting in a queue at a call centre again.’

Pearson, from Whitehaven in Cumbria, collaborates on technology with some developers and keeps a watching brief on advances around the world. He concedes the need to debate the implications of progress. ‘You need a completely global debate. Whether we should be building machines as smart as people is a really big one. Whether we should be allowed to modify bacteria to assemble electronic circuitry and make themselves smart is already being researched.

‘We can already use DNA, for example, to make electronic circuits so it’s possible to think of a smart yoghurt some time after 2020 or 2025, where the yoghurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yoghurt before you eat it.’

In the shorter term, Pearson identifies the next phase of progress as ‘ambient intelligence’: chips with everything. He explained: ‘For example, if you have a pollen count sensor in your car you take some antihistamine before you get out. Chips will come small enough that you can start impregnating them into the skin. We’re talking about video tattoos as very, very thin sheets of polymer that you just literally stick on to the skin and they stay there for several days. You could even build in cellphones and connect it to the network, use it as a video phone and download videos or receive emails.’

Philips, the electronics giant, is developing the world’s first rollable display which is just a millimetre thick and has a 12.5cm screen which can be wrapped around the arm. It expects to start production within two years.

The next age, he predicts, will be that of ‘simplicity’ in around 2013-2015. ‘This is where the IT has actually become mature enough that people will be able to drive it without having to go on a training course.

‘Forget this notion that you have to have one single chip in the computer which does everything. Why not just get a stack of little self-organising chips in a box and they’ll hook up and do it themselves. It won’t be able to get any viruses because most of the operating system will be stored in hardware which the hackers can’t write to. If your machine starts going wrong, you just push a button and it’s reset to the factory setting.’

Pearson’s third age is ‘virtual worlds’ in around 2020. ‘We will spend a lot of time in virtual space, using high quality, 3D, immersive, computer generated environments to socialise and do business in. When technology gives you a life-size 3D image and the links to your nervous system allow you to shake hands, it’s like being in the other person’s office. It’s impossible to believe that won’t be the normal way of communicating.’

“Humanity is about going beyond biological limitations”

Image: Leonardo da Vinci’s drawing of the Vitruvian Man.

NEW YORK – Dreams of immortality inspired the fantastical tales of Greek historian Herodotus and Spanish explorer Juan Ponce de Leon’s legendary search for the fountain of youth. Nowadays, visionaries push for the technologies to transplant human brains into new bodies and download human consciousness into hologram-like avatars.

The latest science and schemes for achieving long life and the “singularity” moment of smarter-than-human intelligence came together at the Singularity Summit held here October 15-16. Some researchers explored cutting-edge, serious work about regenerating human body parts and defining the boundaries of consciousness in brain studies. Other speakers pushed visions of extending human existence in “Avatar”-style bodies — one initiative previously backed by action film star Steven Seagal — with fuzzier ideas about how to create such a world.

Above all, the summit buzzed with optimism about technology’s ability to reshape the world to exceed humanity’s wildest dreams, as well as a desire to share that vision with everyone. True believers were even offered the chance to apply for a credit card that transfers purchase rewards to the Singularity Institute.

“Humanity is about going beyond biological limitations,” said Ray Kurzweil, the inventor and futurist whose vision drives the Singularity Institute.

Rebuilding a healthy body

The most immediate advances related to living longer and better may come from regenerative medicine. Pioneering physicians have already regrown the tips of people’s fingers and replaced cancer-ridden parts of human bodies with healthy new cells.

“What we’re talking about here is not necessarily increasing the quantity of life but the quality of life,” said Stephen Badylak, deputy director of the McGowan Institute for Regenerative Medicine at the University of Pittsburgh in Pennsylvania.

Success so far has come from using a special connective tissue — called the extracellular matrix (ECM) — to act as a biological scaffold for healthy cells to build upon. Badylak showed a video where his team of surgeons stripped out the cancerous lining of a patient’s esophagus like pulling out a sock, and relined the esophagus with an ECM taken from pigs. The patient remains cancer-free several years after the experimental trial.

The connective tissue of other animals doesn’t provoke a negative response in human bodies, because it lacks the foreign animal cells that would typically provoke the immune system to attack. It has served the same role as a biological foundation for so long that it represents a “medical device that’s gone through hundreds of millions of years of R&D,” Badylak said.

If work goes well, Badylak envisions someday treating stroke patients by regenerating pieces of the functioning human brain.

Live long and prosper

The work of such researchers could do more than just keep humans happy and healthy. By tackling end-of-life chronic diseases such as cancer, medical advances could nearly double human life expectancy in the U.S., from almost 80 years today to 150 years, said Sonia Arrison, a futurist at the Pacific Research Institute in San Francisco, Calif.

Long-lived humans could lead to problems such as anger over a “longevity gap” between haves and have-nots and perhaps add to stress on food, water and energy sources. But Arrison took a more positive view of how “health begets wealth” in a talk based on her new book, “100 Plus” (Basic Books, 2011).

Having healthier people around for longer means that they can remain productive far later in life, Arrison pointed out. Many past innovators accomplished some of their greatest or most creative work relatively late in life — Leonardo da Vinci began painting the Mona Lisa at 51, and Benjamin Franklin conducted his kite experiment at 46.

“Innovation is a late-peak field,” Arrison told the audience gathered at the Singularity Summit.

Even religion might find a renewed role in a world where death increasingly looks far off, Arrison said. Religion remains as popular as ever despite a doubling of human life expectancy up until now, and so Arrison suggested that religions focused on providing purpose or guidance in life could do well. But religions focused on the afterlife may want to rethink their strategy.

Making ‘Avatar’ real (or not)

The boldest scheme for immortality came from media mogul Dmitry Itskov, who introduced his “Project Immortality 2045: Russian Experience.” He claimed support from the Russian Federation’s Ministry of Education and Science, as well as actor Seagal, to create a research center capable of giving humans life-extending bodies.

Itskov’s wildly ambitious plans include creating a humanoid avatar body within five to seven years, transplanting a human brain into a new “body B” in 10 to 15 years, digitally uploading a human brain’s consciousness in 20 to 25 years, and moving human consciousness to hologram-like bodies in 30 to 35 years.

That vision may have exceeded even the optimism of many Singularity Summit attendees, given the apparent restlessness of the crowd during Itskov’s presentation. But it did little to dampen the conference’s overall sense that humanity has a positive future within its collective grasp — even if some people still need to be convinced.

“We are storming the fricking barricades of death, both physically and intellectually, so we have to make it sexy,” said Jason Silva, a filmmaker and founding producer/host for Current TV.

By Jeremy Hsu

10/17/2011 7:39:40 PM ET

You can follow InnovationNewsDaily Senior Writer Jeremy Hsu on Twitter @ScienceHsu. Follow InnovationNewsDaily on Twitter @News_Innovation, or on Facebook.

The missing link between us and the future

In the early 1990s, the IT industry got very excited about virtual reality, the idea that you could use some sort of headset display to wander around in a 3d computer-generated world. We quickly realised there are zillions of variations on this idea, and after the one that became current computer gaming (3d worlds on a 2d monitor) the biggest of the rest was augmented reality, where data and images could be superimposed on the field of view.

Now, we are seeing apps on phones and pads that claim to be augmented reality, showing where the nearest tube station is for example. To a point I guess they are, but only in as far as they can let you hold up a display in front of you and see images relevant to the location and direction. They hardly amount to a head-up display, and fall a long way short of the kind of superimposition we’ve been used to in sci-fi since Robocop or Terminator. It is clear that we really need a proper head-up display, one that doesn’t require you to take a gadget out and hold it up in front of you.

There are some head-up displays out there. Some make overlay displays in a small area of your field of view, often using small projectors and mirrors. Some use visors. However, the video-visor-based displays are opaque. They are fine for watching TV or playing games while seated, but not much use for wandering around.

This will change in the next 18 months – 2 years. Semi-transparent visors will begin to appear then. The few years after that will undoubtedly see rapid development of them, eventually bringing a full hi-res 3d overlay capability. And that will surely be a major disruptive technology. Just as we are getting used to various smart phones, pads, ebook readers and 3d TVs, they could all be absorbed into a general purpose head-up display that can be used for pretty much anything.

It is hard to overstate the potential of this kind of interface once it reaches good enough quality. It allows anything from TV, games, or the web, to be blended with any real world scene or activity. This will transform how we shop, work and socialise, how we design and use buildings, and even how we use art or display ourselves. Each of these examples could easily fill a book.  The whole of the world wide web was enabled by the convergence of just the computing and telecoms industries. The high quality video visor will enable convergence of the real world with the whole of the web, media, and virtual worlds, not just two industry sectors. Augmented reality will be a huge part of that, but even virtual reality and the zillions of variants can then start to be explored too.

In short, the semi-transparent video visor is the missing link. It is the biggest bottleneck now stopping the future arriving. Everything till we get that is a sideshow.

Artificial Hippocampus, the Borg Hive Mind, and Other Neurological Endeavors

November 15

Many of us know about the ‘Borg Hive Mind’ from TV programs where the characters are linked through brain-to-brain or computer-to-brain interactions. However, this is more than a science fiction fantasy. The idea was contemplated seriously in the 2002 National Science Foundation report, Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science. ‘Techlepathy’ is the term coined for the communication of information directly from one mind to another (i.e. telepathy) with the assistance of technology.

Many research activities focus on neuro-engineering and the cognitive sciences. Many neuroscientists and bioengineers now work on:

  • cognitive computing
  • digitally mapping the human brain; the mouse brain map has just been published
  • developing microcircuits that can repair brain damage, and
  • other numerous projects related to changing the cognitive abilities and functioning of humans, and artificial intelligence.

Journals exist for all of these activities — including the Human Brain Mapping journal. Some envision a Human Cognome Project. James Albus, a senior fellow and founder of the Intelligent Systems Division of the National Institute of Standards and Technology, believes the era of ‘engineering the mind’ is here. He has proposed a national program for developing a scientific theory of the mind.

Neuromorphic engineering, Wikipedia says, “is a new interdisciplinary discipline that takes inspiration from biology, physics, mathematics, computer science and engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.”

There are many examples.

Researchers from Harvard University have linked nanowire field-effect transistors to neurons. Three applications are envisioned: hybrid biological/electronic devices, interfaces to neural prosthetics, and the capture of high-resolution information about electrical signals in the brain. Research is advancing in four areas: neuronal networks, interfaces between the brain and external neural prosthetics, real-time cellular assays, and hybrid circuits that couple digital nanoelectronic and biological computing components.

Numenta, a company formed in 2005, states on its webpage that it “is developing a new type of computer memory system modelled after the human neocortex.”

Kwabena Boahen, an associate professor of bioengineering at Stanford University, has developed Neurogrid, “a specialized hardware platform that will enable the cortex’s inner workings to be simulated in detail — something outside the reach of even the fastest supercomputers.” He is also working on a silicon retina and a silicon chip that emulates the way the juvenile brain wires itself up.

Researchers at the University of Washington are working on an implantable electronic chip that may help to establish new nerve connections in the part of the brain that controls movement.

The Blue Brain project — a collaboration of IBM and the Ecole Polytechnique Federale de Lausanne, in Lausanne, Switzerland – will create a detailed model of the circuitry in the neocortex.

A DNA switch ‘nanoactuator’ has been developed by Dr. Keith Firman at the University of Portsmouth and other European researchers, which can interface living organisms with computers.

Kevin Warwick had an RFID transmitter (a future column will deal with RFID chips) implanted beneath his skin in 1998, which allowed him to control doors, lights, heaters, and other computer-controlled devices in his proximity. In another experiment, he and his wife Irena each had electrodes surgically implanted in their arms. The electrodes were linked by radio signals to a computer which created a direct link between their nervous systems. Kevin’s wife could feel it when he moved his arm.

In his book I, Cyborg, Kevin Warwick imagines that 50 years from now most human brains will be linked electronically through a global computer network.

St. Joseph’s Hospital in the United States has implanted neurostimulators (deep brain stimulators) using nanowires to connect a stimulating device to the brain. A pacemaker-like device is implanted in the chest, and flexible wires are implanted in the brain. Electrical impulses sent from the ‘pacemaker’ to the brain are used to treat Parkinson’s, migraine headaches, chronic pain, depression and obsessive-compulsive disorder, improve the mobility of stroke victims, and curb cravings in drug addicts.

In 2003/2004 a variety of publications (see links below) reported on the efforts of professor Theodore W. Berger, director of the Center for Neural Engineering at the University of Southern California, and his colleagues, to develop the world’s first brain prosthesis – an ‘artificial hippocampus’ which is supposed to act as a memory bank. These publications highlighted in particular the use of such implants for Alzheimer’s patients.

The research program is proceeding in four stages: (1) tests on slices of rat brains kept alive in cerebrospinal fluid… reported as successful in 2004; (2) tests on live rats which are to take place within three years; (3) tests on live monkeys; and (4) tests on humans — very likely on Alzheimer’s patients first.

The Choice is Yours

If these advancements come to pass, they will create many ethical, legal, privacy and social issues. For the artificial hippocampus we should ask: would brain implants force some people to remember things they would rather forget? Could someone manipulate our memory? What would be the consequence of uploading information (see my education column)? Will we still have control over what we remember? Could we be forced to remember something over and over? If we can communicate with each other through a computer what will be the consequence of a Global Brain?

It is important that people become more involved in the governance of neuro-engineering and cognitive science projects. We should not neglect these areas because we perceive them to be science fiction. We also need to look beyond the outlined ‘medical applications.’ If the artificial hippocampus works, it will likely be used for more than dealing with diseases.

I will cover brain-machine interfaces, neuro-pharmaceutical-based ‘cognitive enhancement,’ and neuroethics and the ethics of artificial intelligence in future columns.

Gregor Wolbring is a biochemist, bioethicist, science and technology ethicist, disability/vari-ability studies scholar, and health policy and science and technology studies researcher at the University of Calgary. He is a member of the Center for Nanotechnology and Society at Arizona State University; Member CAC/ISO – Canadian Advisory Committees for the International Organization for Standardization section TC229 Nanotechnologies; Member of the editorial team for the Nanotechnology for Development portal of the Development Gateway Foundation; Chair of the Bioethics Taskforce of Disabled People’s International; and Member of the Executive of the Canadian Commission for UNESCO. He publishes the Bioethics, Culture and Disability website, moderates a weblog for the International Network for Social Research on Disability, and authors a weblog on NBICS and its social implications.

Resources
 

How to Use Light to Control the Brain

Stephen Dougherty, Scientific American
Date: 01 April 2012 Time: 09:38 AM
 

In the film Amélie, the main character is a young eccentric woman who attempts to change the lives of those around her for the better. One day Amélie finds an old rusty tin box of childhood mementos in her apartment, hidden by a boy decades earlier. After tracking down Bretodeau, the owner, she lures him to a phone booth where he discovers the box. Upon opening the box and seeing a few marbles, a sudden flash of vivid images comes flooding into his mind. Next thing you know, Bretodeau is transported to a time when he was in the schoolyard scrambling to stuff his pockets with hundreds of marbles while a teacher is yelling at him to hurry up.

We have all experienced this: a seemingly insignificant trigger, a scent, a song, or an old photograph transports us to another time and place. Now a group of neuroscientists have investigated the fascinating question: Can a few neurons trigger a full memory?

In a new study, published in Nature, a group of researchers from MIT showed for the first time that it is possible to activate a memory on demand, by stimulating only a few neurons with light, using a technique known as optogenetics. Optogenetics is a powerful technology that enables researchers to control genetically modified neurons with a brief pulse of light.

To artificially turn on a memory, researchers first set out to identify the neurons that are activated when a mouse is making a new memory. To accomplish this, they focused on a part of the brain called the hippocampus, known for its role in learning and memory, especially for discriminating places. Then they inserted a gene that codes for a light-sensitive protein into hippocampal neurons, enabling them to use light to control the neurons.

With the light-sensitive proteins in place, the researchers gave the mouse a new memory. They put the animal in an environment where it received a mild foot shock, eliciting the normal fear behavior in mice: freezing in place. The mouse learned to associate a particular environment with the shock.

Next, the researchers attempted to answer the big question: Could they artificially activate the fear memory? They directed light on the hippocampus, activating a portion of the neurons involved in the memory, and the animals showed a clear freezing response. Stimulating the neurons appears to have triggered the entire memory.

The researchers performed several key tests to confirm that it was really the original memory recalled. They tested mice with the same light-sensitive protein but without the shock; they tested mice without the light-sensitive protein; and they tested mice in a different environment not associated with fear. None of these tests yielded the freezing response, reinforcing the conclusion that the pulse of light indeed activated the old fear memory.

In 2010, optogenetics was named the scientific Method of the Year by the journal Nature Methods. The technology was introduced in 2004 by a research group at Stanford University led by Karl Deisseroth, a collaborator on this research. The critical advantage that optogenetics provides over traditional neuroscience techniques, like electrical stimulation or chemical agents, is speed and precision. Electrical stimulation and chemicals can only be used to alter neural activity in nonspecific ways and without precise timing. Light stimulation enables control over a small subset of neurons on a millisecond time scale.

Over the last several years, optogenetics has provided powerful insights into the neural underpinnings of brain disorders like depression, Parkinson’s disease, anxiety, and schizophrenia. Now, in the context of memory research, this study shows that it is possible to artificially stimulate a few neurons to activate an old memory, controlling an animal’s behavior without any sensory input. This is significant because it provides a new approach to understand how complex memories are formed in the first place.

Lest ye worry about implanted memories and mind control, this technology is still a long way from reaching any human brains. Nevertheless, the first small steps towards the clinical application of optogenetics have already begun. A group at Brown University, for example, is working on a wireless optical electrode that can deliver light to neurons in the human brain. Who knows, someday, instead of new technology enabling us to erase memories à la Eternal Sunshine of the Spotless Mind, we may actually undergo memory enhancement therapy with a brief session under the lights.

This article was first published on Scientific American. © 2012 ScientificAmerican.com. Follow Scientific American on Twitter @SciAm and @SciamBlogs. Visit ScientificAmerican.com for the latest in science, health and technology news.

New Surveillance System Identifies Your Face By Searching Through 36 Million Images Per Second

When it comes to surveillance, your face may now be your biggest liability.

Privacy advocates, brace yourselves – the search capabilities of the latest surveillance technology are nightmare fuel. Hitachi Kokusai Electric recently demonstrated a surveillance camera system capable of searching through 36 million images per second to match a person’s face taken from a mobile phone or captured on surveillance video. While the minimum resolution required for a match is 40 x 40 pixels, the facial recognition software allows for variance in the position of the person’s head: someone can be turned away from the camera horizontally or vertically by 30 degrees and it can still make a match. Furthermore, the software identifies faces in surveillance video as it is recorded, meaning that users can immediately review footage from before and after any matched timepoint.

This means that the biggest barrier in video surveillance, which is watching hours of video to find what you want, is gone.

The power of the search capabilities is in the algorithms that group similar faces together. When a search is conducted, results are immediately shown as thumbnails, and selecting a thumbnail pulls up the stored footage for review. Because the search results are displayed as a grid, mistaken identifications can be ruled out quickly or verified by pulling up the entire video for more information.
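Hitachi has not published the details of its matching algorithm, but the general pattern behind large-scale face search is well established: reduce every face to a fixed-length feature vector, then score a query against all stored vectors at once and show the best matches for a human to confirm. The sketch below is a minimal Python illustration of that pattern, not Hitachi’s system; extract_embedding is a hypothetical stand-in for a real face-recognition model, and the thumbnail grid described above would simply display the faces whose indices search() returns.

import numpy as np

def extract_embedding(face_image):
    # Hypothetical feature extractor: a real system would use a trained
    # face-recognition model; here we just flatten and normalise the pixels.
    vec = np.asarray(face_image, dtype=np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)

def build_index(stored_faces):
    # One embedding per stored face, stacked into a single matrix so that
    # a query can be scored against the whole database in one operation.
    return np.stack([extract_embedding(f) for f in stored_faces])

def search(index, query_face, top_k=12):
    # Cosine similarity (all embeddings are unit length) via a single
    # matrix-vector product; return the indices of the best matches.
    scores = index @ extract_embedding(query_face)
    return np.argsort(scores)[::-1][:top_k]

Grouping similar faces ahead of time, as the Hitachi system reportedly does, plays the same role as building this index: the expensive comparison work happens before anyone runs a query.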

The scenarios that this system could be useful for are endless. The police, for instance, could find individuals from old surveillance video or pick them out of large crowds, whether they are suspects or people who’ve been kidnapped. Or if a retail customer is caught stealing something on camera, the system could pull up footage from each time the customer has been in the store to identify other thefts that went unnoticed.

Rapid search of the video database allows users to review video around key timepoints.

The company, which specializes in video cameras for the imaging, medical, and security markets, states that the system is ideally suited for large-scale customers, such as law enforcement agencies, transportation centers, and retail centers. The system will be released in the next fiscal year, presumably customized to specific customers’ needs. Interested parties have to contact the company directly, which is probably wise in order to control whose hands it ends up in. And this means that soon, the only things that are going to be anonymous anymore are the agencies and organizations using the software.

While this news should make anyone concerned about privacy shudder, it really was only a matter of time before something like this was developed. Likewise, it means that competing systems will follow until systems like this are common. So it will be up to legislators to define how the technology can be used legally as with other surveillance systems, like license-plate recognition cameras.

Check out the video from the security trade show so you can see for yourself just how easy it is to be Big Brother with this system:

[Media: YouTube]

[Sources: DigInfo, Digital Trends, PhysOrg]

Grid-Based Computing to Fight Neurological Disease

ScienceDaily (Apr. 11, 2012) — Grid computing, long used by physicists and astronomers to crunch masses of data quickly and efficiently, is making the leap into the world of biomedicine. Supported by EU funding, researchers have networked hundreds of computers to help find treatments for neurological diseases such as Alzheimer’s. They are calling their system the ‘Google for brain imaging.’



Through the Neugrid project, the pan-European grid computing infrastructure has opened up new channels of research into degenerative neurological disorders and other illnesses, while also holding the promise of quicker and more accurate clinical diagnoses of individual patients.

The infrastructure, set up with the support of EUR 2.8 million in funding from the European Commission, was developed over three years by researchers in seven countries. Their aim, primarily, was to give neuroscientists the ability to quickly and efficiently analyse ‘Magnetic resonance imaging’ (MRI) scans of the brains of patients suffering from Alzheimer’s disease. But their work has also helped open the door to the use of grid computing for research into other neurological disorders, and many other areas of medicine.

‘Neugrid was launched to address a very real need. Neurology departments in most hospitals do not have quick and easy access to sophisticated MRI analysis resources. They would have to send researchers to other labs every time they needed to process a scan. So we thought, why not bring the resources to the researchers rather than sending the researchers to the resources,’ explains Giovanni Frisoni, a neurologist and the deputy scientific director of IRCCS Fatebenefratelli, the Italian National Centre for Alzheimer’s and Mental Diseases, in Brescia.

Five years’ work in two weeks

The Neugrid team, led by David Manset from MaatG in France and Richard McClatchey from the University of the West of England in Bristol, laid the foundations for the grid infrastructure, starting with five distributed nodes of 100 cores (CPUs) each, interconnected with grid middleware and accessible via the internet with an easy-to-use web browser interface. To test the infrastructure, the team used datasets of images from the Alzheimer’s Disease Neuroimaging Initiative in the United States, the largest public database of MRI scans of patients with Alzheimer’s disease and a lesser condition termed ‘Mild cognitive impairment’.

‘In Neugrid we have been able to complete the largest computational challenge ever attempted in neuroscience: we extracted 6,500 MRI scans of patients with different degrees of cognitive impairment and analysed them in two weeks,’ says Dr. Frisoni, the lead researcher on the project. ‘On an ordinary computer it would have taken five years!’
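Those headline figures can be sanity-checked with a little arithmetic. The Python sketch below uses only the numbers quoted in this article (five nodes of 100 cores, 6,500 scans, two weeks) plus one assumption of ours: that the ‘ordinary computer’ is a typical 4-core desktop.

cores = 5 * 100                      # initial Neugrid configuration described above
weeks = 2
scans = 6500

core_hours = cores * weeks * 7 * 24  # total compute consumed on the grid
print(core_hours / scans)            # ~25.8 core-hours of processing per MRI scan
print(cores * weeks / 52)            # ~19.2 core-years of work in total
print(cores * weeks / 52 / 4)        # ~4.8 years on an assumed 4-core desktop

The last figure lands close to the ‘five years’ Dr. Frisoni mentions, so the quoted speed-up is roughly what you would expect from simply spreading the work across 500 cores.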

Though Alzheimer’s disease affects about half of all people aged 85 and older, its causes and progression remain poorly understood. Worldwide more than 35 million people suffer from Alzheimer’s, a figure that is projected to rise to over 115 million by 2050 as the world’s population ages.

Patients with early symptoms have difficulty recalling the names of people and places, remembering recent events and solving simple maths problems. As the brain degenerates, patients in advanced stages of the disease lose mental and physical functions and require round-the-clock care.

The analysis of MRI scans conducted as part of the Neugrid project should help researchers gain important insights into some of the big questions surrounding the disease such as which areas of the brain deteriorate first, what changes occur in the brain that can be identified as biomarkers for the disease and what sort of drugs might work to slow or prevent progression.

Neugrid built on research conducted by two prior EU-funded projects: Mammogrid, which set up a grid infrastructure to analyse mammography data, and AddNeuroMed, which sought biomarkers for Alzheimer’s. The team are now continuing their work in a series of follow-up projects.

An expanded grid and a new paradigm

Neugrid for You (N4U), a direct continuation of Neugrid, will build upon the grid infrastructure, integrating it with ‘High performance computing’ (HPC) and cloud computing resources. Using EUR 3.5 million in European Commission funding, it will also expand the user services, algorithm pipelines and datasets to establish a virtual laboratory for neuroscientists.

‘In Neugrid we built the grid infrastructure, addressing technical challenges such as the interoperability of core computing resources and ensuring the scalability of the architecture. In N4U we will focus on the user-facing side of the infrastructure, particularly the services and tools available to researchers,’ Dr. Frisoni says. ‘We want to try to make using the infrastructure for research as simple and easy as possible,’ he continues, ‘the learning curve should not be much more difficult than learning to use an iPhone!’

N4U will also expand the grid infrastructure from the initial five computing clusters through connections with CPU nodes at new sites, including 2,500 CPUs recently added in Paris in collaboration with the French Alternative Energies and Atomic Energy Commission (CEA), and in partnership with ‘Enabling grids for e-science Biomed VO’, a biomedical virtual organisation.

Another follow-up initiative, outGRID, will federate the Neugrid infrastructure, linking it with similar grid computing resources set up in the United States by the Laboratory of Neuro Imaging at the University of California, Los Angeles, and the CBRAIN brain imaging research platform developed by McGill University in Montreal, Canada. A workshop was recently held at the International Telecommunication Union, an agency of the United Nations, to foster this effort.

Dr. Frisoni is also the scientific coordinator of the DECIDE project, which will work on developing clinical diagnostic tools for doctors built upon the Neugrid grid infrastructure. ‘There are a couple of important differences between using brain imaging datasets for research and for diagnosis,’ he explains. ‘Researchers compare many images to many others, whereas doctors are interested in comparing images from a single patient against a wider set of data to help diagnose a disease. On top of that, datasets used by researchers are anonymous, whereas images from a single patient are not and protecting patient data becomes an issue.’

The DECIDE project will address these questions in order to use the grid infrastructure to help doctors treat patients. Though the main focus of all these new projects is on using grid computing for neuroscience, Dr. Frisoni emphasises that the same infrastructure, architecture and technology could be used to enable new research — and new, more efficient diagnostic tools — in other fields of medicine. ‘We are helping to lay the foundations for a new paradigm in grid-enabled medical research,’ he says.

Neugrid received research funding under the European Union’s Seventh Framework Programme (FP7).

Scientists at MIT replicate brain activity with chip

BBC

17 November 2011, 20:42 GMT

Image: a graphic of a brain. The chip replicates how information flows around the brain

Scientists are getting closer to the dream of creating computer systems that can replicate the brain.

Researchers at the Massachusetts Institute of Technology have designed a computer chip that mimics how the brain’s neurons adapt in response to new information.

Such chips could eventually enable communication between artificially created body parts and the brain.

It could also pave the way for artificial intelligence devices.

There are about 100 billion neurons in the brain, each of which forms synapses – the connections between neurons that allow information to flow – with many other neurons.

This constant adaptation of synaptic connections is known as plasticity and is believed to underpin many brain functions, such as learning and memory.

Neural functions

The MIT team, led by research scientist Chi-Sang Poon, has been able to design a computer chip that can simulate the activity of a single brain synapse.

Activity in the synapses relies on so-called ion channels which control the flow of charged atoms such as sodium, potassium and calcium.

The ‘brain chip’ has about 400 transistors and is wired up to replicate the circuitry of the brain.

Current flows through the transistors in the same way as ions flow through ion channels in a brain cell.

“We can tweak the parameters of the circuit to match specific ion channels… We now have a way to capture each and every ionic process that’s going on in a neuron,” said Mr Poon.
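The MIT chip is analogue hardware and its circuit parameters are not given here, but the kind of ion-channel behaviour it reproduces can be sketched in software. The Python snippet below simulates a classic Hodgkin-Huxley potassium channel under a voltage step, using standard textbook rate constants rather than anything specific to the MIT design; the gating variable n plays the role that tuned transistor currents play on the chip.

import math

g_K, E_K = 36.0, -77.0   # peak conductance (mS/cm^2) and K+ reversal potential (mV)

def alpha_n(V):
    # Opening rate of the potassium gating variable (textbook Hodgkin-Huxley form)
    return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))

def beta_n(V):
    # Closing rate of the gating variable
    return 0.125 * math.exp(-(V + 65.0) / 80.0)

def k_current_after_step(V=0.0, t_ms=10.0, dt=0.01):
    # Hold the membrane at voltage V and integrate the gating variable n with
    # simple Euler steps, then return the resulting potassium current.
    n = 0.3177                          # resting value of n near -65 mV
    for _ in range(int(t_ms / dt)):
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    return g_K * n**4 * (V - E_K)       # current in microamps per cm^2

print(k_current_after_step())           # a large outward K+ current on depolarisation

A digital simulation like this has to step through time in small increments; the appeal of the analogue chip is that its transistor currents evolve continuously, just as the real ion channels do.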

Neurobiologists seem to be impressed.

It represents “a significant advance in the efforts to incorporate what we know about the biology of neurons and synaptic plasticity onto …chips,” said Dean Buonomano, a professor of neurobiology at the University of California.

“The level of biological realism is impressive,” he added.

The team plans to use their chip to build systems to model specific neural functions, such as visual processing.

Such systems could be much faster than computers which take hours or even days to simulate a brain circuit. The chip could ultimately prove to be even faster than the biological process.

Developing a human brain in a brain chip for a hybrid brain

BBC News

 Tuesday, 11 March 2008, 10:32 GMT 

Chemical brain controls nanobots
By Jonathan Fildes
Science and technology reporter, BBC News

Image: artificial brain. The researchers have already built larger ‘brains’

A tiny chemical “brain” which could one day act as a remote control for swarms of nano-machines has been invented.

The molecular device – just two billionths of a metre across – was able to control eight of the microscopic machines simultaneously in a test.

Writing in Proceedings of the National Academy of Sciences, scientists say it could also be used to boost the processing power of future computers.

Many experts have high hopes for nano-machines in treating disease.

“If [in the future] you want to remotely operate on a tumour you might want to send some molecular machines there,” explained Dr Anirban Bandyopadhyay of the International Center for Young Scientists, Tsukuba, Japan.

“But you cannot just put them into the blood and [expect them] to go to the right place.”

Dr Bandyopadhyay believes his device may offer a solution. One day they may be able to guide the nanobots through the body and control their functions, he said.

“That kind of device simply did not exist; this is the first time we have created a nano-brain,” he told BBC News.

Computer brain

The machine is made from 17 molecules of the chemical duroquinone. Each one is known as a “logic device”.

They each resemble a ring with four protruding spokes that can be independently rotated to represent four different states.

One duroquinone molecule sits at the centre of a ring formed by the remaining 16. All are connected by chemical bonds, known as hydrogen bonds.

The state of the control molecule at the centre is switched by a scanning tunnelling microscope (STM).

These large machines are a standard part of the nanotechnologist’s tool kit, and allow the viewing and manipulation of atomic surfaces.

Using the STM, the researchers showed they could change the central molecule’s state and simultaneously switch the states of the surrounding 16.

“We instruct only one molecule and it simultaneously and logically instructs 16 others at a time,” said Dr Bandyopadhyay.

The configuration allows four billion different possible combinations of outcomes.
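The ‘four billion’ figure follows directly from the architecture just described, assuming each of the 16 outer molecules can sit in any of its four states independently:

\[ 4^{16} = 2^{32} = 4\,294\,967\,296 \approx 4.3 \times 10^{9} \]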

The two nanometre diameter structure was inspired by the parallel communication of glial cells inside a human brain, according to the team.

Robot control

To test the control unit, the researchers simulated docking eight existing nano-machines to the structure, creating a “nano-factory” or a kind of “chemical swiss army knife”.

Image: nano dust (SPL). Scientists believe nano-machines could have medical applications

The attached devices, created by other research groups, included the “world’s tiniest elevator”, a molecular platform that can be raised or lowered on command.

The device is about two and a half nanometres (billionths of a metre) high, and the lift moves less than one nanometre up and down.

All eight machines simultaneously responded to a single instruction in the simulation.

“We have clear cut evidence that we can control those machines,” said Dr Bandyopadhyay.

This “one-to-many” communication and the device’s ability to act as a central control unit also raises the possibility of using the device in future computers, he said.

Machines built using devices such as this would be able to process 16 bits of information simultaneously.

Current silicon Central Processing Units (CPUs) can only carry out one instruction at a time, albeit millions of times per second.

The researchers say they have already built faster machines, capable of 256 simultaneous operations, and have designed one capable of 1024.

However, according to Professor Andrew Adamatzky of the University of the West of England (UWE), making a workable computer would be very difficult at the moment.

“As with other implementations of unconventional computers the application is very limited, because they operate [it] using scanning tunnelling microscopy,” he said.

But, he said, the work is promising.

“I am sure with time such molecular CPUs can be integrated in molecular robots, so they will simply interact with other molecular parts autonomously.”

Revolution in Artificial Intelligence


Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

ScienceDaily (Apr. 2, 2012) — As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who in the 1930s set out the basis for digital computing and anticipated the electronic age, they still quest after a machine as adaptable and intelligent as the human brain.



Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing’s work to its next logical step. She is translating her 1993 discovery of what she has dubbed “Super-Turing” computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in the current issue of Neural Computation.

“This model is inspired by the brain,” she says. “It is a mathematical formulation of the brain’s neural networks with their adaptive abilities.” The authors show that when the model is installed in an environment offering constant sensory stimuli like the real world, and when all stimulus-response pairs are considered over the machine’s lifetime, the Super-Turing model yields an exponentially greater repertoire of behaviors than the classical computer or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

“Each time a Super-Turing machine gets input it literally becomes a different machine,” Siegelmann says. “You don’t want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you’d like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you’d like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That’s what this model can offer.”

Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they’ve been told what to expect and how to respond, Siegelmann says. But they can’t take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks.

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called “adaptive inference.” In 1993, Siegelmann, then at Rutgers, showed independently in her doctoral thesis that a very different kind of computation, vastly different from the “calculating computer” model and more like Turing’s prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly after.

“I was young enough to be curious, wanting to understand why the Turing model looked really strong,” she recalls. “I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations.”

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is represented by the notation aleph-zero (ℵ₀), which is also the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that Super-Turing computation has 2^ℵ₀ possible behaviors. “If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe,” she explains.
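For readers who want the notation spelled out, the comparison rests on two standard facts from set theory plus one piece of arithmetic (taking the commonly cited estimate of roughly 10^80 atoms in the observable universe):

\[ |\mathbb{N}| = \aleph_0, \qquad 2^{\aleph_0} > \aleph_0 \ \text{(Cantor)}, \qquad 2^{300} \approx 2.0 \times 10^{90} \gg 10^{80} \]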

The new Super-Turing machine will not only be flexible and adaptable but economical. This means that when presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence, Siegelmann says.

“If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain,” she adds.

Siegelmann and two colleagues recently were notified that they will receive a grant to make the first ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.