2050 – and immortality is within our grasp


 David Smith, technology correspondent

Britain’s leading thinker on the future offers an extraordinary vision of life in the next 45 years

Supercomputers could render the wetware of the human brain redundant. Photograph: Gregor Schuster/Getty Images

Aeroplanes will be too afraid to crash, yoghurts will wish you good morning before being eaten and human consciousness will be stored on supercomputers, promising immortality for all – though it will help to be rich.

These fantastic claims are not made by a science fiction writer or a crystal ball-gazing lunatic. They are the deadly earnest predictions of Ian Pearson, head of the futurology unit at BT.

‘If you draw the timelines, realistically by 2050 we would expect to be able to download your mind into a machine, so when you die it’s not a major career problem,’ Pearson told The Observer. ‘If you’re rich enough then by 2050 it’s feasible. If you’re poor you’ll probably have to wait until 2075 or 2080 when it’s routine. We are very serious about it. That’s how fast this technology is moving: 45 years is a hell of a long time in IT.’

Pearson, 44, has formed his mind-boggling vision of the future after graduating in applied mathematics and theoretical physics, spending four years working in missile design and the past 20 years working in optical networks, broadband network evolution and cybernetics in BT’s laboratories. He admits his prophecies are both ‘very exciting’ and ‘very scary’.

He believes that today’s youngsters may never have to die, and points to the rapid advances in computing power demonstrated last week, when Sony released the first details of its PlayStation 3. It is 35 times more powerful than previous games consoles. ‘The new PlayStation is 1 per cent as powerful as a human brain,’ he said. ‘It is into supercomputer status compared to 10 years ago. PlayStation 5 will probably be as powerful as the human brain.’
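The arithmetic behind that extrapolation is easy to check. Here is a back-of-envelope sketch in Python using only the figures quoted above; the assumption that the 35-fold jump repeats for two further console generations is Pearson's extrapolation, used purely for illustration, not a Sony roadmap.

    # Back-of-envelope check of the article's figures (illustrative only).
    ps3_fraction_of_brain = 0.01   # "1 per cent as powerful as a human brain"
    growth_per_generation = 35     # PS3 quoted as 35 times more powerful than its predecessor

    # Assume, purely for illustration, that the same factor holds for later generations.
    for generation, name in enumerate(["PlayStation 4", "PlayStation 5"], start=1):
        fraction = ps3_fraction_of_brain * growth_per_generation ** generation
        print(f"{name}: ~{fraction:.2f} human-brain equivalents by this crude measure")

On those numbers a hypothetical PlayStation 4 would sit at about a third of a brain and a PlayStation 5 at roughly twelve times one, which is the sense in which Pearson expects the fifth generation to match the brain.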

The world’s fastest computer, IBM’s BlueGene, can perform 70.72 trillion calculations per second (teraflops) and is accelerating all the time. But anyone who believes in the uniqueness of consciousness or the soul will find Pearson’s next suggestion hard to swallow. ‘We’re already looking at how you might structure a computer that could possibly become conscious. There are quite a lot of us now who believe it’s entirely feasible.

‘We don’t know how to do it yet but we’ve begun looking in the same directions, for example at the techniques we think that consciousness is based on: information comes in from the outside world but also from other parts of your brain and each part processes it on an internal sensing basis. Consciousness is just another sense, effectively, and that’s what we’re trying to design in a computer. Not everyone agrees, but it’s my conclusion that it is possible to make a conscious computer with superhuman levels of intelligence before 2020.’
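Pearson's description of consciousness as 'just another sense' is, at bottom, a claim about information flow: each part of the system processes external input together with signals from the other parts. The toy Python sketch below caricatures only that data flow; the module names, weighting and update rule are invented for illustration and imply nothing about how, or whether, consciousness could actually be engineered.

    # Toy caricature of "internal sensing": modules process external input
    # plus the outputs of the other modules. Purely illustrative.
    external_input = [0.2, 0.9, 0.4]                            # three external readings
    modules = {"vision": 0.0, "hearing": 0.0, "monitor": 0.0}   # hypothetical module states

    for step in range(5):
        previous = dict(modules)       # each module also "senses" the others' last outputs
        for i, name in enumerate(modules):
            internal = sum(v for k, v in previous.items() if k != name) / 2
            modules[name] = 0.5 * external_input[i] + 0.5 * internal
        print(step, {k: round(v, 3) for k, v in modules.items()})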

He continued: ‘It would definitely have emotions – that’s one of the primary reasons for doing it. If I’m on an aeroplane I want the computer to be more terrified of crashing than I am so it does everything to stay in the air until it’s supposed to be on the ground.

‘You can also start automating an awful lot of jobs. Instead of phoning up a call centre and getting a machine that says, “Type 1 for this and 2 for that and 3 for the other,” if you had machine personalities you could have any number of call staff, so you can be dealt with without ever waiting in a queue at a call centre again.’

Pearson, from Whitehaven in Cumbria, collaborates on technology with some developers and keeps a watching brief on advances around the world. He concedes the need to debate the implications of progress. ‘You need a completely global debate. Whether we should be building machines as smart as people is a really big one. Whether we should be allowed to modify bacteria to assemble electronic circuitry and make themselves smart is already being researched.

‘We can already use DNA, for example, to make electronic circuits so it’s possible to think of a smart yoghurt some time after 2020 or 2025, where the yoghurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yoghurt before you eat it.’

In the shorter term, Pearson identifies the next phase of progress as ‘ambient intelligence’: chips with everything. He explained: ‘For example, if you have a pollen count sensor in your car you take some antihistamine before you get out. Chips will become small enough that you can start impregnating them into the skin. We’re talking about video tattoos as very, very thin sheets of polymer that you just literally stick on to the skin and they stay there for several days. You could even build in cellphones and connect them to the network, use them as video phones and download videos or receive emails.’
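The pollen-count example is the simplest possible ambient-intelligence rule: a sensor reading crosses a threshold and the system offers advice. Here is a minimal hypothetical sketch in Python; the threshold value and the wording of the advice are invented for illustration.

    # Hypothetical ambient-intelligence rule for Pearson's pollen example.
    def advise_before_leaving_car(pollen_grains_per_m3: float, threshold: float = 50.0) -> str:
        """Return advice from an in-car pollen sensor reading (threshold is illustrative)."""
        if pollen_grains_per_m3 >= threshold:
            return "High pollen count: consider taking an antihistamine before getting out."
        return "Pollen count low: no action needed."

    print(advise_before_leaving_car(80.0))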

Philips, the electronics giant, is developing the world’s first rollable display, which is just a millimetre thick and has a 12.5cm screen that can be wrapped around the arm. It expects to start production within two years.

The next age, he predicts, will be that of ‘simplicity’ in around 2013-2015. ‘This is where the IT has actually become mature enough that people will be able to drive it without having to go on a training course.

‘Forget this notion that you have to have one single chip in the computer which does everything. Why not just get a stack of little self-organising chips in a box and they’ll hook up and do it themselves? It won’t be able to get any viruses because most of the operating system will be stored in hardware which the hackers can’t write to. If your machine starts going wrong, you just push a button and it’s reset to the factory setting.’

Pearson’s third age is ‘virtual worlds’ in around 2020. ‘We will spend a lot of time in virtual space, using high quality, 3D, immersive, computer generated environments to socialise and do business in. When technology gives you a life-size 3D image and the links to your nervous system allow you to shake hands, it’s like being in the other person’s office. It’s impossible to believe that won’t be the normal way of communicating.’


BBC

Scientists at MIT replicate brain activity with chip

17 November 2011, 20:42 GMT
The chip replicates how information flows around the brain

Scientists are getting closer to the dream of creating computer systems that can replicate the brain.

Researchers at the Massachusetts Institute of Technology have designed a computer chip that mimics how the brain’s neurons adapt in response to new information.

Such chips could eventually enable communication between artificially created body parts and the brain.

They could also pave the way for artificial intelligence devices.

There are about 100 billion neurons in the brain, each of which forms synapses – the connections between neurons that allow information to flow – with many other neurons. These synapses continually adapt, strengthening or weakening in response to new information.

This process is known as plasticity and is believed to underpin many brain functions, such as learning and memory.

Neural functions

The MIT team, led by research scientist Chi-Sang Poon, has been able to design a computer chip that can simulate the activity of a single brain synapse.

Activity in the synapses relies on so-called ion channels which control the flow of charged atoms such as sodium, potassium and calcium.

The ‘brain chip’ has about 400 transistors and is wired up to replicate the circuitry of the brain.

Current flows through the transistors in the same way as ions flow through ion channels in a brain cell.

“We can tweak the parameters of the circuit to match specific ion channels… We now have a way to capture each and every ionic process that’s going on in a neuron,” said Mr Poon.
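In software terms, what the chip does in analogue silicon is run a gated conductance whose parameters determine the channel's behaviour. The Python toy below is a textbook-style single gating variable, not the MIT group's circuit; the rate constants, reversal potential and test voltage are arbitrary illustrative values, but they show what 'tweaking the parameters to match specific ion channels' amounts to.

    # Toy ion-channel model: one gating variable controlling a conductance.
    # Parameters are illustrative, not those of the MIT chip.
    dt = 0.01          # time step, ms
    g_max = 1.0        # maximum conductance (arbitrary units)
    e_rev = -77.0      # reversal potential (mV), potassium-like
    v = -40.0          # membrane held at a fixed test voltage (mV)

    alpha, beta = 0.5, 0.1   # opening/closing rates; changing these mimics different channels
    m = 0.0                  # fraction of channels open

    for _ in range(500):
        m += dt * (alpha * (1.0 - m) - beta * m)    # first-order gating kinetics
    current = g_max * m * (v - e_rev)               # ionic current through the channel
    print(f"open fraction ~{m:.2f}, current ~{current:.1f} (arbitrary units)")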

Neurobiologists seem to be impressed.

It represents “a significant advance in the efforts to incorporate what we know about the biology of neurons and synaptic plasticity onto …chips,” said Dean Buonomano, a professor of neurobiology at the University of California.

“The level of biological realism is impressive,” he added.

The team plans to use their chip to build systems to model specific neural functions, such as visual processing.

Such systems could be much faster than conventional computers, which take hours or even days to simulate a brain circuit; the chip could ultimately prove even faster than the biological process itself.


Developing a human brain in brain chip for a hybrid brain

BBC News

 Tuesday, 11 March 2008, 10:32 GMT 

Chemical brain controls nanobots
By Jonathan Fildes
Science and technology reporter, BBC News

The researchers have already built larger ‘brains’

A tiny chemical “brain” which could one day act as a remote control for swarms of nano-machines has been invented.

The molecular device – just two billionths of a metre across – was able to control eight of the microscopic machines simultaneously in a test.

Writing in Proceedings of the National Academy of Sciences, scientists say it could also be used to boost the processing power of future computers.

Many experts have high hopes for nano-machines in treating disease.

“If [in the future] you want to remotely operate on a tumour you might want to send some molecular machines there,” explained Dr Anirban Bandyopadhyay of the International Center for Young Scientists, Tsukuba, Japan.

“But you cannot just put them into the blood and [expect them] to go to the right place.”

Dr Bandyopadhyay believes his device may offer a solution. One day such devices may be able to guide nanobots through the body and control their functions, he said.

“That kind of device simply did not exist; this is the first time we have created a nano-brain,” he told BBC News.

Computer brain

The machine is made from 17 molecules of the chemical duroquinone. Each one is known as a “logic device”.


They each resemble a ring with four protruding spokes that can be independently rotated to represent four different states.

One duroquinone molecule sits at the centre of a ring formed by the remaining 16. All are connected by chemical bonds, known as hydrogen bonds.

The state of the control molecule at the centre is switched by a scanning tunnelling microscope (STM).

These large machines are a standard part of the nanotechnologist’s tool kit, and allow the viewing and manipulation of atomic surfaces.

Using the STM, the researchers showed they could change the central molecule’s state and simultaneously switch the states of the surrounding 16.

“We instruct only one molecule and it simultaneously and logically instructs 16 others at a time,” said Dr Bandyopadhyay.

The configuration allows four billion different possible combinations of outcomes.
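That four-billion figure is just the combinatorics of the geometry: sixteen surrounding molecules, each with four rotational states. The short Python sketch below checks the count and caricatures the one-to-many switching; the rule that every follower simply copies the control state is invented for illustration, since the real device's logic is fixed by its chemistry.

    # 16 controlled molecules, 4 states each: the "four billion" combinations in the article.
    states_per_molecule, controlled_molecules = 4, 16
    print(states_per_molecule ** controlled_molecules)   # 4294967296, about 4.3 billion

    # Toy one-to-many control: one instruction at the centre switches all 16 at once.
    def broadcast(control_state: int) -> list[int]:
        """Hypothetical rule: every surrounding molecule adopts the control state."""
        return [control_state] * controlled_molecules

    print(broadcast(2))   # one instruction, sixteen simultaneous state changes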

The two-nanometre-diameter structure was inspired by the parallel communication of glial cells inside a human brain, according to the team.

Robot control

To test the control unit, the researchers simulated docking eight existing nano-machines to the structure, creating a “nano-factory” or a kind of “chemical Swiss army knife”.

Scientists believe nano-machines could have medical applications

The attached devices, created by other research groups, included the “world’s tiniest elevator”, a molecular platform that can be raised or lowered on command.

The device is about two and a half nanometres (billionths of a metre) high, and the lift moves less than one nanometre up and down.

All eight machines simultaneously responded to a single instruction in the simulation.

“We have clear cut evidence that we can control those machines,” said Dr Bandyopadhyay.

This “one-to-many” communication and the device’s ability to act as a central control unit also raises the possibility of using the device in future computers, he said.

Machines built using devices such as this would be able to process 16 bits of information simultaneously.

Current silicon Central Processing Units (CPUs) can only carry out one instruction at a time, albeit millions of times per second.

The researchers say they have already built faster machines, capable of 256 simultaneous operations, and have designed one capable of 1024.

However, according to Professor Andrew Adamatzky of the University of the West of England (UWE), making a workable computer would be very difficult at the moment.

“As with other implementations of unconventional computers the application is very limited, because they operate [it] using scanning tunnel microscopy,” he said.

But, he said, the work is promising.

“I am sure with time such molecular CPUs can be integrated in molecular robots, so they will simply interact with other molecular parts autonomously.”

Revolution in Artificial Intelligence


Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

ScienceDaily (Apr. 2, 2012) — As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who in the 1930s set out the basis for digital computing and anticipated the electronic age, they are still questing after a machine as adaptable and intelligent as the human brain.



Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing’s work to its next logical step. She is translating her 1993 discovery of what she has dubbed “Super-Turing” computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in the current issue of Neural Computation.

“This model is inspired by the brain,” she says. “It is a mathematical formulation of the brain’s neural networks with their adaptive abilities.” The authors show that when the model is installed in an environment offering constant sensory stimuli like the real world, and when all stimulus-response pairs are considered over the machine’s lifetime, the Super-Turing model yields an exponentially greater repertoire of behaviors than the classical computer or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

“Each time a Super-Turing machine gets input it literally becomes a different machine,” Siegelmann says. “You don’t want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you’d like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you’d like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That’s what this model can offer.”
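The property Siegelmann keeps returning to, that every input leaves behind a different machine, can be caricatured with a tiny recurrent update whose weight shifts after each stimulus. The Python sketch below is only an illustration of that adaptivity, not her Super-Turing construction, which rests on analog recurrent neural networks with real-valued weights.

    # Toy "machine that changes with every input": the weight adapts after each stimulus.
    # Illustrates the adaptivity described above; not Siegelmann's Super-Turing model.
    weight, state, learning_rate = 0.5, 0.0, 0.1

    for stimulus in [1.0, -0.5, 0.8, 0.2]:
        state = weight * state + stimulus            # the response depends on the current weight
        weight += learning_rate * stimulus * state   # the stimulus itself reshapes the machine
        print(f"stimulus={stimulus:+.1f} -> state={state:+.3f}, weight now {weight:.3f}")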

Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they’ve been told what to expect and how to respond, Siegelmann says. But they can’t take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks.

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called “adaptive inference.” In 1993, Siegelmann, then at Rutgers, showed independently in her doctoral thesis that a very different kind of computation, vastly different from the “calculating computer” model and more like Turing’s prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly after.

“I was young enough to be curious, wanting to understand why the Turing model looked really strong,” she recalls. “I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations.”

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is represented by the notation aleph-zero, ℵ₀, representing also the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that Super-Turing computation has 2^ℵ₀ possible behaviors. “If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe,” she explains.
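The finite analogy in that quotation is easy to verify. Here is a minimal Python check; the 10^80 figure for atoms in the observable universe is a commonly cited rough estimate, used only for comparison.

    # The article's finite analogy: 300 behaviours versus 2^300.
    behaviours_classical = 300
    behaviours_super = 2 ** behaviours_classical
    atoms_in_observable_universe = 10 ** 80      # commonly cited rough estimate

    print(f"2^300 is about {behaviours_super:.2e}")          # roughly 2.04e+90
    print(behaviours_super > atoms_in_observable_universe)   # True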

The new Super-Turing machine will not only be flexible and adaptable but economical. This means that when presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence, Siegelmann says.

“If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain,” she adds.

Siegelmann and two colleagues recently were notified that they will receive a grant to make the first ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.

Efficiency in Multi-Core Chips

New Bandwidth Management Techniques Boost Operating Efficiency in Multi-Core Chips

ScienceDaily (May 25, 2011) — Researchers from North Carolina State University have developed two new techniques to help maximize the performance of multi-core computer chips by allowing them to retrieve data more efficiently, which boosts chip performance by 10 to 40 percent.


To do this, the new techniques allow multi-core chips to deal with two things more efficiently: allocating bandwidth and “prefetching” data.

Multi-core chips are supposed to make our computers run faster. Each core on a chip is its own central processing unit, or computer brain. However, there are things that can slow these cores. For example, each core needs to retrieve data from memory that is not stored on its chip. There is a limited pathway — or bandwidth — these cores can use to retrieve that off-chip data. As chips have incorporated more and more cores, the bandwidth has become increasingly congested — slowing down system performance.

One of the ways to expedite core performance is called prefetching. Each chip has its own small memory component, called a cache. In prefetching, the cache predicts what data a core will need in the future and retrieves that data from off-chip memory before the core needs it. Ideally, this improves the core’s performance. But, if the cache’s prediction is inaccurate, it unnecessarily clogs the bandwidth while retrieving the wrong data. This actually slows the chip’s overall performance.

“The first technique relies on criteria we developed to determine how much bandwidth should be allotted to each core on a chip,” says Dr. Yan Solihin, associate professor of electrical and computer engineering at NC State and co-author of a paper describing the research. Some cores require more off-chip data than others. The researchers use easily-collected data from the hardware counters on each chip to determine which cores need more bandwidth. “By better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance,” Solihin says.

“The second technique relies on a set of criteria we developed for determining when prefetching will boost performance and should be utilized,” Solihin says, “as well as when prefetching would slow things down and should be avoided.” These criteria also use data from each chip’s hardware counters. The prefetching criteria would allow manufacturers to make multi-core chips that operate more efficiently, because each of the individual cores would automatically turn prefetching on or off as needed.
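The article does not reproduce the researchers' actual criteria, so the Python sketch below is only a hypothetical illustration of the kind of per-core decisions it describes: hardware-counter-style metrics (names and thresholds invented here) drive both each core's bandwidth share and whether its prefetcher stays on.

    # Hypothetical per-core policy in the spirit of the two techniques described above.
    # Counter names and thresholds are invented; the paper's criteria are not given in the article.
    def bandwidth_share(offchip_demand: dict[str, float]) -> dict[str, float]:
        """Give each core a share of off-chip bandwidth proportional to its memory demand."""
        total = sum(offchip_demand.values())
        return {core: demand / total for core, demand in offchip_demand.items()}

    def prefetch_enabled(useful_prefetches: int, total_prefetches: int,
                         bandwidth_utilisation: float) -> bool:
        """Keep prefetching on only while it is accurate and bandwidth is not saturated."""
        accuracy = useful_prefetches / max(total_prefetches, 1)
        return accuracy > 0.5 and bandwidth_utilisation < 0.9

    print(bandwidth_share({"core0": 3.0, "core1": 1.0}))   # {'core0': 0.75, 'core1': 0.25}
    print(prefetch_enabled(useful_prefetches=80, total_prefetches=100, bandwidth_utilisation=0.6))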

Utilizing both sets of criteria, the researchers were able to boost multi-core chip performance by 40 percent, compared to multi-core chips that do not prefetch data, and by 10 percent over multi-core chips that always prefetch data.

The paper, “Studying the Impact of Hardware Prefetching and Bandwidth Partitioning in Chip-Multiprocessors,” will be presented June 9 at the International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS) in San Jose, Calif. The paper was co-authored by Dr. Fang Liu, a former Ph.D. student at NC State. The research was supported, in part, by the National Science Foundation.

NC State’s Department of Electrical and Computer Engineering is part of the university’s College of Engineering.
