The missing link between us and the future

In the early 1990s, the IT industry got very excited about virtual reality, the idea that you could use some sort of headset display to wander around in a 3d computer-generated world. We quickly realised there are zillions of variations on this idea, and after the one that became current computer gaming (3d worlds on a 2d monitor) the biggest of the rest was augmented reality, where data and images could be superimposed on the field of view.

Now we are seeing apps on phones and pads that claim to be augmented reality, showing where the nearest tube station is, for example. To a point I guess they are, but only insofar as they let you hold up a display in front of you and see images relevant to your location and direction. They hardly amount to a head-up display, and fall a long way short of the kind of superimposition we've been used to in sci-fi since Robocop or Terminator. It is clear that we really need a proper head-up display, one that doesn’t require you to take a gadget out and hold it up in front of you.
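Under the hood, that location-and-direction filtering is simple geometry: compute the compass bearing from the phone to a point of interest, and show the label only if the bearing falls inside the camera's horizontal field of view. A minimal sketch in Python — the 60° field of view, the coordinates, and the function names are illustrative assumptions, not any real app's code:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def in_view(device_heading_deg, poi_bearing_deg, fov_deg=60.0):
    """True if the point of interest falls within the camera's horizontal field of view."""
    # Wrap the angular difference into [-180, 180) before comparing.
    diff = (poi_bearing_deg - device_heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# Device in central London facing due north; a station slightly north-north-east.
b = bearing_deg(51.5074, -0.1278, 51.5200, -0.1250)
print(in_view(0.0, b))   # bearing is a few degrees east of north, so True
```

A real head-up display would also fold in pitch and roll from the accelerometer and project the label into screen coordinates, but this go/no-go bearing test is the core of showing "images relevant to the location and direction".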

There are some head-up displays out there. Some overlay a display on a small area of your field of view, often using small projectors and mirrors. Some use visors. However, the video-visor-based displays are opaque. They are fine for watching TV or playing games while seated, but not much use for wandering around.

This will change in the next 18 months to 2 years. Semi-transparent visors will begin to appear then. The few years after that will undoubtedly see rapid development of them, eventually bringing a full hi-res 3d overlay capability. And that will surely be a major disruptive technology. Just as we are getting used to various smart phones, pads, ebook readers and 3d TVs, they could all be absorbed into a general-purpose head-up display that can be used for pretty much anything.

It is hard to overstate the potential of this kind of interface once it reaches good enough quality. It allows anything from TV, games, or the web, to be blended with any real world scene or activity. This will transform how we shop, work and socialise, how we design and use buildings, and even how we use art or display ourselves. Each of these examples could easily fill a book.  The whole of the world wide web was enabled by the convergence of just the computing and telecoms industries. The high quality video visor will enable convergence of the real world with the whole of the web, media, and virtual worlds, not just two industry sectors. Augmented reality will be a huge part of that, but even virtual reality and the zillions of variants can then start to be explored too.

In short, the semi-transparent video visor is the missing link. It is the biggest bottleneck now stopping the future arriving. Everything till we get that is a sideshow.

Artificial Hippocampus, the Borg Hive Mind, and Other Neurological Endeavors

November 15

Many of us know the ‘Borg hive mind’ from TV programs where characters are linked through brain-to-brain or computer-to-brain interactions. However, this is more than a science-fiction fantasy. The idea was contemplated seriously in the 2002 National Science Foundation report, Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science. ‘Techlepathy’ is the word coined for the communication of information directly from one mind to another (i.e. telepathy) with the assistance of technology.

Many research activities focus on neuro-engineering and the cognitive sciences. Many neuroscientists and bioengineers now work on:

  • cognitive computing
  • digitally mapping the human brain; the mouse brain map has just been published
  • developing microcircuits that can repair brain damage, and
  • numerous other projects related to changing the cognitive abilities and functioning of humans, and artificial intelligence.

Journals exist for all of these activities — including the Human Brain Mapping journal. Some envision a Human Cognome Project. James Albus, a senior fellow and founder of the Intelligent Systems Division of the National Institute of Standards and Technology, believes the era of ‘engineering the mind’ is here. He has proposed a national program for developing a scientific theory of the mind.

Neuromorphic engineering, Wikipedia says, “is a new interdisciplinary discipline that takes inspiration from biology, physics, mathematics, computer science and engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.”
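Those biological design principles can be made concrete with the field's simplest building block, a leaky integrate-and-fire neuron: membrane voltage leaks toward a resting value, integrates input current, and emits a spike when it crosses a threshold. A toy sketch — the parameter values are illustrative, not taken from any particular neuromorphic system:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Leaky integrate-and-fire neuron; returns spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leak toward rest plus input drive (input already scaled to volts/second).
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset  # reset after each spike, mimicking a refractory event
    return spike_times

# One second of constant drive strong enough to cross threshold repeatedly.
spikes = simulate_lif(np.full(10000, 2.0))
print(len(spikes) > 0)  # True: the neuron fires regularly under sustained input
```

Neuromorphic hardware implements dynamics like these directly in analog or digital silicon rather than simulating them in software, which is where its efficiency claims come from.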


There are many examples.

Researchers from Harvard University have linked nanowire field-effect transistors to neurons. Three applications are envisioned: hybrid biological/electronic devices, interfaces to neural prosthetics, and the capture of high-resolution information about electrical signals in the brain. Research is advancing in four areas: neuronal networks, interfaces between the brain and external neural prosthetics, real-time cellular assays, and hybrid circuits that couple digital nanoelectronic and biological computing components.

Numenta, a company formed in 2005, states on its webpage that it “is developing a new type of computer memory system modelled after the human neocortex.”

Kwabena Boahen, an associate professor of bioengineering at Stanford University, has developed Neurogrid, “a specialized hardware platform that will enable the cortex’s inner workings to be simulated in detail — something outside the reach of even the fastest supercomputers.” He is also working on a silicon retina and a silicon chip that emulates the way the juvenile brain wires itself up.

Researchers at the University of Washington are working on an implantable electronic chip that may help to establish new nerve connections in the part of the brain that controls movement.

The Blue Brain project — a collaboration of IBM and the Ecole Polytechnique Federale de Lausanne, in Lausanne, Switzerland — will create a detailed model of the circuitry in the neocortex.

A DNA switch (‘nanoactuator’) has been developed by Dr. Keith Firman at the University of Portsmouth and other European researchers, which can interface living organisms with computers.

Kevin Warwick had an RFID transmitter (a future column will deal with RFID chips) implanted beneath his skin in 1998, which allowed him to control doors, lights, heaters, and other computer-controlled devices in his proximity. In another experiment, he and his wife Irena each had electrodes surgically implanted in their arms. The electrodes were linked by radio signals to a computer, which created a direct link between their nervous systems. Kevin's wife felt it when he moved his arm.


In his book I, Cyborg, Kevin Warwick imagines that 50 years from now most human brains will be linked electronically through a global computer network.

St. Joseph's Hospital in the United States has implanted neurostimulators (deep brain stimulators) using nanowires to connect a stimulating device to the brain. A pacemaker-like device is implanted in the chest, and flexible wires are implanted in the brain. Electrical impulses sent from the ‘pacemaker’ to the brain are used to treat Parkinson's disease, migraine headaches, chronic pain, depression, and obsessive-compulsive disorder, to improve the mobility of stroke victims, and to curb cravings in drug addicts.

In 2003/2004 a variety of publications (see links below) reported on the efforts of professor Theodore W. Berger, director of the Center for Neural Engineering at the University of Southern California, and his colleagues to develop the world's first brain prosthesis – an ‘artificial hippocampus’ that is supposed to act as a memory bank. These publications highlighted in particular the use of such implants for Alzheimer's patients.

The research program is proceeding in four stages: (1) tests on slices of rat brains kept alive in cerebrospinal fluid… reported as successful in 2004; (2) tests on live rats which are to take place within three years; (3) tests on live monkeys; and (4) tests on humans — very likely on Alzheimer’s patients first.

The Choice is Yours

If these advancements come to pass, they will create many ethical, legal, privacy and social issues. For the artificial hippocampus we should ask: would brain implants force some people to remember things they would rather forget? Could someone manipulate our memory? What would be the consequence of uploading information (see my education column)? Will we still have control over what we remember? Could we be forced to remember something over and over? If we can communicate with each other through a computer what will be the consequence of a Global Brain?

It is important that people become more involved in the governance of neuro-engineering and cognitive science projects. We should not neglect these areas because we perceive them to be science fiction. We also need to look beyond the outlined ‘medical applications.’ If the artificial hippocampus works, it will likely be used for more than dealing with diseases.

I will cover brain-machine interfaces, neuro-pharmaceutical-based ‘cognitive enhancement,’ and neuroethics and the ethics of artificial intelligence in future columns.

Gregor Wolbring is a biochemist, bioethicist, science and technology ethicist, disability/vari-ability studies scholar, and health policy and science and technology studies researcher at the University of Calgary. He is a member of the Center for Nanotechnology and Society at Arizona State University; Member CAC/ISO – Canadian Advisory Committees for the International Organization for Standardization section TC229 Nanotechnologies; Member of the editorial team for the Nanotechnology for Development portal of the Development Gateway Foundation; Chair of the Bioethics Taskforce of Disabled People’s International; and Member of the Executive of the Canadian Commission for UNESCO. He publishes the Bioethics, Culture and Disability website, moderates a weblog for the International Network for Social Research on Disability, and authors a weblog on NBICS and its social implications.


Man vs Machine



Politically planned violations of human rights go on in all EU nations, aimed at increasing state power and reducing human influence. In Sweden, the FOI (Swedish Defence Research Agency) has for decades been developing remote-control systems for our neurological functions, via bio-chips injected during health care. In FOI's annual report, they describe the project as monitoring and changing the cognitive functions of people throughout their life span, i.e. thoughts, perception and common sense.


Sweden's most dangerous criminal organization: the FOI ruins both democracy and human rights by connecting the human brain to supercomputers.

The EU Commission's Ethical Council, chaired by the Swedish professor Göran Hermerén, delivered a 30-page document in protest to the EU Commission in 2005. They declared that this technology was a threat to both democracy and human autonomy in all EU nations:


“Brain-computer interface, or direct brain control: the technologies involved are communication technologies: they take information from the brain and externalize it… Freedom of researchers may conflict with the obligation to safeguard the health of research subjects… the freedom to use implants in one's body might collide with potential negative social effects… How far can such implants be a threat to human autonomy when they are implanted in our brains?… How far should we be subject to the control of such devices by other people using these devices?… The use of implants in order to have a remote control over the will of people should be strictly prohibited… To what extent will this technology be misused by the military?”


FOI Director Jan-Olof Lind, chief of the institution founded on the raping of humans. He should be held legally responsible for crimes against human rights.

This planned population project does not only remove human rights; it transforms us via behavioral manipulation with invasive brain technology. FOI wrote in its program: “We have unique tools and methodologies for the modeling of human behavior… The goal is to design systems able to exploit human cognitive potential (i.e. the ability to perceive, understand and organize information) throughout the course of a person's life time… Regardless that the consequences of this for people are a strong physical burden to bear, it includes as well a risk of serious injury.”

The systems function via two-way radio communication, implants and supercomputers. The EU board wrote: “How far should we let implants get ‘under our skins’?… Indeed, individuals are dispossessed of their own bodies and thereby of their own autonomy. The body ends up being under others' control. Individuals are being modified, via various electronic devices, under-skin chips and smart tags, to such an extent that they are increasingly turned into networked individuals… Does a human being cease to be such a ‘being’ in cases where some parts of his or her body – particularly the brain – are substituted and/or supplemented by implants?”


Prime Minister Fredrik Reinfeldt has accepted the use of humans in experimental research and behavioral manipulation. He has also supported FOI's declaration of abuse of humans in the name of science.

This is a cause of both madness and anti-social trends in our societies. The brain project has been developed in secrecy for decades; the mentally ill have been put to sleep and implanted with electrodes in their brains, and hospitals are injecting brain chips into unwitting patients on a large scale.


Another view of the military conquest of the brain. The Department of Defense has taken over the Karolinska Institute's neuroscience, and FOI managers direct the operations. A similar relationship exists for other parts of the Karolinska Institute, which is controlled…

The threat to human rights, freedom and a civilized society could not be more serious or totalitarian. It does not only place a person behind an iron curtain; it creates a brain barrier for one's own thoughts and personality. The project utilizes many forms of research: biological experiments, neuroscience, social engineering, personality modification etc. This is a techno-political agenda for the future that goes a step further than any traditional dictatorship ever has. It intends to transform us into biological machinery and exploited guinea pigs. 40 years ago, the Swedish state report Choosing Future, by Alva Myrdal, warned that people would have small chances of protecting their rights with regard to behavior technology. Said quote is from the official state report, SOU 1972:59:

“Research into brain function and behavior is designed primarily to clarify the nature and extent of the changes that can be achieved with the different methods, and thus provide information on new opportunities to alleviate human suffering, and new risks of control and modification of behavior against people's will.”

Several leading professors have suggested that antisocial trends have been spread through these military brain systems. The EU Ethical Council asked what danger the military constituted. In fact, they are the central factor in the game over the human brain. Fredrik Reinfeldt had, even before he became Prime Minister, made up his mind to stand behind FOI's abuse. He did the same in 2007, after forming the new government, when he reaffirmed his opinion on this matter.


I, Eric R. Naeslund, have written this paper and formed the accusation. I have also supported a network including journalists, organizations and brain activists in a joint project to bring the issue to media attention and everybody's knowledge.



For the last 40 years, and with practically no media attention, there has been an ongoing debate within the state regarding this brain technology. A former Director General of the Data Inspection Board, Stina Wahlström, took up the subject in relation to human rights in the annual book 1989-1990. She wrote: “Obviously, research must include the same ethical values that are generally the basis of law in our society… It is necessary to limit the research, and this restriction is needed in a democratic society… People's integrity has been violated, which often means unwittingly or unwillingly being forced to participate in a research project. Legislation for such coercion does not belong in a democratic society.” That was written 20 years ago, yet development has gone so far that the 12 members of the European Ethical Council stated: “…when these implants are within our own brains.” This is a threat intended to include all of us in experimental research and behavioral manipulation.

A Prime Minister who accepts the secret brain project has also launched a battle against people's fundamental freedom and human rights.
In Fredrik Reinfeldt's state, it is important to repeat terms like ‘justice’ and ‘the open society’ as indoctrinated concepts, to hide the reality of building even higher walls of coercion and censorship than any previous despot has done. Metaphorical impenetrable barbed-wire fences are steadily being created in more people's brains, to replace freedom with control. The state's ravaging has FOI as its spearhead. Several times they declare not only that they accept the destruction of humans, but also that certain research is actually based on causing harm, suffering and trauma in people. FOI's takeover of key components of the Karolinska Institute has facilitated the professors' and researchers' projects. A neuro-professor stated in a speech that they couldn't avoid creating disease and death among those they misused. A scary reality that is not uncommon.

To describe the subject from an international perspective: the New York Times has had the courage to challenge the U.S. government's covert brain project. Accusing the Pentagon and the CIA of perpetrating the same systems of persecution as the defence departments within the EU, it published three political editorials and 50 articles, and demanded better knowledge and a public debate. The first editorial was published in 1967 under the heading Push Button People. It warned of the possibility of enslaving the brain and wrote that it was likely some nations had plans to suppress their citizens by brain technology. The second came in 1970 under the newly formed term Brain Wave. It indicated that we had to update the term ‘brain washing’ to ‘brain waving’, and assumed that Orwell's 1984 had expired and a new and worse danger was at hand: that every newborn child's first experience of life would be neurosurgery, to be implanted with a transmitter and for its lifetime have emotions and reason controlled by the state. The third editorial was published in August 1977, after the New York Times had published 30 revealing articles on the CIA's brain projects during the summer. Under the heading Control CIA Not Behavior, it stated that no one knows how many were injured or killed, and it demanded legal action against those involved and financial compensation for victims.

It's a bigger nightmare here and now, since the experimental program has developed into a permanent state operation. Senator John Glenn spent his final three years in the Senate (1994-1997) trying to regulate the abuses. In his closing speech in January 1997 he called the question one of the most important of our time. Here we stand at one of mankind's crucial crossroads: individual freedom versus unlimited state power to reduce man to governmental components. Who wants to live his life with chips and manipulated perceptions? No one, of course! The EU Council wrote that it wanted to give people power against the introduction of systems that reduce freedom and autonomy. This topic, more than any other issue, is reshaping the future, the human brain and life. It must obviously come up for debate in both parliament and the media. We all have a responsibility to contribute: journalists, social activists and of course those members of parliament who are opposed to brain chips, behavior control, human experimentation and undemocratic ideas must make themselves heard too.

For anyone who wants more information concerning the issue – there is extensive information material in both English and Swedish – contact

Scientists Warn of Ethical Battle Concerning Military Mind Control

Advances in neuroscience are closer than ever to becoming a reality, but scientists are warning the military – along with their peers – that with great power comes great responsibility!

March 20, 2012

A future of brain-controlled tanks, automated attack drones and mind-reading interrogation techniques may arrive sooner than later, but advances in neuroscience that will usher in a new era of combat come with tough ethical implications for both the military and scientists responsible for the technology, according to one of the country’s leading bioethicists.

“Everybody agrees that conflict will be changed as new technologies are coming on,” says Jonathan Moreno, author of Mind Wars: Brain Science and the Military in the 21st Century. “But nobody knows where that technology is going.”


Moreno warns in an essay published in the science journal PLoS Biology Tuesday that the military’s interest in neuroscience advancements “generates a tension in its relationship with science.”

“The goals of national security and the goals of science may conflict. The latter employs rigorous standards of validation in the expansion of knowledge, while the former depends on the most promising deployable solutions for the defense of the nation,” he writes.

Much of neuroscience focuses on returning function to people with traumatic brain injuries, he says. Just as Albert Einstein didn't know his special theory of relativity could one day be used to create a nuclear weapon, neuroscience research intended to heal could soon be used to harm.

“Neuroscientists may not consider how their work contributes to warfare,” he adds.

Moreno says there is a fine line between using neuroscience devices to allow an injured person to regain baseline functions and enhancing someone’s body to perform better than their natural body ever could.

“Where one draws that line is not obvious, and how one decides to cross that line is not easy. People will say ‘Why would we want to deny warfighters these advantages?'” he says.


Moreno isn't the only one thinking about this. The Brookings Institution's Peter Singer writes in his book, Wired for War: The Robotics Revolution and Conflict in the 21st Century, that “the Pentagon's real-world record with things like the aboveground testing of atomic bombs, Agent Orange, and Gulf War syndrome certainly doesn't inspire the greatest confidence among the first generation of soldiers involved [in brain enhancement research].”

The military, scientists and ethicists are increasingly wondering how neuroscience technology will change the battlefield. The staggering possibilities are further along than many think. There is already development on automated drones that are programmed to make their own decisions about whom to kill within the rules of war. Other ideas that are closer than you think to becoming a military reality: tanks controlled from half a world away, memory erasures that could prevent PTSD, and “brain fingerprinting” that could be used to extract secrets from enemies. Moreno foretold some of these developments when he first published Mind Wars in 2006, but not without trepidation.

“I was afraid I'd be dismissed as a paranoid schizophrenic when I first published the book,” he says. But then a funny thing happened: the Department of Defense and other military groups began holding panels on neurotechnology to determine how and when it should be used. “I was surprised how quickly the policy questions moved forward. Questions like: ‘Can we use autonomous attack drones?’ ‘Must there be a human being in the vehicle?’ ‘How much of a payload can it have?’ There are real questions coming up in the international legal community.”

All of those questions will have to be answered sooner than later, Moreno says, along with a host of others. Should soldiers have the right to refuse “experimental” brain implants? Will the military want to use some of this technology before science deems it safe?

“There’s a tremendous tension about this,” he says. “There’s a great feeling of responsibility that we push this stuff out so we’re ahead of our adversaries.”

The program is to model machine intelligence on human intelligence by understanding how our brains work, then allow machines to be fully autonomous and enable them to follow their programming; machines built from these methods are already capable of learning. Why would we want a stupid human factored in as a pilot? An electrical circuit is thousands of times faster than protein synapses; why have a worm when you can have an eagle? Of course we are capable of ‘tele-war’ by connecting nervous systems to devices; tele-tanks are also a possibility. In the next decade, the program's timetable is to have functioning real-life terminators.

Here are other citations you may find interesting:

  • Functional Magnetic Resonance Imaging (fMRI) and computational models to decode and reconstruct people's dynamic visual experiences
  • Data mining opens the door to predictive neuroscience
  • New drone has no pilot anywhere, so who's accountable?
  • Artificial synapses could lead to advanced computer memory and machines that…

The problem is that all of this is old news; the military-industrial complex is 5-10 years ahead of the public. If I were to say that DARPA is currently working on smart-dust nanotechnology and is even capable of adding, editing, and deleting memory in humans, would you believe me? Doesn't matter; time will prove it.


How to Use Light to Control the Brain

Stephen Dougherty, Scientific American
Date: 01 April 2012 Time: 09:38 AM

In the film Amélie, the main character is a young eccentric woman who attempts to change the lives of those around her for the better. One day Amélie finds an old rusty tin box of childhood mementos in her apartment, hidden by a boy decades earlier. After tracking down Bretodeau, the owner, she lures him to a phone booth where he discovers the box. Upon opening the box and seeing a few marbles, a sudden flash of vivid images comes flooding into his mind. The next thing you know, Bretodeau is transported to a time when he was in the schoolyard, scrambling to stuff his pockets with hundreds of marbles while a teacher yelled at him to hurry up.

We have all experienced this: a seemingly insignificant trigger, a scent, a song, or an old photograph transports us to another time and place. Now a group of neuroscientists have investigated the fascinating question: Can a few neurons trigger a full memory?
In a new study, published in Nature, a group of researchers from MIT showed for the first time that it is possible to activate a memory on demand, by stimulating only a few neurons with light, using a technique known as optogenetics. Optogenetics is a powerful technology that enables researchers to control genetically modified neurons with a brief pulse of light.

To artificially turn on a memory, researchers first set out to identify the neurons that are activated when a mouse is making a new memory. To accomplish this, they focused on a part of the brain called the hippocampus, known for its role in learning and memory, especially for discriminating places. Then they inserted a gene that codes for a light-sensitive protein into hippocampal neurons, enabling them to use light to control the neurons.

With the light-sensitive proteins in place, the researchers gave the mouse a new memory. They put the animal in an environment where it received a mild foot shock, eliciting the normal fear behavior in mice: freezing in place. The mouse learned to associate a particular environment with the shock.

Next, the researchers attempted to answer the big question: Could they artificially activate the fear memory? They directed light on the hippocampus, activating a portion of the neurons involved in the memory, and the animals showed a clear freezing response. Stimulating the neurons appears to have triggered the entire memory.

The researchers performed several key tests to confirm that it was really the original memory recalled. They tested mice with the same light-sensitive protein but without the shock; they tested mice without the light-sensitive protein; and they tested mice in a different environment not associated with fear. None of these tests yielded the freezing response, reinforcing the conclusion that the pulse of light indeed activated the old fear memory.

In 2010, optogenetics was named the scientific Method of the Year by the journal Nature Methods. The technology was introduced in 2004 by a research group at Stanford University led by Karl Deisseroth, a collaborator on this research. The critical advantage that optogenetics provides over traditional neuroscience techniques, like electrical stimulation or chemical agents, is speed and precision. Electrical stimulation and chemicals can only be used to alter neural activity in nonspecific ways and without precise timing. Light stimulation enables control over a small subset of neurons on a millisecond time scale.

Over the last several years, optogenetics has provided powerful insights into the neural underpinnings of brain disorders like depression, Parkinson's disease, anxiety, and schizophrenia. Now, in the context of memory research, this study shows that it is possible to artificially stimulate a few neurons to activate an old memory, controlling an animal's behavior without any sensory input. This is significant because it provides a new approach to understanding how complex memories are formed in the first place.

Lest ye worry about implanted memories and mind control, this technology is still a long way from reaching any human brains. Nevertheless, the first small steps towards the clinical application of optogenetics have already begun. A group at Brown University, for example, is working on a wireless optical electrode that can deliver light to neurons in the human brain. Who knows: someday, instead of new technology enabling us to erase memories à la Eternal Sunshine of the Spotless Mind, we may actually undergo memory-enhancement therapy with a brief session under the lights.

This article was first published on Scientific American. © 2012.

New Surveillance System Identifies Your Face By Searching Through 36 Million Images Per Second


When it comes to surveillance, your face may now be your biggest liability.

Privacy advocates, brace yourselves – the search capabilities of the latest surveillance technology are nightmare fuel. Hitachi Kokusai Electric recently demonstrated a surveillance camera system capable of searching through 36 million images per second to match a person's face taken from a mobile phone or captured by surveillance. While the minimum resolution required for a match is 40 x 40 pixels, the facial recognition software allows variance in the position of the person's head: someone can be turned away from the camera horizontally or vertically by 30 degrees and it can still make a match. Furthermore, the software identifies faces in surveillance video as it is recorded, meaning that users can immediately review footage from before and after any matched timepoint.

This removes the biggest barrier in video surveillance: watching hours of footage to find what you want.

The power of the search capabilities is in the algorithms that group similar faces together. When a search is conducted, results are immediately shown as thumbnails, and selecting a thumbnail pulls up the stored footage for review. Because the search results are displayed as a grid, mistaken identifications can be ruled out quickly or verified by pulling up the entire video for more information.
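
Hitachi has not published its matching algorithm, but grouping similar faces generally means comparing numeric feature vectors extracted from each face. Below is an illustrative sketch of that idea only, with a made-up three-dimensional feature space and a hypothetical `search` helper; real systems use vectors with hundreds of dimensions and far faster index structures.

```python
import math

# Illustrative sketch only: Hitachi has not published its algorithm.
# Faces are represented as numeric feature vectors; similar faces have
# similar vectors, so a search ranks stored faces by vector similarity.

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction (very similar faces).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query, database, threshold=0.9):
    # Return (similarity, face_id) pairs above the threshold, best match first.
    scored = [(cosine_similarity(query, vec), face_id)
              for face_id, vec in database.items()]
    return sorted([s for s in scored if s[0] >= threshold], reverse=True)

# Toy database: three stored faces in a made-up 3-dimensional feature space.
db = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.1, 0.8, 0.2],
    "person_c": [0.88, 0.12, 0.31],
}
matches = search([0.9, 0.1, 0.3], db)
```

Both close matches surface together, much like the thumbnail grid described above, so a human operator can quickly verify or rule out each candidate.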

The scenarios that this system could be useful for are endless. The police, for instance, could find individuals from old surveillance video or pick them out of large crowds, whether they are suspects or people who’ve been kidnapped. Or if a retail customer is caught stealing something on camera, the system could pull up footage from each time the customer has been in the store to identify other thefts that went unnoticed.

Rapid search of the video database allows users to review video around key timepoints.

The company, which specializes in video cameras for the imaging, medical, and security markets, states that the system is ideally suited for large-scale customers, such as law enforcement agencies, transportation centers, and retail centers. The system will be released in the next fiscal year, presumably customized to specific customers’ needs. Interested parties have to contact the company directly, which is probably wise in order to control whose hands it ends up in. And this means that soon, the only ones who will remain anonymous are the agencies and organizations using the software.

While this news should make anyone concerned about privacy shudder, it really was only a matter of time before something like this was developed. Likewise, it means that competing systems will follow until systems like this are common. So it will be up to legislators to define how the technology can be used legally as with other surveillance systems, like license-plate recognition cameras.

Check out the video from the security trade show so you can see for yourself just how easy it is to be Big Brother with this system:

[Media: YouTube]

[Sources: DigInfo, Digital Trends, PhysOrg]

Grid-Based Computing to Fight Neurological Disease

ScienceDaily: Your source for the latest research news and science breakthroughs -- updated daily

ScienceDaily (Apr. 11, 2012) — Grid computing, long used by physicists and astronomers to crunch masses of data quickly and efficiently, is making the leap into the world of biomedicine. Supported by EU-funding, researchers have networked hundreds of computers to help find treatments for neurological diseases such as Alzheimer’s. They are calling their system the ‘Google for brain imaging.’

Through the Neugrid project, the pan-European grid computing infrastructure has opened up new channels of research into degenerative neurological disorders and other illnesses, while also holding the promise of quicker and more accurate clinical diagnoses of individual patients.

The infrastructure, set up with the support of EUR 2.8 million in funding from the European Commission, was developed over three years by researchers in seven countries. Their aim, primarily, was to give neuroscientists the ability to quickly and efficiently analyse ‘Magnetic resonance imaging’ (MRI) scans of the brains of patients suffering from Alzheimer’s disease. But their work has also helped open the door to the use of grid computing for research into other neurological disorders, and many other areas of medicine.

‘Neugrid was launched to address a very real need. Neurology departments in most hospitals do not have quick and easy access to sophisticated MRI analysis resources. They would have to send researchers to other labs every time they needed to process a scan. So we thought, why not bring the resources to the researchers rather than sending the researchers to the resources,’ explains Giovanni Frisoni, a neurologist and the deputy scientific director of IRCCS Fatebenefratelli, the Italian National Centre for Alzheimer’s and Mental Diseases, in Brescia.

Five years’ work in two weeks

The Neugrid team, led by David Manset from MaatG in France and Richard McClatchey from the University of the West of England in Bristol, laid the foundations for the grid infrastructure, starting with five distributed nodes of 100 cores (CPUs) each, interconnected with grid middleware and accessible via the internet through an easy-to-use web browser interface. To test the infrastructure, the team used datasets of images from the Alzheimer’s Disease Neuroimaging Initiative in the United States, the largest public database of MRI scans of patients with Alzheimer’s disease and a lesser condition termed ‘mild cognitive impairment’.

‘In Neugrid we have been able to complete the largest computational challenge ever attempted in neuroscience: we extracted 6,500 MRI scans of patients with different degrees of cognitive impairment and analysed them in two weeks,’ says Dr. Frisoni, the lead researcher on the project. ‘On an ordinary computer it would have taken five years!’
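
Taken at face value, the quoted figures imply roughly a 130-fold speedup, which is plausible for five 100-core nodes running at well under perfect parallel efficiency. A quick sanity check (illustrative arithmetic only, not project data):

```python
# Back-of-envelope check of the reported speedup (illustrative arithmetic,
# not project data): five years on one computer versus two weeks on the grid.
single_machine_weeks = 5 * 52   # 'five years' on an ordinary computer
grid_weeks = 2                  # elapsed time reported for Neugrid
speedup = single_machine_weeks / grid_weeks
cores = 5 * 100                 # five distributed nodes of 100 cores each
efficiency = speedup / cores    # fraction of perfect linear scaling
print(f"~{speedup:.0f}x speedup on {cores} cores -> ~{efficiency:.0%} parallel efficiency")
```

Overheads from data movement and scheduling make sub-linear scaling like this normal for large image-processing pipelines.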

Though Alzheimer’s disease affects about half of all people aged 85 and older, its causes and progression remain poorly understood. Worldwide more than 35 million people suffer from Alzheimer’s, a figure that is projected to rise to over 115 million by 2050 as the world’s population ages.

Patients with early symptoms have difficulty recalling the names of people and places, remembering recent events and solving simple maths problems. As the brain degenerates, patients in advanced stages of the disease lose mental and physical functions and require round-the-clock care.

The analysis of MRI scans conducted as part of the Neugrid project should help researchers gain important insights into some of the big questions surrounding the disease such as which areas of the brain deteriorate first, what changes occur in the brain that can be identified as biomarkers for the disease and what sort of drugs might work to slow or prevent progression.

Neugrid built on research conducted by two prior EU-funded projects: Mammogrid, which set up a grid infrastructure to analyse mammography data, and AddNeuroMed, which sought biomarkers for Alzheimer’s. The team are now continuing their work in a series of follow-up projects.

An expanded grid and a new paradigm

Neugrid for You (N4U), a direct continuation of Neugrid, will build upon the grid infrastructure, integrating it with ‘High performance computing’ (HPC) and cloud computing resources. Using EUR 3.5 million in European Commission funding, it will also expand the user services, algorithm pipelines and datasets to establish a virtual laboratory for neuroscientists.

‘In Neugrid we built the grid infrastructure, addressing technical challenges such as the interoperability of core computing resources and ensuring the scalability of the architecture. In N4U we will focus on the user-facing side of the infrastructure, particularly the services and tools available to researchers,’ Dr. Frisoni says. ‘We want to try to make using the infrastructure for research as simple and easy as possible,’ he continues, ‘the learning curve should not be much more difficult than learning to use an iPhone!’

N4U will also expand the grid infrastructure from the initial five computing clusters through connections with CPU nodes at new sites, including 2,500 CPUs recently added in Paris in collaboration with the French Alternative Energies and Atomic Energy Commission (CEA), and in partnership with ‘Enabling grids for e-science Biomed VO’, a biomedical virtual organisation.

Another follow-up initiative, outGRID, will federate the Neugrid infrastructure, linking it with similar grid computing resources set up in the United States by the Laboratory of Neuro Imaging at the University of California, Los Angeles, and the CBRAIN brain imaging research platform developed by McGill University in Montreal, Canada. A workshop was recently held at the International Telecommunication Union, an agency of the United Nations, to foster this effort.

Dr. Frisoni is also the scientific coordinator of the DECIDE project, which will work on developing clinical diagnostic tools for doctors built upon the Neugrid grid infrastructure. ‘There are a couple of important differences between using brain imaging datasets for research and for diagnosis,’ he explains. ‘Researchers compare many images to many others, whereas doctors are interested in comparing images from a single patient against a wider set of data to help diagnose a disease. On top of that, datasets used by researchers are anonymous, whereas images from a single patient are not and protecting patient data becomes an issue.’

The DECIDE project will address these questions in order to use the grid infrastructure to help doctors treat patients. Though the main focus of all these new projects is on using grid computing for neuroscience, Dr. Frisoni emphasises that the same infrastructure, architecture and technology could be used to enable new research — and new, more efficient diagnostic tools — in other fields of medicine. ‘We are helping to lay the foundations for a new paradigm in grid-enabled medical research,’ he says.

Neugrid received research funding under the European Union’s Seventh Framework Programme (FP7).

Scientists at MIT replicate brain activity with chip

17 November 2011  at 20:42 GMT
The chip replicates how information flows around the brain

Scientists are getting closer to the dream of creating computer systems that can replicate the brain.

Researchers at the Massachusetts Institute of Technology have designed a computer chip that mimics how the brain’s neurons adapt in response to new information.

Such chips could eventually enable communication between artificially created body parts and the brain.

It could also pave the way for artificial intelligence devices.

There are about 100 billion neurons in the brain, each of which forms synapses – the connections between neurons that allow information to flow – with many other neurons.

These connections constantly strengthen or weaken in response to new information. This process is known as plasticity and is believed to underpin many brain functions, such as learning and memory.

Neural functions

The MIT team, led by research scientist Chi-Sang Poon, has been able to design a computer chip that can simulate the activity of a single brain synapse.

Activity in the synapses relies on so-called ion channels which control the flow of charged atoms such as sodium, potassium and calcium.

The ‘brain chip’ has about 400 transistors and is wired up to replicate the circuitry of the brain.

Current flows through the transistors in the same way as ions flow through ion channels in a brain cell.

“We can tweak the parameters of the circuit to match specific ion channels… We now have a way to capture each and every ionic process that’s going on in a neuron,” said Mr Poon.
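
The article does not describe the chip’s circuit, but the ion-channel behaviour it emulates is classically modelled with voltage-dependent gating variables in the Hodgkin-Huxley style. A minimal, illustrative sketch of that kind of kinetics (the parameter values are invented, not the MIT design):

```python
import math

# Illustrative sketch of voltage-gated ion-channel kinetics of the kind the
# chip emulates in analog. Parameter values are invented for illustration;
# this is not the MIT circuit design.

def m_inf(v, v_half=-40.0, k=7.0):
    # Steady-state open probability of the channel gate at membrane voltage v (mV).
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def simulate(v, steps=1000, dt=0.01, tau=1.0):
    # Relax the gating variable m toward its steady state (simple Euler steps).
    m = 0.0
    for _ in range(steps):
        m += dt * (m_inf(v) - m) / tau
    return m

g_max, e_rev = 120.0, 50.0          # peak conductance and reversal potential (mV)
v = 0.0                             # clamped membrane voltage (mV)
m = simulate(v)                     # gate opens as the voltage is held high
current = g_max * m * (v - e_rev)   # ionic current; on the chip, transistor
                                    # current plays this role in analog
```

Where a digital simulator steps through equations like these, the chip’s 400 transistors let the equivalent currents flow continuously, which is why it can run faster than the biology it mimics.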

Neurobiologists seem to be impressed.

It represents “a significant advance in the efforts to incorporate what we know about the biology of neurons and synaptic plasticity onto …chips,” said Dean Buonomano, a professor of neurobiology at the University of California.

“The level of biological realism is impressive,” he added.

The team plans to use their chip to build systems to model specific neural functions, such as visual processing.

Such systems could be much faster than conventional computers, which take hours or even days to simulate a brain circuit. The chip could ultimately prove to be even faster than the biological process.

Developing a human brain in brain chip for a hybrid brain

BBC News

 Tuesday, 11 March 2008, 10:32 GMT 

Chemical brain controls nanobots
By Jonathan Fildes
Science and technology reporter, BBC News

The researchers have already built larger ‘brains’

A tiny chemical “brain” which could one day act as a remote control for swarms of nano-machines has been invented.

The molecular device – just two billionths of a metre across – was able to control eight of the microscopic machines simultaneously in a test.

Writing in Proceedings of the National Academy of Sciences, scientists say it could also be used to boost the processing power of future computers.

Many experts have high hopes for nano-machines in treating disease.

“If [in the future] you want to remotely operate on a tumour you might want to send some molecular machines there,” explained Dr Anirban Bandyopadhyay of the International Center for Young Scientists, Tsukuba, Japan.

“But you cannot just put them into the blood and [expect them] to go to the right place.”

Dr Bandyopadhyay believes his device may offer a solution. One day they may be able to guide the nanobots through the body and control their functions, he said.

“That kind of device simply did not exist; this is the first time we have created a nano-brain,” he told BBC News.

Computer brain

The machine is made from 17 molecules of the chemical duroquinone. Each one is known as a “logic device”.

They each resemble a ring with four protruding spokes that can be independently rotated to represent four different states.

One duroquinone molecule sits at the centre of a ring formed by the remaining 16. All are connected by chemical bonds, known as hydrogen bonds.

The state of the control molecule at the centre is switched by a scanning tunnelling microscope (STM).

These large machines are a standard part of the nanotechnologist’s tool kit, and allow the viewing and manipulation of atomic surfaces.

Using the STM, the researchers showed they could change the central molecule’s state and simultaneously switch the states of the surrounding 16.

“We instruct only one molecule and it simultaneously and logically instructs 16 others at a time,” said Dr Bandyopadhyay.

The configuration allows four billion different possible combinations of outcome.
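
The “four billion” figure follows directly from the counting: each of the 16 controlled molecules can settle into one of four rotational states, and the joint outcomes multiply.

```python
# Each of the 16 molecules controlled by the central duroquinone can sit in
# one of four rotational states, so the joint outcomes multiply.
states_per_molecule = 4
controlled_molecules = 16
combinations = states_per_molecule ** controlled_molecules
print(combinations)  # 4294967296, about four billion, as reported
```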

The two nanometre diameter structure was inspired by the parallel communication of glial cells inside a human brain, according to the team.

Robot control

To test the control unit, the researchers simulated docking eight existing nano-machines to the structure, creating a “nano-factory” or a kind of “chemical swiss army knife”.

Scientists believe nano-machines could have medical applications

The attached devices, created by other research groups, included the “world’s tiniest elevator”, a molecular platform that can be raised or lowered on command.

The device is about two and a half nanometres (billionths of a metre) high, and the lift moves less than one nanometre up and down.

All eight machines simultaneously responded to a single instruction in the simulation.

“We have clear cut evidence that we can control those machines,” said Dr Bandyopadhyay.

This “one-to-many” communication and the device’s ability to act as a central control unit also raises the possibility of using the device in future computers, he said.

Machines built using devices such as this would be able to process 16 bits of information simultaneously.

Current silicon Central Processing Units (CPUs) can only carry out one instruction at a time, albeit millions of times per second.

The researchers say they have already built faster machines, capable of 256 simultaneous operations, and have designed one capable of 1024.

However, according to Professor Andrew Adamatzky of the University of the West of England (UWE), making a workable computer would be very difficult at the moment.

“As with other implementations of unconventional computers the application is very limited, because they operate [it] using scanning tunnel microscopy,” he said.

But, he said, the work is promising.

“I am sure with time such molecular CPUs can be integrated in molecular robots, so they will simply interact with other molecular parts autonomously.”

Revolution in Artificial Intelligence


Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

ScienceDaily (Apr. 2, 2012) — As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who set out the basis for digital computing in the 1930s, anticipating the electronic age, they still quest after a machine as adaptable and intelligent as the human brain.

Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing’s work to its next logical step. She is translating her 1993 discovery of what she has dubbed “Super-Turing” computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in the current issue of Neural Computation.

“This model is inspired by the brain,” she says. “It is a mathematical formulation of the brain’s neural networks with their adaptive abilities.” The authors show that when the model is installed in an environment offering constant sensory stimuli like the real world, and when all stimulus-response pairs are considered over the machine’s lifetime, the Super-Turing model yields an exponentially greater repertoire of behaviors than the classical computer or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

“Each time a Super-Turing machine gets input it literally becomes a different machine,” Siegelmann says. “You don’t want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you’d like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you’d like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That’s what this model can offer.”

Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they’ve been told what to expect and how to respond, Siegelmann says. But they can’t take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks.

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called “adaptive inference.” In 1993, Siegelmann, then at Rutgers, showed independently in her doctoral thesis that a very different kind of computation, vastly different from the “calculating computer” model and more like Turing’s prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly after.

“I was young enough to be curious, wanting to understand why the Turing model looked really strong,” she recalls. “I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations.”

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is represented by the notation aleph-zero (ℵ₀), which also represents the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that Super-Turing computation has 2^ℵ₀ possible behaviors. “If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe,” she explains.
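
The scale of that comparison is easy to verify, assuming the commonly cited estimate of roughly 10^80 atoms in the observable universe:

```python
# Scale check for the quoted comparison, assuming the commonly cited
# estimate of about 10**80 atoms in the observable universe.
behaviors = 2 ** 300
atoms_estimate = 10 ** 80
print(behaviors > atoms_estimate)  # True
```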

The new Super-Turing machine will not only be flexible and adaptable but economical. This means that when presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence, Siegelmann says.

“If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain,” she adds.

Siegelmann and two colleagues recently were notified that they will receive a grant to make the first ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.