When brain implants arrive, will we still be “us”?

By | November 19, 2012, 11:38 AM PST

 Brain Storm

What happens when non-biological implants in our bodies — along the lines of cochlear implants that restore hearing to the deaf — come to include brain-related devices that enhance our memories? Will we still be “us”? Will we be any more of a cyborg than if we had, say, another type of implant? And for those who believe we would not be, at what point do we lose our selves to a more machine-like incarnation? When do we stop being human?

This is all pretty heavy, mind-bending stuff. While such thoughts might seem like the domain of science fiction, the trends toward smaller, more powerful computer chips and wearable computing suggest these theoretical musings may soon be relevant to innovation theory. Ray Kurzweil, the often controversial author and futurist, offers some opinions on this scenario in his new book How to Create a Mind: The Secret of Human Thought Revealed. (Andrew Nusca, SmartPlanet’s editor, recently reported on his talk at the Techonomy conference.) Kurzweil argues that even without any non-biological implants, our physical selves are always changing.

In a very short excerpt on Slate, he draws a simple and compelling (if not exactly parallel) comparison:

We naturally undergo a gradual replacement process. Most of our cells in our body are continually being replaced. (You just replaced 100 million of them in the course of reading the last sentence.) Cells in the inner lining of the small intestine turn over in about a week. The life span of white blood cells range from a few days to a few months, depending on the type. Neurons persist, but their organelles and their constituent molecules turn over within a month.  So you are completely replaced in a matter of months. Are you the same person you were a few months ago?
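As a rough sanity check, the quoted figures do hold up at the order-of-magnitude level. The sketch below uses round numbers of my own choosing, not Kurzweil’s:

```python
# Back-of-the-envelope check of the quoted turnover figures. The numbers
# below are round assumptions chosen for illustration, not Kurzweil's own.

total_cells = 3.7e13        # rough estimate of the number of cells in a body
turnover_days = 90          # "a matter of months" taken as about three months
reading_seconds = 15        # time to read a long sentence

cells_per_second = total_cells / (turnover_days * 24 * 3600)
replaced_while_reading = cells_per_second * reading_seconds

print(f"~{cells_per_second:,.0f} cells replaced per second")
print(f"~{replaced_while_reading:,.0f} cells replaced while reading one sentence")
# With these assumptions the result is a few tens of millions of cells,
# the same order of magnitude as the 100 million in the excerpt.
```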

He argues that as our gadgets become smaller, they could eventually become part of our physical selves, just as widely accepted medical devices surgically inserted into our bodies are today. Plus, he adds, we are increasingly “outsourcing” more of our information and even our memories — our precious photos, videos, recordings, and even our thoughts, in the form of our writings and other materials — to the cloud rather than storing them in our brains.

It’s possible to imagine such disparate trends emerging and converging in some way. But critics suggest that businesses would be wise to also consider the possible pitfalls of future internal, brain-enhancing machinery as they research and develop it. As Publishers Weekly wrote, in How to Create a Mind Kurzweil can be “uncritically optimistic about the possibilities of our technologies.” Yet perhaps that’s the strength of his ideas: they can be read as scene-building narratives that focus on a positive prediction of tomorrow. As Kirkus Reviews pointed out, Kurzweil’s new book can be understood as (italics mine) “a fascinating exercise in futurology.” And, it seems clear, a conversation starter.

Original:  http://www.smartplanet.com/blog/bulletin/when-brain-implants-arrive-will-we-still-be-8220us-8221/6102

What kind of privacy and security measures are needed when a machine can read your mind?

In recent decades, the convergence of information technology, biotechnology, and neuroscience has produced entirely new fields of research, which in turn are developing new, previously unknown products and services.

Out of nanotechnology’s opportunities for computer-brain integration has even emerged an entirely new field of civil-military research, aimed at developing communication between computers and human minds and thoughts, called synthetic or artificial telepathy.

Understanding how the human brain works is not only leading to innovations in medicine, but also providing new models for energy-efficient, fault tolerant and adaptive computing technologies.

Research on artificial neural networks (signal-processing systems) and on evolutionary and genetic algorithms means that it is now possible to construct self-learning computers that program themselves, among other things to read the human brain’s memories, feelings, and knowledge.

Bioelectronics and miniaturized signal-processing systems in the brain can record the brain’s functional architecture and, through spoken language, work out what the signals mean.

It is about creating a computer model of the brain that, among other things, should provide answers to what a person is, what a conscience is, what responsibility is, and where norms and values arise. None of these questions can be answered without copying the brain’s functional architecture.

In 2004, the Research Council’s Ethics Committee wrote the following on the medical ethics of nanotechnology, weighing its pluses and minuses:

+ It is good to be able to deliver medicine into the brain across the blood-brain barrier.

+ It is good to be able to insert electrodes into the brain to give sight to the blind or to control a prosthetic hand.

+ It is good to use nanotechnology to stem terrorism against innocent people.

+ It is good for those who can afford it to exploit nanotechnology for their own health and prosperity.

– It is not good when particles enter the body through the lungs and stress the heart and other organs.

– It is not good if the technology is used to read or to influence other people’s thoughts, feelings, and intentions.

– It is not good if the same technology is used to control and manage innocent people.

– It’s not good for the poor, who do not have access to the advanced technology.

 

Is it ethical for researchers to retain parts of uploaded minds (copies of a biological consciousness) after the person who was copied is deceased?

Cognitive psychology is the scientific approach that studies the mechanisms underlying human thought processes. Its main areas of work include memory, perception, knowledge representation, language, problem solving, decision making, awareness, and intelligence.

In his time, Charles Darwin collected a wide variety of material to describe the diversity of species, and in 1859 he announced his great work, On the Origin of Species (the theory of evolution).

Just as Charles Darwin amassed his material, human neurons and nervous systems are now being recorded bit by bit in order to simulate the human brain and nervous system in computer models. Once computers have developed enough power, researchers will be able to simulate a human brain in real time.

Injectable bioelectronics and multimedia technology already exist that can “hang out” with people for years in order to clone their feelings, memories, and knowledge. According to Swedish and European professors, the protection against illegal recording and exploitation of people is not sufficient.

The ethical aspects of so-called ICT (Information and Communication Technologies) implants in the human body have been discussed for several years at the European level by The European Group on Ethics in Science and New Technologies, under the guidance of, among others, Professor Goran Hermerén. One of its recommendations is that the dangers of ICT implants be discussed in the EU member states. But this has, in any event, not occurred in Sweden.

By using the new technology to read and copy human neurons and nervous systems, computers can learn ontologies and, later, “artificial intelligence”: an intelligence that has no ethical foundations or values.

“Artificial intelligence” is a research area that aims to develop computer-based applications that behave and act in a manner that is indistinguishable from human behavior.

The next step in computer development is computers and software that imitate humans. With their artificial intelligence, these computers will be able to threaten man’s integrity, identity, autonomy, and spirituality.

Years of recordings of people, using carbon nanotubes as electrical interfaces with neurons in the cortex and cognitive radio technology, visualize man’s own self piece by piece, and this is copied to the new, more powerful computers.

Some of the research with brain implants (ICT) to clone the human brain is, according to many sources, conducted without informed consent. This is probably because an ethical application could never be approved for a lifelong computerized study of brain implants (carbon nanotubes as electrical interfaces with neurons in the cortex) in which the consequences for the individual are more destructive than the research is beneficial.

Illegal computer cloning could lead to unprecedented physical, psychological, and legal consequences for people and society. Illegal data cloning (copying) also means that the researchers involved do everything in their power to keep the technology behind ICT implants that read and copy people’s thoughts from being disclosed.

Nanoscience and biological implants can lead to serious problems if the technology is used in ways that violate people’s privacy. It is almost impossible to detect electronic components once they are incorporated into nanoscale particles. Businesses and governments will be able to use this new technology to find out things about people in entirely new ways. Nanotechnology will therefore also require new laws and regulations, just as the development of computers contributed to the enactment of, for example, the Personal Data Act.

Swedish professors also ask: how can the unauthorized use of nanotechnology be prevented and controlled, even where legislation exists? Traceability, or rather the scarcity of it, is a perennial topic in debates on ethics, risk, and safety. Another recurring theme is surveillance: how nanotechnology can be used for monitoring purposes where the individual or group is unaware of the surveillance and unable to find out whether they are being monitored or not.

According to the EU, the government and its ethics advisers have a responsibility to inform and educate the public about this new area of research. This has not been done, even though the government was aware of the technologies as early as 2003.

Some of today’s important scientific breakthroughs in nanotechnology and bioelectronics are not published, because established academic, financial, and political centers of power want to preserve their interests and protect unethical research on humans; as a result, opportunities to reveal that research’s progress are missed. The research and its implications are also misleading in relation to the judiciary and traditional medical diagnostics, and it goes against all human rights conventions.

Instead of Sweden and Europe, through their political gatekeepers, favoring confidential and unethical civil-military research on the civilian population in the development of software and network technologies for medical and military surveillance, the research community could openly present its progress and the insights of the new paradigm.

In this way, Sweden could use that progress to solve many of its current political problems and carry out pioneering international work for the benefit of all mankind.

We want this website to create awareness of the fact that many of the new technologies described here have been developed on the civilian population of the world, without their consent and/or knowledge, for many years.

Mindtech cooperates with the media and the Church to try to advance the ethical debate that the EU research council and Professor Goran Hermerén initiated on this topic back in 2004, an ethical debate that has since been blacked out by the research community and its representatives.

Do you know someone who is “multimedia online” but does not dare talk about it?

It is easy for a person who alleges that a paradigm shift in computer-brain integration and multimedia technology is already here not to be believed.

We are aware that portions of the information here may sound like pure science fiction, but it is already reality.

By: Magnus Olsson

 

CNN: Your Behavior Will Be Controlled by a Brain Chip!

“Smart phone will be implanted”

Paul Joseph Watson
Infowars.com
October 9, 2012

A new CNN article predicts that within 25 years people will have embedded microchips within their brain that will allow their behavior to be controlled by a third party.

The story, entitled Smartphone of the future will be in your brain, offers a semi-satirical look at transhumanism and the idea of humans becoming part cyborg by having communications devices implanted in their body.

Predicting first the widespread popularity of wearable smartphones, already in production by Google, the article goes on to forecast how humans will communicate by the end of the century.

“Technology takes a huge leap in 25 years. Microchip can be installed directly in the user’s brain. Apple, along with a handful of companies, makes these chips. Thoughts connect instantly when people dial to “call” each other. But there’s one downside: “Advertisements” can occasionally control the user’s behavior because of an impossible-to-resolve glitch. If a user encounters this glitch — a 1 in a billion probability — every piece of data that his brain delivers is uploaded to companies’ servers so that they may “serve customers better.”

The tone of the CNN piece is somewhat sophomoric, but the notion that humans will eventually merge with machines as the realization of the technological singularity arrives is one shared by virtually all top futurists.

Indeed, people like inventor and futurist Ray Kurzweil don’t think we’ll have to wait 25 years to see smartphones implanted in the brain. He sees this coming to pass within just 20 years.

In his 1999 book The Age of Spiritual Machines, Kurzweil successfully predicted the arrival of the iPad, Kindle, iTunes, YouTube and on-demand services like Netflix.

By 2019, Kurzweil forecasts that wearable smartphones will be all the rage, and that by 2029, computers and cellphones will be implanted in people’s eyes and ears, creating a “human underclass” that is viewed as backwards and unproductive because it refuses to acquiesce to the singularity.

Although the CNN piece doesn’t even foresee implantable brain chips until the end of the century, Kurzweil’s predictions are far beyond this. According to him, by 2099, the entire planet is run by artificially intelligent computer systems which are smarter than the entire human race combined – similar to the Skynet system fictionalized in the Terminator franchise.

Humans who have resisted altering themselves by becoming part-cyborg will be ostracized from society.

“Even among those human intelligences still using carbon-based neurons, there is ubiquitous use of neural implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do,” writes Kurzweil.

Kurzweil’s forecasts are echoed by Sun Microsystems’ Bill Joy, who in a 2000 Wired Magazine article entitled Why The Future Doesn’t Need Us, predicted that technological advancements in robotics would render most humans obsolete.

As a result, the elite “may simply decide to exterminate the mass of humanity,” wrote Joy.

*********************

Paul Joseph Watson is the editor and writer for Prison Planet.com. He is the author of Order Out Of Chaos. Watson is also a regular fill-in host for The Alex Jones Show and Infowars Nightly News.

IBM gets close to mimicking a human brain!

Man vs. machine

Computer chip from IBM gets close to mimicking a human brain

By Jordan Robertson Friday, August 19, 2011



 
 

Computers, like humans, can learn. But when Google tries to fill in your search box based only on a few keystrokes, or your iPhone predicts words as you type a text message, it’s only a narrow mimicry of what the human brain is capable of.

The challenge in training a machine to behave like a human brain is technological and physiological, testing the limits of computer and neuroscience. But IBM researchers say they’ve made a key step toward combining the two worlds.

 

The company announced it has built two prototype chips that it says process data more like how humans digest information than the chips that now power PCs and supercomputers.

The chips represent a milestone in a six-year project that has involved 100 researchers and $41 million in funding from the government’s Defense Advanced Research Projects Agency, or DARPA. IBM has also committed an undisclosed amount of money.

The prototypes offer further evidence of the growing importance of “parallel processing,” or computers doing multiple tasks simultaneously. That is important for rendering graphics and crunching large amounts of data.
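As a toy illustration of what “doing multiple tasks simultaneously” buys you (the task and timings below are invented for the example, not taken from the article), eight half-second jobs finish in roughly the time of one when run in parallel:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def crunch(chunk):
    """Stand-in for heavy work on one chunk of data."""
    time.sleep(0.5)
    return sum(chunk)

chunks = [range(1_000)] * 8

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    totals = list(pool.map(crunch, chunks))

# Eight 0.5-second jobs complete in roughly half a second when run in
# parallel, versus about four seconds if processed one after another.
print(f"processed {len(totals)} chunks in {time.time() - start:.1f}s")
```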

The uses of the IBM chips so far are prosaic, such as steering a simulated car through a maze, or playing Pong. It may be a decade or longer before the chips make their way out of the lab and into actual products.

 

But what’s important is not what the chips are doing, but how they’re doing it, said Giulio Tononi, a professor of psychiatry at the University of Wisconsin at Madison who worked with IBM on the project.

The chips’ ability to adapt to types of information that they weren’t specifically programmed to expect is a key feature.

“There’s a lot of work to do still, but the most important thing is usually the first step,” Tononi said in an interview. “And this is not one step; it’s a few steps.”

Technologists have long imagined computers that learn like humans. Your iPhone or Google’s servers can be programmed to predict certain behavior based on past events. But the techniques being explored by IBM and other companies and university research labs around “cognitive computing” could lead to chips that are better able to adapt to unexpected information.
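To make that contrast concrete, the kind of “prediction based on past events” a phone does can, in its simplest form, be nothing more than a lookup table of which word most often followed which. The sketch below is a minimal illustration of that idea (the training sentence is invented for the example):

```python
from collections import Counter, defaultdict

# Minimal next-word predictor: it "learns" only by counting which word
# followed which in text it has already seen.
history = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    following[current][nxt] += 1

def predict(word):
    """Suggest the word that most often followed `word` in the past."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("the"))   # 'cat'  (seen twice after 'the')
print(predict("sat"))   # 'on'
```

Cognitive-computing chips aim to go beyond this kind of static counting by adapting when the input stops matching what they were trained on.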

IBM’s interest in the chips lies in their ability to potentially help process real-world signals, such as temperature or sound or motion, and make sense of them for computers.

 

IBM, based in Armonk, N.Y., is a leader in a movement to link physical infrastructure, such as power plants or traffic lights, and information technology, such as servers and software that help regulate their functions. Such projects can be made more efficient with tools to monitor the myriad analog signals present in those environments.

Dharmendra Modha, project leader for IBM Research, said the new chips have parts that behave like digital “neurons” and “synapses” that make them different from other chips. Each “core,” or processing engine, has computing, communication and memory functions.

“You have to throw out virtually everything we know about how these chips are designed,” he said. “The key, key, key difference really is the memory and the processor are very closely brought together. There’s a massive, massive amount of parallelism.”
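The article doesn’t describe the chip’s internals, but the point Modha makes about memory and processing sitting together, with everything updating in parallel, can be sketched in a few lines. The toy model below is purely illustrative and is not IBM’s actual design:

```python
import numpy as np

class ToyCore:
    """Toy 'neurosynaptic core': synapse weights (memory) sit alongside
    neuron state (processing), and every neuron updates in the same step."""

    def __init__(self, n_neurons=16, threshold=1.0, leak=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.uniform(0.0, 0.3, size=(n_neurons, n_neurons))  # synapses
        self.potential = np.zeros(n_neurons)                               # neuron state
        self.threshold = threshold
        self.leak = leak

    def tick(self, input_spikes):
        """One time step: leak, integrate incoming spikes, fire, reset."""
        # A single matrix product integrates input for all neurons at once,
        # i.e. the whole core works in parallel.
        self.potential = self.leak * self.potential + self.weights @ input_spikes
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0
        return fired.astype(float)

core = ToyCore()
external = np.zeros(16)
external[[1, 4, 7]] = 1.0          # the same input spikes arrive every tick
for step in range(6):
    fired = core.tick(external)
    print(f"step {step}: {int(fired.sum())} neurons fired")
```

Even in this cartoon, each neuron’s weights never have to be fetched from a separate memory, which is the contrast with conventional chip designs that Modha is drawing.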

The project is part of the same research that led to IBM’s announcement in 2009 that it had simulated a cat’s cerebral cortex, the thinking part of the brain, using a massive supercomputer. Using progressively bigger supercomputers, IBM previously had simulated 40 percent of a mouse’s brain in 2006, a rat’s full brain in 2007, and 1 percent of a human’s cerebral cortex in 2009.

A computer with the power of a human brain is not yet near. But Modha said the latest development is an important step.

“It really changes the perspective from ‘What if?’ to ‘What now?’” Modha said. “Today we proved it was possible. There have been many skeptics, and there will be more, but this completes in a certain sense our first round of innovation.”

– Associated Press

How to Use Light to Control the Brain

Stephen Dougherty, Scientific American
Date: 01 April 2012 Time: 09:38 AM
 

In the film Amélie, the main character is a young eccentric woman who attempts to change the lives of those around her for the better. One day Amélie finds an old rusty tin box of childhood mementos in her apartment, hidden by a boy decades earlier. After tracking down Bretodeau, the owner, she lures him to a phone booth where he discovers the box. Upon opening the box and seeing a few marbles, a sudden flash of vivid images comes flooding into his mind. The next thing you know, Bretodeau is transported to a time when he was in the schoolyard, scrambling to stuff his pockets with hundreds of marbles while a teacher yells at him to hurry up.

We have all experienced this: a seemingly insignificant trigger, a scent, a song, or an old photograph transports us to another time and place. Now a group of neuroscientists have investigated the fascinating question: Can a few neurons trigger a full memory?
In a new study, published in Nature, a group of researchers from MIT showed for the first time that it is possible to activate a memory on demand, by stimulating only a few neurons with light, using a technique known as optogenetics. Optogenetics is a powerful technology that enables researchers to control genetically modified neurons with a brief pulse of light.

To artificially turn on a memory, researchers first set out to identify the neurons that are activated when a mouse is making a new memory. To accomplish this, they focused on a part of the brain called the hippocampus, known for its role in learning and memory, especially for discriminating places. Then they inserted a gene that codes for a light-sensitive protein into hippocampal neurons, enabling them to use light to control the neurons.

With the light-sensitive proteins in place, the researchers gave the mouse a new memory. They put the animal in an environment where it received a mild foot shock, eliciting the normal fear behavior in mice: freezing in place. The mouse learned to associate a particular environment with the shock.

Next, the researchers attempted to answer the big question: Could they artificially activate the fear memory? They directed light on the hippocampus, activating a portion of the neurons involved in the memory, and the animals showed a clear freezing response. Stimulating the neurons appears to have triggered the entire memory.

The researchers performed several key tests to confirm that it was really the original memory recalled. They tested mice with the same light-sensitive protein but without the shock; they tested mice without the light-sensitive protein; and they tested mice in a different environment not associated with fear. None of these tests yielded the freezing response, reinforcing the conclusion that the pulse of light indeed activated the old fear memory.
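The logic of those controls can be summarized compactly. In the sketch below, the condition names and flags are mine, chosen only to mirror the description above: light delivery should reproduce freezing only when the light-sensitive protein is present in the cells tagged during the original fear learning:

```python
# Sketch of the control logic described above. The labels are illustrative
# and are not taken from the Nature paper.

def expect_freezing(has_opsin, fear_conditioned, tagged_in_fear_context):
    """Light stimulation should recall the fear memory only if all three hold."""
    return has_opsin and fear_conditioned and tagged_in_fear_context

conditions = {
    "opsin + shock, tagged in fear context": (True,  True,  True),
    "opsin, but never shocked":              (True,  False, False),
    "shocked, but no opsin":                 (False, True,  True),
    "opsin + shock, tagged elsewhere":       (True,  True,  False),
}

for name, flags in conditions.items():
    print(f"{name:40s} -> freezing expected: {expect_freezing(*flags)}")
```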

In 2010, optogenetics was named the scientific Method of the Year by the journal Nature Methods. The technology was introduced in 2004 by a research group at Stanford University led by Karl Deisseroth, a collaborator on this research. The critical advantage that optogenetics provides over traditional neuroscience techniques, like electrical stimulation or chemical agents, is speed and precision. Electrical stimulation and chemicals can only be used to alter neural activity in nonspecific ways and without precise timing. Light stimulation enables control over a small subset of neurons on a millisecond time scale.

Over the last several years, optogenetics has provided powerful insights into the neural underpinnings of brain disorders like depression, Parkinson’s disease, anxiety, and schizophrenia. Now, in the context of memory research, this study shows that it is possible to artificially stimulate a few neurons to activate an old memory, controlling an animal’s behavior without any sensory input. This is significant because it provides a new approach to understanding how complex memories are formed in the first place.

Lest ye worry about implanted memories and mind control, this technology is still a long way from reaching any human brains. Nevertheless, the first small steps toward the clinical application of optogenetics have already begun. A group at Brown University, for example, is working on a wireless optical electrode that can deliver light to neurons in the human brain. Who knows: someday, instead of new technology enabling us to erase memories à la Eternal Sunshine of the Spotless Mind, we may actually undergo memory enhancement therapy with a brief session under the lights.

This article was first published on Scientific American. © 2012 ScientificAmerican.com. Follow Scientific American on Twitter @SciAm and @SciamBlogs. Visit ScientificAmerican.com for the latest in science, health and technology news.