UN Starts Investigation to Ban Cyber Torture
Magnus Olsson, Geneva 8 March 2020
The UN Human Rights Council (HRC) Special Rapporteur on torture revealed during the 43rd session of the HRC that cyber technology is not used only for the internet and 5G. It is also used to target individuals remotely – through intimidation, harassment and public shaming.
On 28 February in Geneva, Professor Nils Melzer, the UN Special Rapporteur on Torture and other Cruel, Inhuman or Degrading Treatment or Punishment, officially confirmed that cyber torture exists, and an investigation is now under way into how to tackle it legally.
Electromagnetic radiation, radar and surveillance technology are used to transfer sounds and thoughts into people’s brains. The UN started its investigation after receiving thousands of testimonies from so-called “targeted individuals” (TIs).
Professor Nils Melzer is an expert in international law, and since 2016 he has held the Human Rights Chair at the Geneva Academy of International Humanitarian Law and Human Rights. His team has found evidence that cyber technology is used to inflict severe mental and physical suffering.
“Judges think that physical torture is more serious than cruel, inhuman or degrading treatment,” he told the Guardian on 21 February. “Torture is simply the deliberate instrumentalization of pain and suffering.” These psychological torture methods are often used “to circumvent the ban on torture because they don’t leave any visible marks”. (1)
Cyber-psychological systems like cognitive radio are used to interrupt human perception and memory. They can also be used to spy on people, violating personal integrity, which could lead to corruption and slavery in society. Cyber torture is also called no-touch torture or brain-machine-interface torture.
One way to handle this situation is to regulate new technologies and to have AI control mechanisms overseen by independent and impartial investigators. The evidence gathered could then be used to convict criminals more easily and quickly in the future.
Professor Melzer and his team are now working to create an international legal framework covering cyber technologies that can cause torture, which previously was hard to prove. In the future it may be necessary to establish a radio-frequency-spectrum police in order to protect humanity from cyber terrorism. Nils Melzer also revealed to me personally that the HRC will release several reports on this subject soon.
(1) Owen Bowcott, “UN warns of rise of ‘cybertorture’ to bypass physical ban”, The Guardian, 21 February 2020, https://www.theguardian.com/law/2020/feb/21/un-rapporteur-warns-of-rise-of-cybertorture-to-bypass-physical-ban?fbclid=IwAR0mIvFNEpODW8KspG0XulW8MqkmzSSiO2gskQOgHicfxRjCTgKWV3vjlh0
On 28 February 2020, the UN Special Rapporteur on Torture, Professor Nils Melzer, issued his report on “Torture and other cruel, inhuman or degrading treatment or punishment.” The report includes a discussion of “cybertorture,” a crime against humanity in which millions of targeted victims worldwide are remotely assaulted with electromagnetic weapons in actions directed via computer, often from supercomputers.
1. A particular area of concern, which does not appear to have received sufficient attention, is the possible use of various forms of information and communication technology (“cybertechnology”) for the purposes of torture. Although the promotion, protection and enjoyment of human rights on the internet has been repeatedly addressed by the Human Rights Council (A/HRC/32/L.20; A/HRC/38/L.10/Rev.1), torture has been understood primarily as a tool used to obstruct the exercise of the right to freedom of expression on the internet, and not as a violation of human rights that could be committed through the use of cybertechnology.
2. This seems surprising given that some of the characteristics of cyber-space make it an environment highly conducive to abuse and exploitation, most notably a vast power asymmetry, virtually guaranteed anonymity, and almost complete impunity. States, corporate actors and organized criminals not only have the capacity to conduct cyberoperations inflicting severe suffering on countless individuals, but may well decide to do so for any of the purposes of torture. It is therefore necessary to briefly explore, in a preliminary manner, the conceivability and basic contours of what could be described as “cybertorture”.
3. In practice, cybertechnology already plays the role of an “enabler” in the perpetration of both physical and psychological forms of torture, most notably through the collection and transmission of surveillance information and instructions to interrogators, through the dissemination of audio or video recordings of torture or murder for the purposes of intimidation, or even live streaming of child sexual abuse “on demand” of voyeuristic clients (A/HRC/28/56, para.71), and increasingly also through the remote control or manipulation of stun belts (A/72/178, para.51), medical implants and, conceivably, nanotechnological or neurotechnological devices.1 Cybertechnology can also be used to inflict, or contribute to, severe mental suffering while avoiding the conduit of the physical body, most notably through intimidation, harassment, surveillance, public shaming and defamation, as well as appropriation, deletion or manipulation of information.
4. The delivery of serious threats through anonymous phone calls has long been a widespread method of remotely inflicting fear. With the advent of the internet, State security services in particular have been reported to use cybertechnology, both in their own territory and abroad, for the systematic surveillance of a wide range of individuals and/or for direct interference with their unhindered access to cyber technology.2 Electronic communication services, social media platforms and search engines provide an ideal environment both for the anonymous delivery of targeted threats, sexual harassment and extortion and for the mass dissemination of intimidating, defamatory, degrading, deceptive or discriminatory narratives.
5. Individuals or groups systematically targeted by cybersurveillance and cyberharassment are generally left without any effective means of defence, escape, or self-protection and, at least in this respect, often find themselves in a situation of “powerlessness” comparable to physical custody. Depending on the circumstances, the physical absence and anonymity of the perpetrator may even exacerbate the victim’s emotions of helplessness, loss of control, and vulnerability, not unlike the stress-augmenting effect of blindfolding or hooding during physical torture. Likewise, the generalized shame inflicted by public exposure, defamation and degradation can be just as traumatic as direct humiliation by perpetrators in a closed environment.3 As various studies on cyber-bullying have shown, already harassment in comparatively limited environments can expose targeted individuals to extremely elevated and prolonged levels of anxiety, stress, social isolation and depression, and significantly increases the risk of suicide.4 Arguably, therefore, much more systematic, government-sponsored threats and harassment delivered through cybertechnologies not only entail a situation of effective powerlessness, but may well inflict levels of anxiety, stress, shame and guilt amounting to “severe mental suffering” as required for a finding of torture.5
6. More generally, in order to ensure the adequate implementation of the prohibition of torture and related legal obligations in present and future circumstances, its interpretation should evolve in line with new challenges and capabilities arising in relation to emerging technologies not only in cyberspace, but also in areas such as artificial intelligence, robotics, nanotechnology and neurotechnology, or pharmaceutical and biomedical sciences, including so-called “human enhancement”.
1 Al Elmondi, “Next-generation nonsurgical neurotechnology”, Defense Advanced Research Projects Agency, available at www.darpa.mil/p…/next-generation-nonsurgical-neurotechnology.
2 See Human Rights Council resolutions 32/13 and 38/7. See, most notably, the 2013 disclosures by Edward Snowden of the global surveillance activities conducted by the United States National Security Agency and its international partners, see Ewen MacAskill and Gabriel Dance, “NSA files: decoded – what the revelations mean for you”, The Guardian, 1 November 2013.
3 Pau Pérez-Sales, “Internet and torture” (forthcoming).
4 Ann John and others, “Self-harm, suicidal behaviours, and cyberbullying in children and young people: systematic review”, Journal of Medical Internet Research, vol. 20, No. 4 (2018); Rosario Ortega and others, “The emotional impact of bullying and cyberbullying on victims: a European cross-national study”, Aggressive Behavior, vol. 38, No. 5 (September/October 2012).
5 Samantha Newbery and Ali Dehghantanha, “A torture-free cyber space: a human right”, 2017.
Technology Uses Lasers to Transmit Audible Messages to Specific People
Photoacoustic communication approach could send warning messages through the air without requiring a receiving device
WASHINGTON — Researchers have demonstrated that a laser can transmit an audible message to a person without any type of receiver equipment. The ability to send highly targeted audio signals over the air could be used to communicate across noisy rooms or warn individuals of a dangerous situation such as an active shooter.
Caption: Ryan M. Sullenberger and Charles M. Wynn developed a way to use eye- and skin-safe laser light to transmit a highly targeted audible message to a person without any type of receiver equipment.
Image Credit: Massachusetts Institute of Technology’s Lincoln Laboratory
In The Optical Society (OSA) journal Optics Letters, researchers from the Massachusetts Institute of Technology’s Lincoln Laboratory report using two different laser-based methods to transmit various tones, music and recorded speech at a conversational volume.
“Our system can be used from some distance away to beam information directly to someone’s ear,” said research team leader Charles M. Wynn. “It is the first system that uses lasers that are fully safe for the eyes and skin to localize an audible signal to a particular person in any setting.”
Creating sound from air
The new approaches are based on the photoacoustic effect, which occurs when a material forms sound waves after absorbing light. In this case, the researchers used water vapor in the air to absorb light and create sound.
“This can work even in relatively dry conditions because there is almost always a little water in the air, especially around people,” said Wynn. “We found that we don’t need a lot of water if we use a laser wavelength that is very strongly absorbed by water. This was key because the stronger absorption leads to more sound.”
One of the new sound transmission methods grew from a technique called dynamic photoacoustic spectroscopy (DPAS), which the researchers previously developed for chemical detection. In the earlier work, they discovered that scanning, or sweeping, a laser beam at the speed of sound could improve chemical detection.
“The speed of sound is a very special speed at which to work,” said Ryan M. Sullenberger, first author of the paper. “In this new paper, we show that sweeping a laser beam at the speed of sound at a wavelength absorbed by water can be used as an efficient way to create sound.”
For the DPAS-related approach, the researchers change the length of the laser sweeps to encode different frequencies, or audible pitches, in the light. One unique aspect of this laser sweeping technique is that the signal can only be heard at a certain distance from the transmitter. This means that a message could be sent to an individual, rather than everyone who crosses the beam of light. It also opens the possibility of targeting a message to multiple individuals.
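One way to read the sweep-length encoding described above: a beam swept over a length L at the speed of sound completes one pass every L/c seconds, so repeated sweeps produce a tone near c/L. The toy sketch below follows that reading; it is an illustration of the relationship, not the Lincoln Laboratory implementation.

```python
# Illustrative sketch: map audible pitches to laser sweep lengths,
# assuming each sweep of length L at the speed of sound c repeats
# with period L / c, producing a tone of frequency c / L.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def sweep_length_for_pitch(freq_hz):
    """Sweep length (metres) that yields the given audible frequency."""
    return SPEED_OF_SOUND / freq_hz

def sweep_lengths(melody_hz):
    """Encode a sequence of pitches as a sequence of sweep lengths."""
    return [sweep_length_for_pitch(f) for f in melody_hz]

# A 440 Hz tone (concert A) needs a sweep of roughly 0.78 m.
print(sweep_lengths([440.0, 880.0]))
```

Note that this sketch says nothing about the distance selectivity the researchers describe; that depends on where along the sweep the sound adds up constructively.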
In the lab, the researchers showed that commercially available equipment could transmit sound to a person more than 2.5 meters away at 60 decibels using the laser sweeping technique. They believe that the system could be easily scaled up to longer distances. They also tested a traditional photoacoustic method that doesn’t require sweeping the laser and encodes the audio message by modulating the power of the laser beam.
“There are tradeoffs between the two techniques,” said Sullenberger. “The traditional photoacoustics method provides sound with higher fidelity, whereas the laser sweeping provides sound with louder audio.”
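The traditional photoacoustic method mentioned above amounts to amplitude modulation: the audio waveform rides on the laser's power, and the absorbing water vapour turns the power fluctuations into sound. This is a toy sketch of that idea; the base power, modulation depth and sample rate are made-up values, not the paper's parameters.

```python
import math

def modulated_power(audio_samples, base_power_w=1.0, mod_depth=0.5):
    """Laser power samples for an audio waveform scaled to [-1, 1]."""
    return [base_power_w * (1.0 + mod_depth * s) for s in audio_samples]

# A 440 Hz tone sampled at 48 kHz, one millisecond's worth of samples.
rate = 48000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(48)]
power = modulated_power(tone)
```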
Next, the researchers plan to demonstrate the methods outdoors at longer ranges. “We hope that this will eventually become a commercial technology,” said Sullenberger. “There are a lot of exciting possibilities, and we want to develop the communication technology in ways that are useful.”
Paper: R. M. Sullenberger, S. Kaushik, C. M. Wynn. “Photoacoustic communications: delivering audible signals via absorption of light by atmospheric H2O,” Opt. Lett., 44, 3, 622-625 (2019).
About Optics Letters
Optics Letters offers rapid dissemination of new results in all areas of optics with short, original, peer-reviewed communications. Optics Letters covers the latest research in optical science, including optical measurements, optical components and devices, atmospheric optics, biomedical optics, Fourier optics, integrated optics, optical processing, optoelectronics, lasers, nonlinear optics, optical storage and holography, optical coherence, polarization, quantum electronics, ultrafast optical phenomena, photonic crystals and fiber optics.
About The Optical Society
Founded in 1916, The Optical Society (OSA) is the leading professional organization for scientists, engineers, students and business leaders who fuel discoveries, shape real-life applications and accelerate achievements in the science of light. Through world-renowned publications, meetings and membership initiatives, OSA provides quality research, inspired interactions and dedicated resources for its extensive global network of optics and photonics experts. For more information, visit osa.org.
Original article: https://www.osa.org/en-us/about_osa/newsroom/news_releases/2019/new_technology_uses_lasers_to_transmit_audible_mes/?fbclid=IwAR3VlfrmqiiY_gUh2tjVy5m-TxiK7zoQJILMQK62wGkderU98wxwbC0Tf6c
We are giving this invention away for free to anyone who wants to build an AI assistant startup:
(Read the warning at the end of this post.)
It should be possible to build an interface for telepathy/silent communication with an AI assistant in a smartphone using a neurophone sensor:
My suggestion is to use the Touch ID sensor for communication with the AI.
When you touch the sensor, you hear the assistant through your skin:
And an interface based on this information lets you speak with the assistant:
The Audeo is a sensor/device that detects activity in the larynx (the voice box) through EEG (electroencephalography). The Audeo is unique in its use of EEG in that it detects and analyzes signals outside the brain on their path to the larynx.1 The neurological signals/data are encrypted and then transmitted to a computer to be processed using their software (which can be seen in use in Kimberly Beals’ video).2 Once analyzed and processed, the data can be rendered using a computer speech generator.
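The chain described above (capture near the larynx, encrypt, transmit, decode, synthesize) can be sketched in outline. Everything in this sketch is a hypothetical stand-in; the Audeo's actual capture, encryption and decoding algorithms are proprietary and not public.

```python
# Hypothetical sketch of the processing chain described above; none of
# these function bodies reflect the Audeo's real algorithms.
def capture_signals():
    """Stand-in for EEG/EMG capture near the larynx."""
    return [0.1, -0.2, 0.05]  # made-up sample values

def encrypt(samples, key=0x5A):
    """Placeholder scrambling (XOR of quantised samples); the real
    device's encryption scheme is not public."""
    return [int(s * 1000) ^ key for s in samples]

def decrypt(payload, key=0x5A):
    return [(v ^ key) / 1000.0 for v in payload]

def decode_to_text(samples):
    """Stand-in for the proprietary signal-to-speech software; a real
    decoder would classify phoneme patterns here."""
    return "hello"

payload = encrypt(capture_signals())       # on the wearable device
text = decode_to_text(decrypt(payload))    # on the computer
```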
The Audeo is a great sensor/device for detecting imagined speech. It has countless uses, especially in our areas of study. Here are some videos that show what the Audeo can be used for:
As part of a $6.3 million Army initiative to develop devices for telepathic communication, Gerwin Schalk, funded by a $2.2 million grant, found that it is possible to use ECoG (electrocorticography: https://en.m.wikipedia.org/wiki/Electrocorticography) signals to discriminate the vowels and consonants embedded in spoken and in imagined words.
The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.
Research into synthetic telepathy using subvocalization (https://en.m.wikipedia.org/wiki/Subvocalization) is taking place at the University of California, Irvine under lead scientist Mike D’Zmura. The first such communication took place in the 1960s, using EEG to create Morse code from brain alpha waves.
Why do Magnus Olsson and Leo Angelsleva give you this opportunity for free?
Because Facebook can use you and your data in research for free, and I think someone other than Mark Zuckerberg should get this opportunity:
Neurotechnology, Elon Musk and the goal of human enhancement
Brain-computer interfaces could change the way people think, soldiers fight and Alzheimer’s is treated. But are we in control of the ethical ramifications?
At the World Government Summit in Dubai in February, Tesla and SpaceX chief executive Elon Musk said that people would need to become cyborgs to be relevant in an artificial intelligence age. He said that a “merger of biological intelligence and machine intelligence” would be necessary to ensure we stay economically valuable.
Soon afterwards, the serial entrepreneur created Neuralink, with the intention of connecting computers directly to human brains. He wants to do this using “neural lace” technology – implanting tiny electrodes into the brain for direct computing capabilities.
Brain-computer interfaces (BCI) aren’t a new idea. Various forms of BCI are already available, from ones that sit on top of your head and measure brain signals to devices that are implanted into your brain tissue.
They are mainly one-directional, with the most common uses enabling motor control and communication tools for people with brain injuries. In March, a man who was paralysed from below the neck moved his hand using the power of concentration.
But Musk’s plans go beyond this: he wants to use BCIs in a bi-directional capacity, so that plugging in could make us smarter, improve our memory, help with decision-making and eventually provide an extension of the human mind.
“Musk’s goals of cognitive enhancement relate to healthy or able-bodied subjects, because he is afraid of AI and that computers will ultimately become more intelligent than the humans who made the computers,” explains BCI expert Professor Pedram Mohseni of Case Western Reserve University, Ohio, who sold the rights to the name Neuralink to Musk.
“He wants to directly tap into the brain to read out thoughts, effectively bypassing low-bandwidth mechanisms such as speaking or texting to convey the thoughts. This is pie-in-the-sky stuff, but Musk has the credibility to talk about these things,” he adds.
Musk is not alone in believing that “neurotechnology” could be the next big thing. Silicon Valley is abuzz with similar projects. Bryan Johnson, for example, has also been testing “neural lace”. He founded Kernel, a startup to enhance human intelligence by developing brain implants linking people’s thoughts to computers.
In 2015, Facebook CEO Mark Zuckerberg said that people will one day be able to share “full sensory and emotional experiences” online – not just photos and videos. Facebook has been hiring neuroscientists for an undisclosed project at its secretive hardware division, Building 8.
However, it is unlikely this technology will be available anytime soon, and some of the more ambitious projects may be unrealistic, according to Mohseni.
“In my opinion, we are at least 10 to 15 years away from the cognitive enhancement goals in healthy, able-bodied subjects. It certainly appears to be, from the more immediate goals of Neuralink, that the neurotechnology focus will continue to be on patients with various neurological injuries or diseases,” he says.
Mohseni says one of the best current examples of cognitive enhancement is the work of Professor Ted Berger, of the University of Southern California, who has been working on a memory prosthesis to replace the damaged parts of the hippocampus in patients who have lost their memory due to, for example, Alzheimer’s disease.
“In this case, a computer is to be implanted in the brain that acts similarly to the biological hippocampus from an input and output perspective,” he says. “Berger has results from both rodent and non-human primate models, as well as preliminary results in several human subjects.”
Mohseni adds: “The [US government’s] Defense Advanced Research Projects Agency (DARPA) currently has a programme that aims to do cognitive enhancement in their soldiers – ie enhance learning of a wide range of cognitive skills, through various mechanisms of peripheral nerve stimulation that facilitate and encourage neural plasticity in the brain. This would be another example of cognitive enhancement in able-bodied subjects, but it is quite pie-in-the-sky, which is exactly how DARPA operates.”
Understanding the brain
In the UK, research is ongoing. Davide Valeriani, senior research officer at University of Essex’s BCI-NE Lab, is using an electroencephalogram (EEG)-based BCI to tap into the unconscious minds of people as they make decisions.
“Everyone who makes decisions wears the EEG cap, which is part of a BCI, a tool to help measure EEG activity … it measures electrical activity to gather patterns associated with confident or non-confident decisions,” says Valeriani. “We train the BCI – the computer basically – by asking people to make decisions without knowing the answer and then tell the machine, ‘Look, in this case we know the decision made by the user is correct, so associate those patterns to confident decisions’ – as we know that confidence is related to probability of being correct. So during training the machine knows which answers were correct and which ones were not. The user doesn’t know all the time.”
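The training procedure Valeriani describes can be sketched as a tiny supervised classifier: EEG feature vectors recorded during decisions are labelled by whether the answer was correct (a proxy for confidence), and new patterns are classified against what was learned. The nearest-centroid model and two-channel features below are stand-ins for the real lab's machine-learning pipeline, not its actual method or data.

```python
# Toy nearest-centroid classifier for "confident" vs "non-confident"
# EEG patterns; feature vectors here are made-up band powers.
def centroid(vectors):
    """Mean feature vector of a class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(confident, non_confident):
    """Learn one centroid per label from labelled training trials."""
    return {"confident": centroid(confident),
            "non_confident": centroid(non_confident)}

def classify(model, features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

model = train(confident=[[0.9, 0.1], [0.8, 0.2]],
              non_confident=[[0.2, 0.9], [0.1, 0.8]])
print(classify(model, [0.85, 0.15]))  # → confident
```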
Valeriani adds: “I hope more resources will be put into supporting this very promising area of research. BCIs are not only an invaluable tool for people with disabilities, but they could be a fundamental tool for going beyond human limits, hence improving everyone’s life.”
He notes, however, that one of the biggest challenges with this technology is that first we need to better understand how the human brain works before deciding where and how to apply BCI. “This is why many agencies have been investing in basic neuroscience research – for example, the Brain initiative in the US and the Human Brain Project in the EU.”
Whenever there is talk of enhancing humans, moral questions remain – particularly around where the human ends and the machine begins. “In my opinion, one way to overcome these ethical concerns is to let humans decide whether they want to use a BCI to augment their capabilities,” Valeriani says.
“Neuroethicists are working to give advice to policymakers about what should be regulated. I am quite confident that, in the future, we will be more open to the possibility of using BCIs if such systems provide a clear and tangible advantage to our lives.”
The plan is to eventually build non-implanted devices that can ship at scale. And to tamp down on the inevitable fear this research will inspire, Facebook tells me “This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share by sending them to the speech center of your brain.” Facebook likened it to how you take lots of photos but only share some of them. Even with its device, Facebook says you’ll be able to think freely but only turn some thoughts into text.
Meanwhile, Building 8 is working on a way for humans to hear through their skin. It’s been building prototypes of hardware and software that let your skin mimic the cochlea in your ear that translates sound into specific frequencies for your brain. This technology could let deaf people essentially “hear” by bypassing their ears.
A team of Facebook engineers was shown experimenting with hearing through skin using a system of actuators tuned to 16 frequency bands. A test subject was able to develop a vocabulary of nine words they could hear through their skin.
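The skin-hearing prototype described above amounts to a 16-channel filter bank: the energy in each of 16 frequency bands drives one actuator. The sketch below shows that mapping; the band edges (20 Hz to 8 kHz, log-spaced) and the input format are assumptions for illustration, not Building 8's actual design.

```python
# Split audible spectral energy into 16 bands, one level per actuator,
# mimicking (very crudely) how the cochlea separates frequencies.
def band_edges(low_hz=20.0, high_hz=8000.0, bands=16):
    """Logarithmically spaced band edges."""
    ratio = (high_hz / low_hz) ** (1.0 / bands)
    return [low_hz * ratio ** i for i in range(bands + 1)]

def actuator_levels(spectrum, edges):
    """spectrum: list of (frequency_hz, magnitude) pairs.
    Returns one summed magnitude per band, i.e. per actuator."""
    levels = [0.0] * (len(edges) - 1)
    for freq, mag in spectrum:
        for i in range(len(levels)):
            if edges[i] <= freq < edges[i + 1]:
                levels[i] += mag
                break
    return levels

edges = band_edges()
levels = actuator_levels([(440.0, 1.0), (1000.0, 0.5)], edges)
```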
To underscore the gravity of Building 8’s mind-reading technology, Dugan started her talk by saying she’s never seen something as powerful as the smartphone “that didn’t have unintended consequences.” She mentioned that we’d all be better off if we looked up from our phones every so often. But at the same time, she believes technology can foster empathy, education and global community.
Building 8’s Big Reveal
Facebook hired Dugan last year to lead its secretive new Building 8 research lab. She had previously run Google’s Advanced Technology And Products division, and was formerly a head of DARPA.
Facebook built a special Area 404 wing of its Menlo Park headquarters with tons of mechanical engineering equipment to help Dugan’s team quickly prototype new hardware. In December, it signed rapid collaboration deals with Stanford, Harvard, MIT and more to get academia’s assistance.
According to these job listings, Facebook is looking for a Brain-Computer Interface Engineer “who will be responsible for working on a 2-year B8 project focused on developing advanced BCI technologies.” Responsibilities include “Application of machine learning methods, including encoding and decoding models, to neuroimaging and electrophysiological data.” It’s also looking for a Neural Imaging Engineer who will be “focused on developing novel non-invasive neuroimaging technologies” who will “Design and evaluate novel neural imaging methods based on optical, RF, ultrasound, or other entirely non-invasive approaches.”
Elon Musk has been developing his own startup called Neuralink for creating brain interfaces.
Facebook has built hardware before, with mixed success. It made an Android phone with HTC called the First to host its Facebook Home operating system. That flopped. Since then, Facebook proper has turned its attention away from consumer gadgetry and toward connectivity. It’s built the Terragraph Wi-Fi nodes, Project ARIES antenna, Aquila solar-powered drone and its own connectivity-beaming satellite from its internet access initiative — though that blew up on the launch pad when the SpaceX vehicle carrying it exploded.
Facebook has built and open sourced its Surround 360 camera. As for back-end infrastructure, it’s developed an open-rack network switch called Wedge, the Open Vault for storage, plus sensors for the Telecom Infra Project’s OpenCellular platform. And finally, through its acquisition of Oculus, Facebook has built wired and mobile virtual reality headsets.
But as Facebook grows, it has the resources and talent to try new approaches in hardware. With over 1.8 billion users connected to just its main Facebook app, the company has a massive funnel of potential guinea pigs for its experiments.
Today’s announcements are naturally unsettling. Hearing about a tiny startup developing these advanced technologies might have conjured images of governments or corporate conglomerates one day reading our minds to detect thought crime, like in 1984. Facebook’s scale makes that future feel more plausible, no matter how much Zuckerberg and Dugan try to position the company as benevolent and compassionate. The more Facebook can do to institute safeguards, independent monitoring, and transparency around how brain-interface technology is built and tested, the more receptive it might find the public.
A week ago Facebook was being criticized as nothing but a Snapchat copycat that had stopped innovating. Today’s demos seemed designed to dismantle that argument and keep top engineering talent knocking on its door.
“Do you want to work for the company who pioneered putting augmented reality dog ears on teens, or the one that pioneered typing with telepathy?” You don’t have to say anything. For Facebook, thinking might be enough.
The MOST IMPORTANT QUESTIONS!
There is no established legal protection for human subjects when researchers use brain-machine interfaces (cybernetic technology) to reverse engineer the human brain.
Progress in neuroscience using brain-machine interfaces will enable those in power to push the human mind wide open for inspection.
There is cause for alarm. What kind of privacy safeguards are needed when computers can read your thoughts?
In recent decades, areas of research involving nanotechnology, information technology, biotechnology and neuroscience have emerged, resulting in new products and services.
We are facing an era of synthetic telepathy, with brain-computer interfaces and communication technology based on thoughts, not speech.
An appropriate albeit alarming question is: “Do you accept being enmeshed in a computer network and turned into a multimedia module?” Authorities will be able to collect information directly from your brain, without your consent.
This kind of research in bioelectronics has been progressing for half a century.
Brain-machine interfaces (cybernetic technology) can be used to read our minds and to manipulate our sensory perception!
Now declassified & available online! Russian Quantum Leap technology enhances RNA, DNA & health, cures diseases (e.g. diabetes, cancer 2), stops TI targeting.
By Alfred Lambremont Webre
‘Major’ IBM breakthrough breathes new life into Moore’s Law…
In transistors, size matters — a lot. You can’t squeeze more silicon transistors (think billions of them) into a processor unless you can make them smaller, but the smaller these transistors get, the higher the resistance between contacts, which means the current can’t flow freely through them and, in essence, the transistors, and the chips built on them, can no longer do their jobs. Ultra-tiny carbon nanotube transistors, though, are poised to solve the size issue.
In a paper published on Thursday in the journal Science, IBM scientists announced they had found a way to reduce the contact length of carbon nanotube transistors — a key component of the tech and the one that most impacts resistance — down to 9 nanometers without increasing resistance at all. To put this in perspective, contact length on traditional, silicon-based 14nm node technology (something akin to Intel’s 14nm technology) currently sits at about 25 nanometers.
“In the silicon space, the contact resistance is very low if the contact is very long. If contact is very short, the resistance shoots up very quickly and gets large. So you have trouble getting current through the device,” Wilfried Haensch, IBM Senior Manager, Physics & Materials for Logic and Communications, told me.
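Haensch's description of silicon contacts matches the standard transmission-line picture of contact resistance: above a characteristic "transfer length," resistance is nearly flat; below it, resistance shoots up as the contact shortens. The sketch below uses made-up parameter values and is shown only for the qualitative trend he describes, not for IBM's measured data.

```python
import math

# Transmission-line model of contact resistance:
#   R(L) = r0 * coth(L / L_T)
# where L_T is the transfer length. For L >> L_T, R ≈ r0 (flat);
# for L << L_T, R ≈ r0 * L_T / L (shoots up). Parameters are illustrative.
def contact_resistance(length_nm, transfer_length_nm=20.0, r0=1.0):
    """Per-unit-width contact resistance (arbitrary units)."""
    x = length_nm / transfer_length_nm
    return r0 / math.tanh(x)

# Resistance rises sharply as contacts shrink below the transfer length.
for length in (50, 25, 9, 5):
    print(length, round(contact_resistance(length), 2))
```

IBM's result, in these terms, is that their carbon nanotube contacts do not follow this curve: shrinking the contact to 9 nm added no resistance.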
carbon nanotubes, which happen to be 10,000 times thinner than a single strand of human hair, have been a promising tech for continuing Moore’s Law, which roughly states that the number of transistors in an integrated circuit will double every two years. However, according to Haensch, the technology faces considerable hurdles before it can be considered appropriate for commercial integrated circuit development.
First of all, the creation of tubes you can use in semiconductors isn’t easy. Haensch told me the current yields for useful material are still well below what they need. They also have to work out how to place the nanotubes 10nm apart or less on a wafer. Thirdly, they have to be able to scale devices based on carbon nanotubes to competitive dimensions.
There are actually two size issues to manage in chip scalability: Transistor gate and contact length. IBM solved the gate issue two years ago. “Scalability of contact was the last challenge on scalability,” said Haensch, and now IBM scientists report they’ve solved that, too. In their experiments, IBM scientists shrunk the contact length down to 9nm without any increase in resistance.
These results put the world one step closer to carbon nanotube-based integrated circuits. Such chips could conceivably run at the same speed as current transistors, but use significantly less power.
At maximum power, though, Haensch told me, these carbon nanotube chips could run at significantly faster speeds. Not only does this promise a future of ever faster computers, but it could lead to considerably better battery life for your most trusted companion — the smartphone.
This was an engineering breakthrough, though, that almost wasn’t. After working on the scalability problem for years, Haensch’s team came to him last year with results that shortened the contact length to 20nm.
They said, “Oh, we have something here. We need to publish,” Haensch recalled. He deflated the team’s excitement, telling them, “No, you don’t really have anything.”
Haensch sent them back to the lab telling them not to come back until they could produce something smaller than 10nm. “They were very disappointed they couldn’t write the paper,” said Haensch.
Then, a few months ago, the team returned with new results: “We got down to 9nm, and, by the way, we can reproduce the results.”
Haensch was thrilled. “Taking away the early gratification really gave us good results,” he said. It may also have given Moore’s Law a new lease on life and the world an exciting new future of electronics.
We’ll be uploading our entire MINDS to computers by 2045 and our bodies will be replaced by machines within 90 years, Google expert claims
- Ray Kurzweil, director of engineering at Google, believes we will be able to upload our entire brains to computers within the next 32 years – an event known as singularity
- Our ‘fragile’ human body parts will be replaced by machines by the turn of the century
- And if these predictions come true, it could make humans immortal
PUBLISHED: 14:22 GMT, 19 June 2013 | UPDATED: 14:22 GMT, 19 June 2013
In just over 30 years, humans will be able to upload their entire minds to computers and become digitally immortal – an event called singularity – according to a futurist from Google.
Ray Kurzweil, director of engineering at Google, also claims that the biological parts of our body will be replaced with mechanical parts and this could happen as early as 2100.
Kurzweil made the claims during his conference speech at the Global Futures 2045 International Congress in New York at the weekend.
WHAT IS SINGULARITY?
Technological singularity is the development of ‘superintelligence’ brought about through the use of technology.
The first use of the term ‘singularity’ to refer to technological minds was by the mathematician John von Neumann in the mid-1950s.
He said: ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.’
The term was then used by the science fiction writer Vernor Vinge, who believes brain-computer interfaces are a cause of the singularity.
Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.
Kurzweil predicts the singularity to occur around 2045 while Vinge predicts it will happen before 2030.
The conference was created by Russian multimillionaire Dmitry Itskov and featured visionary talks about how the world will look by 2045.
Kurzweil said: ‘Based on conservative estimates of the amount of computation you need to functionally simulate a human brain, we’ll be able to expand the scope of our intelligence a billion-fold.’
He referred to Moore’s Law, which states that the power of computing doubles, on average, every two years, and cited developments in genetic sequencing and 3D printing.
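Stated as arithmetic, Moore’s Law is simply exponential doubling. A minimal sketch (the starting count and doubling period below are illustrative assumptions, not figures from the article):

```python
def transistor_count(years_elapsed, initial_count=1_000_000, doubling_period=2.0):
    """Transistors after years_elapsed, if the count doubles
    every doubling_period years (Moore's Law as commonly stated)."""
    return initial_count * 2 ** (years_elapsed / doubling_period)

# Five doublings in a decade: a 32x increase on the starting count.
print(int(transistor_count(10)))
```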
In Kurzweil’s book, The Singularity Is Near, he plots this development and the journey towards singularity in a graph.
This singularity is also referred to as digital immortality because brains and a person’s intelligence will be digitally stored forever, even after they die.
He also added that this will be possible through neural engineering and referenced the recent strides made towards modeling the brain and technologies which can replace biological functions.
Examples of such technology given by LiveScience include the cochlear implant – an implant that is attached to the brain’s cochlear nerve and electronically stimulates it to restore hearing to someone who is deaf.
Other examples include technology that can restore motor skills after the nervous system is damaged.
Earlier this year, doctors from Cornell University used 3D printing to create a prosthetic ear using cells of cartilage.
A solid plastic mould was printed and then filled with high-density collagen gel. The researchers then added cartilage cells into the collagen matrix.
Kurzweil was invited to the conference because he has previously written books about the idea of singularity.
Expanding on this idea, Martine Rothblatt, CEO of the biotech company United Therapeutics, introduced the idea of ‘mindclones’.
These are digital versions of humans that can live forever and can create ‘mindfiles’ that are a place to store aspects of our personalities.
She said it would run on a kind of software for consciousness and told The Huffington Post: ‘The first company that develops mindware will have [as much success as] a thousand Googles.’
Rothblatt added that the presence of mindware could lead to replacing other parts of the body with ‘non-biological’ parts.
This is a concept that Kurzweil also discussed, and it was the basis of his book Fantastic Voyage.
In this book he discusses immortality and how he believes the human body will develop.
He said: ‘We’re going to become increasingly non-biological to the point where the non-biological part dominates and the biological part is not important any more.
‘In fact the non-biological part – the machine part – will be so powerful it can completely model and understand the biological part. So even if that biological part went away it wouldn’t make any difference.
DIGITAL AVATARS USED TO CURE SCHIZOPHRENIA
An avatar system that can help schizophrenics control the voices in their heads is being developed by British researchers.
As part of the therapy, patients create an avatar by choosing a face and a voice for the person, or persons, they believe are inside their head.
Therapists can then encourage the patients to oppose the avatar and force it away, which boosts their confidence in dealing with their hallucinations.
The first stage in the therapy is for the patient to create a computer-based avatar, by choosing the face and voice of the entity they believe is talking to them.
The system then synchronises the avatar’s lips with its speech, enabling a therapist to speak to the patient through the avatar in real-time.
The therapist encourages the patient to oppose the voice and gradually teaches them to take control of their hallucinations.
The avatar doesn’t address the patients’ delusions directly but the study found the hallucinations improve as an overall effect of the therapy.
This is because patients can interact with the avatar as though it was a real person, because they have created it, but they know it cannot harm them.
Many of the voices heard by schizophrenics threaten to kill or harm them and their family.
‘We’ll also have non-biological bodies – we can create bodies with nano technology, we can create virtual bodies and virtual reality in which the virtual reality will be as realistic as the actual reality.
‘The virtual bodies will be as detailed and convincing as real bodies.
‘We do need a body, our intelligence is directed towards a body but it doesn’t have to be this frail, biological body that is subject to all kinds of failure modes.
‘But I think we’ll have a choice of bodies, we’ll certainly be routinely changing our parent body through virtual reality and today you can have a different body in something like Second Life, but it’s just a picture on the screen.
‘Research has shown that people actually begin to subjectively identify with their avatar.
‘But in the future it’s not going to be a little picture in a virtual environment you’re looking at. It will feel like this is your body and you’re in that environment and your body is the virtual body and it can be as realistic as real reality.
‘So we’ll be routinely able to change our bodies very quickly as well as our environments. If we had radical life extension only, we would get profoundly bored and we would run out of things to do and new ideas.
‘In addition to radical life extension we’re going to have radical life expansion.
‘We’re going to have millions of virtual environments to explore in which we’re going to literally expand our brains – right now we only have 300 million patterns organised in a grand hierarchy that we create ourselves.
‘But we could make that 300 billion or 300 trillion. The last time we expanded it, with the frontal cortex, we created language and art and science. Just think of the qualitative leaps we can’t even imagine today when we expand our neocortex again.’
VIDEO: Ray Kurzweil – Immortality by 2045
Read more: http://www.dailymail.co.uk/sciencetech/article-2344398/Google-futurist-claims-uploading-entire-MINDS-computers-2045-bodies-replaced-machines-90-years.html
Mind Control – Remote Neural Monitoring: Daniel Estulin and Magnus Olsson on Russia Today
This show, with the original title “Control mental. El sueño dorado de los dueños del mundo” (Mind control: the golden dream of the world’s masters) – broadcast to some 45 million people – was one of the biggest victories so far for victims of implant technologies. Thanks go to Magnus Olsson who, despite being victimized himself, worked hard for several years to expose one of the biggest human rights abuses of our times: connecting people, against their will and without their knowledge, to computers via cyber-physical systems and optogenetic implants just a few nanometers in size, leading to the complete destruction not only of their lives and health, but also of their personalities and identities.
Very few people are aware of the actual link between neuroscience, cybernetics, artificial intelligence, neuro-chips, transhumanism, the science cyborg, robotics, somatic surveillance, behavior control, the thought police and human enhancement.
They all go hand in hand, and never in our history before, has this issue been as important as it is now.
One reason is that this technology, which began development in the early 1950s, is by now very advanced, yet the public is unaware of it and it goes completely unregulated. There is also a complete amnesia about its early development, as Lars Drudgaard of ICAACT mentioned in one of his interviews last year. The CIA funded experiments on people without consent, through leading universities and by hiring prominent neuroscientists of the time. Since the 1950s these experiments have been brutal, destroying every aspect of a person’s life, while hiding behind curtains of national security and secrecy, but also behind psychiatric diagnoses.
The second reason is that its dark side – mind reading, thought police, surveillance, pre-crime, behavior modification; control of citizens’ behavior, tastes, dreams, feelings and wishes; of identities and personalities; not to mention the ability to torture and kill anyone from a distance – is completely ignored. All the important ethical issues dealing with the most essential aspects of being a free human being living a full human life are dismissed.

The praise of the machine in these discourses, dealing not only with transhumanist ideals but with neuroscience today, has a cost: complete disrespect for, contempt for and underestimation of human beings, at least when it comes to their bodies, abilities and biological functions. The brain, though, is seen as the only valuable thing, not just because of its complexity and mysteries, but also because it can create consciousness and awareness. We are prone to diseases, we die, we make irrational decisions, we are inconsistent, and we need someone to look up to.

In a radio interview on the Swedish programme “Filosofiska rummet” entitled “Jag och min nya hjärna” (Me and my new brain), neuroscientist Martin Ingvar referred to the human body as a “bad frame for the brain”. Questions about individual free will and personal identity were discussed, and Martin Ingvar’s point of view was very much in line with José Delgado’s some 60 years ago, with its buried history of mind control: we don’t really have any choice, a free will or, for that matter, any consistent personality. This would be reason enough to change humans into whatever someone else – an elite, for example – wishes.
Another reason why this issue of brain implants is important is, of course, the fact that both the US and the EU pour billions of dollars and euros into brain research every single year – research focused not only on understanding the brain, but also on merging human beings with machines; on using neuro-implants to correct behavior and enhance intelligence; and on creating robots and other machines that think and make autonomous, intelligent decisions – just like humans do.
Ray Kurzweil, whose predictions about future technological developments have so far proved correct, claims that within 20 years implant technology will have advanced so far that humanity will be completely transformed by it. We cannot know right now whether his prediction is right or wrong, but we do have the right to decide on the kind of future we want. I do not know whether eradicating humanity as we know it is the best future, or the only alternative. Today, we might still have a choice.
Something to think about: Can you research the depths of the human brain on mice?
Copyright Carmen Lupan
Google’s chief engineer: People will soon upload their entire brains to computers
Published time: June 20, 2013 16:02
There are around 377 million results on Google.com for the query “Can I live forever?” Ask that question of the company’s top engineer, though, and you’re likely to hear an answer that’s much more concise.
Simply put, Google’s Ray Kurzweil says immortality is only a few years away. Digital immortality, at least.
Kurzweil, 64, was only brought on to Google late last year, but that hasn’t stopped him from making headlines already. During a conference in New York City last week, the company’s director of engineering said that biotechnology is advancing so quickly that he predicts our lives will be drastically different in just a few decades.
According to Kurzweil, humans will soon be able to upload their entire brains onto computers. After then, other advancements won’t be too far behind.
“The life expectancy was 20, 1,000 years ago,” Kurzweil said over the weekend at the Global Future 2045 World Congress in New York City, CNBC’s Cadie Thompson reported. “We doubled it in 200 years. This will go into high gear within 10 and 20 years from now, probably less than 15, we will be reaching that tipping point where we add more time than has gone by because of scientific progress.”
“Somewhere between 10 and 20 years, there is going to be tremendous transformation of health and medicine,” he said.
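Kurzweil’s “tipping point” is the idea often called longevity escape velocity: the year in which medicine starts adding more than one year of remaining life expectancy per calendar year. A toy model of that arithmetic follows; the starting gain and growth rate are illustrative assumptions, not Kurzweil’s figures.

```python
def years_to_tipping_point(initial_gain=0.2, annual_growth=1.15):
    """Years until medicine adds >= 1 year of life expectancy per
    calendar year, assuming the annual gain starts at initial_gain
    years and itself grows by annual_growth each year (toy model)."""
    gain, years = initial_gain, 0
    while gain < 1.0:
        gain *= annual_growth
        years += 1
    return years

print(years_to_tipping_point())  # with these assumed numbers: 12 years
```

Under these assumed numbers the crossover lands roughly within the 10-to-20-year window Kurzweil describes, which is the point of the sketch rather than a prediction of its own.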
In his 2005 book “The Singularity Is Near,” Kurzweil predicted that ongoing achievements in biotechnology would mean that by the middle of the century, “humans will develop the means to instantly create new portions of ourselves, either biological or nonbiologicial,” so that people can have “a biological body at one time and not at another, then have it again, then change it.” He also said there will soon be “software-based humans” who will “live out on the Web, projecting bodies whenever they need or want them, including holographically projected bodies, foglet-projected bodies and physical bodies comprising nanobot swarms.”
Those nanobot swarms might still be a bit away, but given the vast capabilities already achieved since the publication of his book, Kurzweil said in New York last week that more and more of the human body will soon be synced up to computers, both for backing up our thoughts and to help stay in good health.
“There’s already fantastic therapies to overcome heart disease, cancer and every other neurological disease based on this idea of reprogramming the software,” Kurzweil said at the conference. “These are all examples of treating biology as software. …These technologies will be a 1,000 times more powerful than they were a decade ago. …These will be 1,000 times more powerful by the end of the decade. And a million times more powerful in 20 years.”
In “The Singularity Is Near,” Kurzweil acknowledged that Moore’s Law suggests the power of computers doubles, on average, every two years. At that rate, he wrote, “We’re going to become increasingly non-biological to the point where the non-biological part dominates and the biological part is not important anymore.”
“Based on conservative estimates of the amount of computation you need to functionally simulate a human brain, we’ll be able to expand the scope of our intelligence a billion-fold,” The Daily Mail quoted Kurzweil.
Kurzweil joined Google in December 2012 and is a 1999 winner of the National Medal of Technology and Innovation. In the 1970s, Kurzweil was responsible for creating the first commercial text-to-speech synthesizer.
Wireless optogenetic control of the brain
Optogenetics, a recently developed technique that uses light to map and control brain activity, requires the genetic modification of an animal’s brain cells and the insertion of optical fibers and electrical wire into its brain. The bulky wires and fibers emerge from the skull, hampering the animal’s movement and making it difficult to perform certain experiments that could lead to breakthroughs for Parkinson’s disease, addiction, depression, and spinal cord injuries.
But now, a new ultrathin, flexible device laden with light-emitting diodes and sensors, both the size of individual brain cells, promises to make optogenetics completely wireless. The 20-micrometer-thick device can be safely injected deep into the brain and controlled and powered using radio-frequency signals. Its developers say the technology could also be used in other parts of the body, with broad implications for medical diagnosis and therapy.
In optogenetics, scientists genetically modify neurons to make them sensitive to particular wavelengths of light. Shining light on the altered neurons turns them on or off, allowing scientists to control specific brain circuits and change animal behavior.
The implant is a stack of four different optoelectronics devices that the researchers create separately on flexible polymer substrates and then glue on top of one another. The topmost layer is a platinum microelectrode for stimulating and recording from neurons. Below that is a silicon photodetector, followed by a group of four microscale LEDs that are each just 50 by 50 micrometers. Last comes a platinum-based temperature sensor. The filament carrying the stack is glued onto a microneedle with a silk-based glue that dissolves once the device has been injected into the targeted spot, allowing the researchers to retract the microneedle.
The technique for making the membranous devices is not new. Developed a few years ago in Rogers’s lab, it involves growing stacks of thin semiconductor films, peeling them off one at a time with a rubber stamp, and transferring them to plastic substrates.
Scientists could use the multifunctional system to stimulate and sense the brain in a variety of ways, Bruchas explains. The microelectrode can measure the electrical signals produced by neurons, and it can also be used to stimulate them. The photodiodes ensure that the LEDs are working, but they can also be used to detect light signals generated by neurons that have been genetically modified to make certain fluorescent proteins.
The micro-LEDs, which have dimensions comparable to individual neurons, could trigger individual neurons, unlike the fiber-optic implants typically used in optogenetics, which are four times as wide. The researchers could also combine different-colored LEDs on the same device and use them to simultaneously control neurons that have been engineered to react to different colors. Such multiplexing would allow neuroscientists to analyze brain circuits more precisely, Bruchas says. Finally, the temperature sensor monitors the heat generated by the LEDs to prevent the tissue from overheating.
When the researchers placed the device—which connects to an RF power module mounted on the animal’s head—inside the brains of living mice, it caused no inflammation or infection. To test the system’s ability to alter animal behavior, the researchers embedded it near a particular group of neurons that they had genetically altered to release dopamine when cued with light. The neurochemical dopamine is involved in the body’s “reward” system, responding to things such as food or sex, and it plays a part in several addictive drugs.
ABSTRACT – Successful integration of advanced semiconductor devices with biological systems will accelerate basic scientific discoveries and their translation into clinical technologies. In neuroscience generally, and in optogenetics in particular, the ability to insert light sources, detectors, sensors, and other components into precise locations of the deep brain yields versatile and important capabilities. Here, we introduce an injectable class of cellular-scale optoelectronics that offers such features, with examples of unmatched operational modes in optogenetics, including completely wireless and programmed complex behavioral control over freely moving animals. The ability of these ultrathin, mechanically compliant, biocompatible devices to afford minimally invasive operation in the soft tissues of the mammalian brain foreshadows applications in other organ systems, with potential for broad utility in biomedical science and engineering.