Biotechnology and Human Augmentation

Mick Ryan and Therese Keane


Over the last decade, military theorists and authors in the fields of future warfare and strategy have examined in detail the potential impacts of an ongoing revolution in information technology. There has been a particular focus on the impacts of automation and artificial intelligence on military and national security affairs. This focus on silicon-based disruption has, however, meant that other equally profound technological developments may not have received the attention they deserve. One of those developments is the field of biotechnology.

There have been some breathtaking achievements in the biological realm over the last decade. Human genome sequencing has progressed from a multi-year and multi-billion dollar undertaking to a much cheaper and quicker process, far outstripping Moore’s Law. Just as those concerned with national security affairs must monitor disruptive silicon-based technologies, leaders must also be literate in the key biological issues likely to impact the future security of nations. One of the most significant matters in biotechnology is that of human augmentation and whether nations should augment military personnel to stay at the leading edge of capability.

Biotechnology and Human Augmentation

Military institutions will continue to seek competitive advantage over potential adversaries. While this is most obvious in the procurement of advanced platforms, human biotechnological advancement is gaining more attention. As a 2017 CSIS report on the Third Offset found, most new technological advances will provide only a temporary advantage, assessed to last no more than five years. In this environment, some military institutions may view the newer field of human augmentation as a more significant source of a future competitive edge.

Biological enhancement of human performance has existed for millennia. The discovery of naturally occurring compounds by our ancestors has led to many of the cognitive and physical enhancements currently available. In the contemporary environment, for example, competition in national and international sports continues to fuel a race between those creating the next generation of performance enhancements and the regulatory bodies developing methods to detect them. One example is the use of gene doping to hone the competitive edge of athletes, an off-label use of gene therapies originally developed for the treatment of debilitating genetic and acquired diseases. Despite the possibility of cancer and a range of other lethal side effects, some athletes consider these an acceptable risk. Might this not translate to adversaries adopting any possible advantage with equal disregard for ethics and safety considerations?

Gene Doping (Ralf Hiemisch)

It cannot safely be assumed that all states will share the same ethical, moral, legal, or policy principles as Western democratic societies. Based on developmental trajectories to date, contemporary military institutions should anticipate that all forms of human enhancement, whether relatively benign or highly controversial, will continue to evolve. For contemporary strategic leaders, the key is to anticipate how these developments may impact on military institutions.

Impacts on Military Institutions

Theoretically, future advances in biotechnology may permit the augmentation of cognitive performance. However, given the challenges of the biocompatibility of silicon, significant enhancements to human performance in the near future are more likely to be found in prosthetics, wearable computing, or human teaming with artificial intelligence. In the longer term, some forms of gene therapy may obviate the need for implants. Noting this, a selection of likely challenges is explored below.

Previously, integration of new groups into the military dealt with human beings.

A first order issue will be group cohesion. Military institutions have deep experience integrating newcomers into their ranks. Fundamental to effective future teaming will be evolving this approach to establish trust and group cohesion between normal humans and those who are augmented. The degree to which military leaders can and should trust augmented personnel to make decisions about saving and taking lives is likely to be an evolutionary process. It also remains to be seen whether teams composed of augmented and non-augmented humans are capable of developing trust. Experimentation and trials are needed to establish whether augmented people will bias away from decisions and input from non-augmented people, and vice versa. While institutions can learn from historical integration challenges, there is one essential difference with augmented humans. Previously, integration of new groups into the military dealt with human beings. If augmentation using neurotechnology significantly enhances cognitive function, augmented personnel may represent a separate and distinct group of future Homo sapiens.

The second challenge will be accessibility. Military institutions will need to decide what proportion of their forces will be augmented. Given that early generations of this biotechnology may be expensive, it is unlikely an entire military institution can be augmented. In that case, who will be augmented, and why? Military institutions will need to develop a value proposition to ensure physical and cognitive augmentation produces superior outcomes to the use of un-augmented personnel. Yet another question is whether military personnel will be de-augmented on leaving the service. The transition of augmented personnel into a largely unaugmented populace may be traumatic for military personnel, and for society more broadly. Even more severe in its repercussions may be transitioning de-augmented personnel into a populace where augmentation is ubiquitous.

The Role of Humans in the Age of Robots (The Luvo)

The third challenge will be conceptual. One Chinese scientist, writing in 2006, proposed that military biotechnology offers the chance to shift to a “new balance between defence and attack, giving rise to a new concept of warfare, a new balance of military force, and new attacking power.” While the emphasis of this particular article was on a more merciful form of warfare—about which we should be skeptical—it nonetheless highlights the requirement to rethink what biotechnology and human augmentation mean for how military institutions develop warfighting concepts. When cognitively enhanced humans arrive, a range of tactical, operational, and strategic concepts may become irrelevant. Strategic thinking, using a combination of biological and silicon-based technologies, could take organisations in very different directions than is presently the case. It also bears examining whether augmentation will enable greater diversity of performance (particularly in the intellectual realm) or lead to increased homogenisation of physical and cognitive performance.

The fourth challenge is obsolescence. A fundamental challenge for humans waging war is that, despite technological advances, one of the weakest links is the physical capacity of the human. As Patrick Lin has written, technology makes up for our absurd frailty. Might normal humans without augmentation therefore become irrelevant in a new construct where military institutions possess large numbers of physically and cognitively augmented personnel? It remains to be seen whether unaugmented humans might be able to compete with physically and cognitively augmented military personnel. The augmentation of humans for different physical and cognitive functions may also drive change in how military institutions operate, plan, and think strategically.

A fifth challenge is military education and training. Traditional military training emphasises the teaching of humans to achieve learning outcomes and missions as individuals and teams. In an integrated augmented/non-augmented institution, training methods must evolve to account for the different and improved capabilities of augmented personnel and to blend the capabilities of augmented and non-augmented personnel. Similarly, education for military leaders currently seeks to achieve their intellectual development in the art and science of war. If humans augmented with cognitive enhancements are present, both institutional and individual professional military education will also need to evolve. Learning delivery, as well as key learning outcomes, will have to be re-examined to account for the enhanced physical and cognitive performance of this new segment of the military workforce. Even issues as basic as fitness assessments must be re-examined. Potentially, military organisations could drop physical assessments altogether by automatically augmenting people to the institutionally desired level of performance.

The sixth challenge is one of choice. Command structures reduce an individual’s freedom to refuse, so informed consent in the military is not quite the same as for the general population. And when experimental augmentation options progress to become approved interventions, can we equate a parent considering an approved cognitive augmentation for their child with a soldier contemplating the same while operating alongside augmented peers, where the stakes are orders of magnitude greater? How much choice will military personnel have in the augmentation process? Will this be on a volunteer basis or by direction, and what are the moral, legal, and ethical implications of these stances? Speculation that augmentation may become mandatory for some professions may also apply to the military.

The final issue addressed in this article is one of ethics. Research communities are grappling with the ethical and moral implications of augmentation for society as a whole. While the first concern in evaluating the military applications of biotechnology is international humanitarian law, bioethics must also be considered. Ethical considerations pervade almost every aspect of human augmentation, and there are ethical considerations threaded through the other challenges raised in this article. For example, beyond the first order questions of whether we should augment soldiers are issues such as how much augmentation should be allowable. Military institutions should also assess the cumulative effects of multiple augmentations and the consequences of converging augmentation. There may also be a point at which a highly augmented human may cross the human-machine barrier, as well as a range of unanticipated capabilities that emerge from different augmentation combinations.

A Way Ahead

Addressing these issues must be informed by the biotechnology community, but that community alone cannot solve them. Broader involvement by senior military, government, and community leaders is required. One expert in biotechnology has written that “clearly the new forms of power being unleashed by bio-technology will have to be harnessed and used with greater wisdom than power has been used in the past.” If military institutions are to demonstrate wisdom in their investments in biotechnology, they must explore societal impacts as well as effects within military institutions.

“Splitting humankind into biological castes will destroy the foundations of liberal ideology. Liberalism still presupposes that all human beings have equal value and authority.”

It is likely some augmentation will be—at least initially—expensive. It may be beyond the means of most people in society and, potentially, many government and corporate institutions. If only military personnel are augmented, what are the impacts on civil-military relationships, and who would make this decision? In this construct, it could be unethical to deny the benefits of augmentation to wider society. However, as Yuval Harari has noted, this may see a differentiation in how society views augmented and non-augmented people—“Splitting humankind into biological castes will destroy the foundations of liberal ideology. Liberalism still presupposes that all human beings have equal value and authority.” In Western democracies, this poses profound questions about conferred advantage, societal sense of fairness and equality, and the value of individuals within society.

In Western democratic systems, the development of regulation, policy, and legal frameworks is not keeping pace with the current tempo of complicated technological advancement. It cannot be assumed that other states are allowing these deficits to slow their efforts in biotechnology, not to mention the unregulated efforts of non-state actors. While the focus of the fourth industrial revolution remains predominantly on technologies, it is also these frameworks that require a complementary revolution in the Whole of Nation enterprise if Australia (and other democracies) are to keep up with the pace of change and systematically assess the implications of human augmentation.

Conclusion

The potential to augment the physical and cognitive capacity of humans is seductive, and there will be some who do not behave responsibly in taking advantage of these new technologies. Yet humans have demonstrated in the past the capacity to responsibly manage disruptive technologies such as flight, atomic weapons, and space-based capabilities. Thoughtful academics, national security practitioners, and people from wider society must therefore be part of the discussion on why and how biotechnology might be used in the future. It is vital for the future of global security, and for the human race, that mechanisms for the responsible, ethical, and legal use of biotechnology are considered and developed. This must occur in parallel with the scientific endeavours to develop new biotechnologies.


Mick Ryan is an Australian Army officer and Commander of the Australian Defence College in Canberra, Australia. A distinguished graduate of Johns Hopkins University and the USMC Staff College and School of Advanced Warfare, he is a passionate advocate of professional education and lifelong learning. Therese Keane is a scientist with the Defence Science and Technology Group, with a background in mathematics that is now expanding into biotechnology. The views expressed are the authors’ and do not reflect the official position of the Australian Department of Defence or the Australian Government.


The Mind and the Machine. On the Conceptual and Moral Implications of Brain-Machine Interaction

Maartje Schermer

Medical Ethics and Philosophy of Medicine, ErasmusMC, Room AE 340, PO Box 2040, 3000 Rotterdam, The Netherlands

Received 2009 Nov 5; Accepted 2009 Nov 5.

Abstract

Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprosthesis. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our ‘symbolic order’—on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding distribution and attribution of responsibility.

Keywords: Brain-machine interaction, Brain-computer interfaces, Converging technologies, Cyborg, Deep brain stimulation, Moral responsibility, Neuroethics

Introduction

Within two or three decades our brains will have been entirely unravelled and made technically accessible: nanobots will be able to immerse us totally in virtual reality and connect our brains directly to the Internet. Soon after that we will expand our intellect in a spectacular manner by melding our biological brains with non-biological intelligence. At least that is the prophecy of Ray Kurzweil, futurist, transhumanist and successful inventor of, amongst other things, the electronic keyboard and the voice-recognition system.1 He is not the only one who foresees great possibilities and, what’s more, sees the borders between biological and non-biological, real and virtual, and human and machine disappearing with the greatest of ease. Some of these possibilities are actually already here. On 22 June 2004, a bundle of minuscule electrodes was implanted into the brain of the 25-year-old Matthew Nagle (who was completely paralysed due to a high spinal cord lesion) to enable him to operate a computer by means of his thoughts. This successful experiment seems to be an important step on the way to the blending of brains and computers, or humans and machines, that Kurzweil and others foresee. With regard to the actual developments in neuroscience and the convergence of neurotechnology with information, communication and nanotechnology in particular, it is still unclear how realistic the promises are. The same applies to the moral and social implications of these developments. This article offers a preliminary exploration of this area. The hypothesis is that scientific and technological developments in neuroscience and brain-machine interfacing challenge—and may contribute to shifts in—some of the culturally determined categories and classification schemes (our ‘symbolic order’), such as body, mind, human, machine, free will and responsibility (see the Introduction to this issue: Converging Technologies, Shifting Boundaries).

Firstly I will examine the expectations regarding the development of brain-machine interfaces and the forms of brain-machine interaction that already actually exist. Subsequently, I will briefly point out the moral issues raised by these new technologies, and argue that the debate on these issues will be influenced by the shifts that may take place in our symbolic order—that is, the popular categories that we use in our everyday dealings to make sense of our world—as a result of these developments. It is important to consider the consequences these technologies might have for our understanding of central organizing categories, moral concepts and important values. Section four then focuses on the categories of human and machine: are we all going to become cyborgs? Will the distinction between human and machine blur if more and more artificial components are built into the body and brain? I will argue that the answer depends partly on the context in which this question is asked, and that the concept of the person may be more suitable here than that of the human. Section five is about the distinction between body and mind. I argue that as a result of our growing neuroscientific knowledge and the mounting possibilities for technological manipulation, the mind is increasingly seen as a component of the body, and therefore also more and more in mechanical terms. This puts the concept of moral responsibility under pressure. I will illustrate the consequences of these shifts in concepts and in category-boundaries with some examples of the moral questions confronting us already.

Developments in Brain-Machine Interaction

Various publications and reports on converging technologies and brain-machine interaction speculate heatedly on future possibilities for the direct linkage of the human brain with machines, that is: some form of computer or ICT technology or other. If the neurosciences provide further insight into the precise working of the brain, ICT technology becomes increasingly powerful, the electronics become more refined and the possibilities for uniting silicon with cells more advanced, then great things must lie ahead of us—or so it seems. The popular media, but also serious governmental reports and even scientific literature, present scenarios that are suspiciously like science fiction as realistic prospects: the expansion of memory or intelligence by means of an implanted chip; the direct uploading of encyclopaedias, databases or dictionaries into the brain; a wireless connection from the brain to the Internet; thought reading or lie detection via the analysis of brain activity; direct brain-to-brain communication. A fine example comes from the report on converging technologies issued by the American National Science Foundation:

‘Fast, broadband interfaces directly between the human brain and machines will transform work in factories, control automobiles, ensure military superiority, and enable new sports, art forms and modes of interaction between people. […] New communication paradigms (brain-to-brain, brain-machine-brain, group) could be realized in 10–20 years.’ []

It is not easy to tell which prospects are realistic, which to a certain extent plausible and which are total nonsense. Some scientists make incredible claims whilst others contradict them. These claims often have utopian characteristics and seem to go beyond the border between science and science fiction. Incidentally, they are frequently presented in such a way as to create goodwill and attract financial resources. After all, impressive and perhaps, from the scientific point of view, exaggerated future scenarios have a political and ideological function too—they help to secure research funds2 and to create a certain image of these developments, either utopian or dystopian, thus steering public opinion.

Uncertainty about the facts—which expectations are realistic, which exaggerated and which altogether impossible—is great, even amongst serious scientists []. Whereas experts in cyberkinetic neurotechnology in the reputable medical journal, The Lancet, are seriously of the opinion that almost naturally-functioning, brain-driven prostheses will be possible, the editorial department of the Dutch doctors’ journal, Medisch Contact, wonders sceptically how many light-years away they are [, ]. It is precisely the convergence of knowledge and technology from very different scientific areas that makes predictions so difficult. Although claims regarding future developments sometimes seem incredible, actual functioning forms of brain-machine interaction do in fact exist, and various applications are at an advanced stage of development. Next, I will look at what is currently already possible, or what is actually being researched and developed.

Existing Brain-Machine Interactions

The first category of existing brain-machine interaction is formed by the sensory prostheses. The earliest form of brain-machine interaction is the cochlear implant, also known as the bionic ear, which has been around for about 30 years. This technology enables deaf people to hear again, by converting sound into electrical impulses that are transmitted to an electrode implanted in the inner ear, which stimulates the auditory nerve directly. While there have been fierce discussions about the desirability of the cochlear implant, nowadays they are largely accepted and are included in the normal arsenal of medical technology (e.g. []). In this same category, various research lines are currently ongoing to develop an artificial retina or ‘bionic eye’ to enable blind people to see again.

A second form of brain-machine interaction is Deep Brain Stimulation (DBS). With this technique small electrodes are surgically inserted directly into the brain. These are connected to a subcutaneously implanted neurostimulator, which sends out tiny electrical pulses to stimulate a specific brain area. This technology is used for treatment of neurological diseases such as Parkinson’s disease and Gilles de la Tourette’s syndrome. Many new indications are being studied experimentally, ranging from severe obsessive-compulsive disorders, addictions, and obesity to Alzheimer’s disease and depression. The use of this technique raises a number of ethical issues, like informed consent from vulnerable research subjects, the risks and side-effects, including effects on the patient’s mood and behaviour [].

More spectacular, and at an even earlier stage of development, is the third form of brain-machine interaction in which the brain controls a computer directly. This technology, called neuroprosthetics, enables people to use thought to control objects in the outside world such as the cursor of a computer or a robotic arm. It is being developed so that people with a high spinal cord lesion, like Matt Nagle mentioned in the introduction, can act and communicate again. An electrode in the brain receives the electrical impulses that the neurons in the motor cerebral cortex give off when the patient wants to make a specific movement. It then sends these impulses to a computer where they are translated into instructions to move the cursor or a robot that is connected to the computer. This technology offers the prospect that paraplegics or patients with locked-in syndrome could move their own wheelchair with the aid of thought-control, communicate with others through written text or voice synthesis, pick up things with the aid of artificial limbs, et cetera.
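To make the chain from neural signal to machine command concrete, the following is a minimal, purely illustrative sketch in Python of the kind of closed loop described above: firing rates are read from an imaginary electrode array, a simple linear decoder maps them to an intended cursor velocity, and the cursor position is updated. The function names, the random decoder weights and the Poisson "recordings" are hypothetical stand-ins for illustration only, not the interface of any actual clinical system.

```python
# Illustrative toy sketch of a closed-loop neuroprosthetic decoder.
# All names and numbers here are hypothetical simplifications.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 96                        # e.g. a 96-electrode array in motor cortex
W = rng.normal(size=(2, N_CHANNELS))   # decoder weights; in practice fit to training data


def read_spike_rates() -> np.ndarray:
    """Stand-in for acquiring firing rates from the implanted electrode array."""
    return rng.poisson(lam=5.0, size=N_CHANNELS).astype(float)


def decode_velocity(rates: np.ndarray) -> np.ndarray:
    """Map population firing rates to an intended 2-D cursor velocity."""
    return W @ rates


def move_cursor(position: np.ndarray, velocity: np.ndarray, dt: float = 0.05) -> np.ndarray:
    """Update the cursor position from the decoded velocity."""
    return position + dt * velocity


position = np.zeros(2)
for _ in range(100):                   # roughly 5 seconds of control at 20 Hz
    rates = read_spike_rates()
    velocity = decode_velocity(rates)
    position = move_cursor(position, velocity)

print("final cursor position:", position)
```

The point of the sketch is only to show that, conceptually, the "thought control" described above is a decoding step between measured brain activity and a machine command; everything else about real systems (calibration, filtering, safety) is omitted.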

In future, the direct cortical control described above could also be used in the further development of artificial limbs (robotic arms or legs) for people who have had limbs amputated. It is already possible to receive the signals from other muscles and control a robotic arm with them (a myoelectrical prosthesis); whether the patient’s own remaining nerves can be connected directly to a prosthesis to enable it to move as though it is the patient’s own arm is now being examined. Wireless control by the cortex would be a great step in prosthetics, further enabling patient rehabilitation. Next to motor control of the prosthesis, tactile sensors are being developed and placed in artificial hands to pass on the feeling to the patient’s own remaining nerves, thus creating a sense of touch. It is claimed that this meeting of the (micro)technological and (neuro)biological sciences will in the future lead to a significant reduction in invalidity due to amputation or even its total elimination [, ].

In the fourth form of brain-machine interaction, use is made of neurofeedback. By detecting brain activity with the aid of electroencephalography (EEG) equipment, it can be made visible to the person involved. This principle is used, for instance, in a new method for preventing epileptic attacks with the aid of Vagal Nerve Stimulation (VNS). Changes in brainwaves can be detected and used to predict an oncoming epileptic attack. This ‘warning system’ can then generate an automatic reaction from the VNS system which stimulates the vagal nerve to prevent the attack. In time, the detection electrodes could be implanted under the skull, and perhaps the direct electrical stimulation of the cerebral cortex could be used instead of the vagal nerve []. Another type of feedback system is being developed by the American army and concerns a helmet with binoculars that can draw a soldier’s attention to a danger that his brain has subconsciously detected enabling him to react faster and more adequately. The idea is that EEG can spot ‘neural signatures’ for target detection before the conscious mind becomes aware of a potential threat or target [].
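As a rough illustration of the neurofeedback idea just described—detect a predictive pattern in the EEG, then trigger stimulation automatically—here is a toy sketch in Python. The sampling rate, the band-power feature, the alarm threshold and the trigger_stimulation stand-in are assumptions chosen for illustration; they do not reflect how a real VNS warning system is implemented.

```python
# Illustrative toy sketch of a closed-loop EEG warning system.
# Feature, threshold and device call are hypothetical placeholders.
import numpy as np

FS = 256                # sampling rate in Hz (assumed)
THRESHOLD = 4.0         # alarm threshold on the feature (arbitrary)


def band_power(window: np.ndarray, low: float, high: float, fs: int = FS) -> float:
    """Average spectral power of an EEG window in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    mask = (freqs >= low) & (freqs <= high)
    return float(power[mask].mean())


def trigger_stimulation() -> None:
    """Stand-in for commanding the stimulator when a warning pattern is detected."""
    print("warning: predictive pattern detected -> stimulation triggered")


def monitor(eeg_stream: np.ndarray, window_seconds: float = 2.0) -> None:
    """Slide over the EEG stream and react when the feature crosses the threshold."""
    n = int(window_seconds * FS)
    for start in range(0, len(eeg_stream) - n, n):
        feature = band_power(eeg_stream[start:start + n], low=2.0, high=8.0)
        if feature > THRESHOLD:
            trigger_stimulation()


# Example run on synthetic noise standing in for an EEG recording.
rng = np.random.default_rng(1)
monitor(rng.normal(scale=1.5, size=60 * FS))
```

The ethically salient feature this loop makes visible is that the device acts before the person is consciously aware of anything, which is exactly what gives rise to the questions about control and responsibility discussed later in the paper.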

Finally, yet another technology that is currently making rapid advances is the so-called exoskeleton. Although this is not a form of brain-machine interaction in itself, it is a technology that will perhaps be eminently suitable for combination with said interaction in the future. An exoskeleton is an external structure that is worn around the body, or parts of it, to provide strength and power that the body does not intrinsically possess. It is chiefly being developed for applications in the army and in the health care sector.3 Theoretically, the movements of exoskeletons could also be controlled directly by thought if the technology of the aforementioned ‘neuroprostheses’ was to be developed further. If, in the future, the exoskeleton could also give feedback on feelings (touch, temperature and suchlike), the possibilities could be expanded still further.

Ethical Issues and Shifts in Our Symbolic Order

The developments described above raise various ethical questions, for instance about the safety, possible risks and side effects of new technologies. There are also speculations as to the moral problems or dangers that may arise in connection with further advances in this type of technology. The European Group on Ethics in Science and New Technologies (EGE), an influential European advisory body, warns of the risk that ICT implants will be used to control and locate people, or that they could provide third parties with access to information about the body and mind of those involved, without their permission []. There are also concerns about the position of vulnerable research subjects, patient selection and informed consent, the effects on personal identity, resource allocation, and the use of such technologies for human enhancement. Over the past few years, the neuroethical discussion on such topics has been booming (e.g. [, , , , , , ]).

It has been argued that while these ethical issues are real, they do not present anything really new []. The point of departure of this article, however, is that it is not so easy to deal adequately with the moral questions raised by these new technologies because they also challenge some of the central concepts and categories that we use in understanding and answering moral questions. Hansson, for example, states that brain implants may be “reason to reconsider our criteria for personal identity and personality changes” [; p. 523]. Moreover, these new technologies may also change some elements of our common morality itself, just like the birth control pill once helped to change sexual morality []. In brief: new technologies not only influence the ways we can act, but also the symbolic order: our organizing categories and the associated views on norms and values.

The concepts and categories we, as ordinary people, use to classify our world to make it manageable and comprehensible are subject to change. These categories also play an important part in moral judgement, since they often have a normative as well as a descriptive dimension. Categories such as human and machine, body and mind, sick and healthy, nature and culture, real and unreal are difficult to define precisely and the boundaries of such notions are always vague and movable. Time and again it takes ‘symbolic labour’ to reinterpret these categories and to re-conceptualise them and make them manageable in new situations. In part, this symbolic labour is being done by philosophers who explicitly work with concepts and definitions, refining and adjusting them; in part it is also a diffuse socio-cultural process of adaptation and emerging changes in symbolic order.4 Boundaries are repeatedly negotiated or won, and new concepts arise where old ones no longer fit the bill. An example is the concept of ‘brain death’, which arose a few decades ago as a consequence of the concurrent developments in electroencephalography, artificial respiration and organ transplantation. Here the complex interplay of technology, popular categories of life and death, and scientific and philosophical understandings of these concepts is clearly demonstrated [; p. 16].

Morality, defined as our more or less shared system of norms, values and moral judgements, is also subject to change. It is not a static ‘tool’ that we can apply to all kinds of new technologically induced moral problems. Technological and social developments influence and change our morality, although this does not apply equally to all its elements. Important values such as justice, well-being or personal autonomy are reasonably stable, but they are also such abstract notions that they are open to various and changing interpretations. The norms we observe in order to protect and promote our values depend on these interpretations and may require adjustment under new circumstances. Some norms are relatively fixed, others more contingent and changing []. The detailed and concrete moral rules of conduct derived from the general norms are the most contingent and changeable. The introduction of the notion of brain death, for example, led to adaptations in ethical norms and regulations. Likewise, the new developments in genomics research are now challenging and changing existing rules of informed consent as well as notions of privacy and rules for privacy protection [].

In the field of brain-machine interaction we can therefore also expect that certain fixed categories that we classify our world with and that structure our thinking, will evolve alongside the new technologies. This will have consequences for the ethical questions these technologies raise and for the way in which we handle both new and familiar moral issues. A first shift that can be expected concerns the distinction between human and machine. This distinction might fade as more parts of the body can be replaced with mechanical or artificial parts that become more and more ‘real’. Secondly, we might expect a blurring of boundaries between our familiar concepts of body and mind when neuroscience and neurotechnologies increasingly present the brain as an ordinary part of our body and the mind as simply one of its ‘products’. The following sections analyse these possible shifts in the symbolic order and the associated moral questions in more detail.

Symbolic Order in Motion: The Human Machine

The blurring of the boundary between human and machine brought about by brain-machine interaction forms the first challenge to the familiar categories with which we think. The more artificial parts are inserted in and added to the human body, the more uncertainty there is about where the human stops and the machine begins. Instead of human or machine, we increasingly seem to be looking at cyborgs: human and machine in a single being.

For a long time it was easy to distinguish between people and the tools, machines or devices that they used. Gradually, however, our lives have become increasingly entangled with machines—or, in the broader sense, with technology—and we have become dependent on them for practically every facet of our daily lives. Increasingly, parts of the human body itself are replaced or supplemented by technology.5 Of course, the notion that the human body works as a machine has been a leitmotiv in western culture since Descartes; this vision has enabled modern medicine, while the successes achieved substantiate the underlying beliefs about the body at the same time. The emergence of transplantation medicine was a clear step in the development of popular views on the body as a machine. Since the first kidney transplantation in 1954 and the first heart transplantation in 1967, lungs, liver, pancreas and even hands and faces have become transplantable, thus reinforcing the image of the human body as a collection of replaceable parts. Some have criticised transplantation medicine because of the ensuing mechanization and commodification of the human body.

Besides organs, more and more artefacts are now being implanted in the human body: artificial heart valves, pacemakers, knees, arterial stents and subcutaneous medicine pumps. Prostheses that are attached to the body, such as artificial limbs, are becoming increasingly advanced, and are no longer easy to detach—unlike the old-fashioned wooden leg. Experiences of patients who wear prostheses seem to indicate that people rapidly adapt to using them and fuse with them to the extent that they perceive them as natural parts of themselves. Artificial parts are rapidly included in the body scheme and come to be felt as ‘one’s own’.6

In a certain sense, then, we are familiar with the perception of the body as a sort of machine, and with the fact that fusing the human body with artificial parts is possible. Do technologies like neuroprostheses, artificial limbs and exoskeletons break through the boundary between human and machine in a fundamentally new, different fashion? Should the conceptual distinction between human and machine perhaps be revised? Many publications, both popular and more academic, suggest that the answer has to be yes. A notion that is often used in this connection is that of the cyborg: the human machine.

Cyborgs

The term ‘cyborg’—derived from cybernetic organism—was coined in 1960 by Manfred Clynes and Nathan Kline, American researchers who wrote about the ways in which the vulnerable human body could be technologically modified to meet the requirements of space travel and exploration. The figure of the cyborg appealed to the imagination and was introduced into popular culture by science fiction writers, filmmakers, cartoonists and game designers; famous cyborgs include The Six Million Dollar Man, Darth Vader and RoboCop. In the popular image, the cyborg thus stands for the merging of the human and the machine.

In recent literature, both popular and scientific, the cyborg has come to stand for all sorts of man-machine combinations and all manner of technological and biotechnological enhancements or modifications of the human body. With the publication of books like I, Cyborg or Cyborg Citizen, the concept now covers a whole area of biopolitical questions. Everything that is controversial around biotechnological interventions, that raises moral questions and controversy, that evokes simultaneous horror and admiration, is now clustered under the designation ‘cyborg’ [, , ].

The concept of the cyborg indicates that something is the matter, that boundaries are transgressed, familiar categorizations challenged, creating unease and uncertainty. For Donna Haraway, well-known for her Cyborg Manifesto [], the concept of the cyborg stands for all the breaches of boundaries and disruptions of order, not merely for the specific breaking through of the distinction between human and machine which concerns me here. The term cyborg can thus be used to describe our inability to categorize some new forms of human life or human bodies. The use of the term compels us to delay categorization—in familiar terms of human or machine—at least for the moment and so creates a space for further exploration.

Monsters

Following Mary Douglas, Martijntje Smits has called these kinds of entities, which defy categorization and challenge the familiar symbolic order, ‘monsters’ []. Smits discusses four strategies for treating these monsters, four ways of cultural adaptation to these new entities and the disruption they bring about.

The first strategy, embracing the monster, is clearly reflected in the pronouncements of adherents of the transhumanist movement. They welcome all manner of biotechnological enhancements of humans, believe in the exponential development of the possibilities to this end and place the cyborg, almost literally, on a pedestal. The second strategy is the opposite of the first and entails exorcizing the monster. Neo-Luddites or bioconservatives see biotechnology in general and the biotechnological enhancement of people in particular, as a threat to the existing natural order. They frequently refer to human nature, traditional categories and values and norms when attacking and trying to curb the new possibilities. To them, the cyborg is a real monster that has to be stopped and exorcized.

The third strategy is that of adaptation of the monster. Endeavours are made to classify the new phenomenon in terms of existing categories after all. Adaptation seems to be what is happening with regard to existing brain-machine interaction. The conceptual framework here is largely formed by the familiar medical context of prostheses and aids. The designation of the electrodes and chips implanted in the brain as neuroprostheses places them in the ethical area of therapy, medical treatment, the healing of the sick and support of the handicapped. As long as something that was naturally present but is now lost due to sickness or an accident is being replaced, brain-machine interaction can be understood as therapy and therefore accepted within the ethical limits normally assigned to medical treatments. However, for non-medical applications the problem of classification remains. Prostheses to replace functions that have been lost may be accepted relatively easily, but how are we going to regard enhancements or qualitative changes in functions, such as the addition of infrared vision to the human visual faculty? Are we only going to allow the creation of cyborgs for medical purposes, or also for military goals, or for relaxation and entertainment?

Finally, the fourth strategy is assimilation of the monster, whereby existing categories and concepts are adjusted or new ones introduced.7 In the following I will suggest that the concept of the person—in the sense in which it is used in ethics, rather than in common language—may be useful for this purpose.

Morality of Persons

In the empirical sense, cyborgs, or blends of human bodies with mechanical parts, are gradually becoming less exceptional. It therefore seems exaggerated to view people with prostheses or implants as something very exceptional or to designate them as a separate class. And this raises the question: why should we really worry about the blurring of the distinction between human and machine? This is not merely because the mixing of the flesh with steel or silicone intuitively bothers us, or because the confusion about categories scares us. More fundamentally, I believe this is because the distinction between the human and the machine also points to a significant moral distinction. The difference between the two concepts is important because it indicates a moral dividing line between two different normative categories. For most of our practices and everyday dealings the normative distinction between human and machine matters. You just treat people differently to machines—with more respect and care—and you expect something else from people than you expect from machines—responsibility and understanding, for example. Human beings can deserve praise and blame for their actions; machines cannot. The important question is therefore whether brain-machine interfaces will somehow affect the moral status of the people using them []. Do we still regard a paralysed patient with a brain implant and an exoskeleton as a human being, or do we see him as a machine? Will we consider someone with two bionic legs to be a human being or a machine?

I believe that in part this also depends on the context and the reasons for wanting to make the distinction. In the context of athletic competition, the bionic runner may be disqualified because of his supra-human capacities. In this context, he is ‘too much of a machine’ to allow fair competition with fully biological human beings. However, in the context of everyday interaction with others, a person with bionic legs is just as morally responsible for his actions as any other person. In this sense he clearly is human and not a machine. This is because, with regard to moral status, the human being as an acting, responsible moral agent is identified more with the mind than with the body. The mind is what matters in the moral sense. Whether this mind controls a wheelchair with the aid of hands, or electrical brain-generated pulses, is irrelevant to the question of who controls the wheelchair: the answer in both cases is the person concerned. Whether someone is paralysed or not does not alter the question of whether he or she is a person or not; it will of course affect the kind of person he or she is, but whether he or she is a person depends on his or her mental capacities. Ethical theories consider the possession of some minimal set of cognitive, conative and affective capacities as a condition for personhood. This means that, ethically speaking, under certain conditions, intelligent primates or Martians could be considered persons while human babies or extremely demented old people would not. Whatever the exact criteria one applies, there is no reason to doubt the fact that someone who is paralysed, someone who controls a robot by remote or someone who has a DBS electrode is a person. Certain moral entitlements, obligations and responsibilities are connected to that state of ‘being a person’. This notion therefore helps to resolve the confusion surrounding the cyborg. Rather than classifying him as either man or machine, we should be looking at personhood. Personhood is what really matters morally and this is not necessarily affected by brain-machine interfaces. As long as they do not affect personhood, brain-machine interfaces are no more special than other types of prosthesis, implants or instrumental aids that we have already grown used to.

New Views on Physical Integrity?

Nevertheless, brain-machine interfaces may in some cases cause new moral issues. A concrete example that can illustrate how shifting categories can affect concepts and ethics is that of physical integrity. How should this important ethical and legal principle be interpreted when applied to cyborgs? The principle itself is not under discussion. We want to continue to guard and protect physical integrity. The question is, however, how to define the concept ‘body’ now that biological human bodies are becoming increasingly fused with technology, and where to draw the line between those plastic, metal or silicone parts that do belong to that body and the parts that do not.

In the spring of 2007 the Dutch media paid attention to an asylum seeker who had lost an arm as a result of torture in his native country and had received a new myoelectrical prosthesis in the Netherlands. He had just got used to the arm and been trained in using it naturally when it became apparent that there were problems with the insurance and he would have to return the prosthesis. Evidently, according to the regulations a prosthesis does not belong to the body of the person in question and it does not enjoy the protection of physical integrity. However, the loss of an arm causes a great deal of damage to the person, whether the arm is natural or a well-functioning prosthesis. If prostheses become more intimately connected to and integrated with the body (also through tactile sensors), such that they become incorporated in the body scheme and are deemed a natural part of the body by the person concerned, it seems there must come a point at which such a prosthesis should be seen as belonging to the (body of the) person concerned from the moral and legal point of view. It has even been questioned whether the interception of signals that are transmitted by a wireless link from the brain to a computer or artificial limb should perhaps also fall under the protection of physical integrity [].

Symbolic Order in Motion: Body-Mind

In the previous section I assumed the distinction between body and mind to be clear-cut. The common view is that the mind controls the body (whether this body is natural or artificial) and that the mind is the seat of our personhood, and of consciousness, freedom and responsibility. In this section I examine how this view might change under the influence of new brain-machine interactions and neuroscientific developments in general and what implications this may have for ethics. I will concentrate on DBS, since this brain-machine technique has at present the clearest impact on human mind and behaviour.8 Of course, however, our categories and common views will not change because of one single new technique—rather, it is the whole constellation of neuroscientific research and (emerging) applications that may change the ways in which we understand our minds and important related concepts.

The Mind as Machine

Neuroprostheses and other brain-machine interactions call into question the demarcation between body and mind, at least in the popular perception. Technologies such as neuroprosthetics and DBS make very clear the fact that physical intervention in the brain has a direct effect on the mind of the person in question. By switching the DBS electrode on or off, the behaviour, feelings and thoughts of the patient can be changed instantly. Thoughts of a paralysed person can be translated directly into electrical pulses and physical processes. As a result of neuroscience and its applications the human mind comes to be seen more and more as a collection of neurones, a system of synapses, neurotransmitters and electrical conductors. A very complex system perhaps, but a physical system nonetheless, that can be connected directly to other systems.

For some, this causes moral concern, since it may lead us to see ourselves merely in mechanical terms:

‘The obvious temptation will be to see advances in neuroelectronics as final evidence that man is just a complex machine after all, that the brain is just a computer, that our thoughts and identity are just software. But in reality, our new powers should lead us to a different conclusion: even though we can make the brain compatible with machines to serve specific functions, the thinking being is a being of very different sorts.’ [; p. 40–41]

I believe this change in our popular view of the mind that Keiper fears is actually already taking place. Neuroscientific knowledge and understanding penetrate increasingly into our everyday lives, and it is becoming more normal to understand our behaviour and ourselves in neurobiological terms. This shift is for example noticeable in the rise of biological psychiatry. Many psychiatric syndromes that were still understood in psychoanalytical or psychodynamic terms until well into the second half of the twentieth century are now deemed biological brain diseases. The shift is also noticeable in the discussion on the biological determinants of criminal behaviour (and opportunities to change such behaviour by intervening in the brain) or in the increased attention to the biological and evolutionary roots of morality. In popular magazines and books, too, our behaviour and ourselves are increasingly presented as the direct result of our brains’ anatomy and physiology.

Scientific and technological developments have contributed to this shift. The development of EEG in the first half of the last century revealed the electrical activity of the brain for the first time, thus creating the vision of the brain as the wiring of the mind. The development of psychiatric drugs in the second half of the last century also helped naturalize our vision of the mind, picturing the brain as a neurochemical ‘soup’, a collection of synapses, neurotransmitters and receptors []. More recently the PET scan and the fMRI have made it possible to look, as it were, inside the active brain. The fact that fMRI produces such wonderful pictures of brains ‘in action’ contributes to our mechanical view of the relation between brain and behaviour. Certain areas of the brain light up if we make plans, others if an emotional memory is evoked; damage in one area explains why the psychopath has no empathy, a lesion in another correlates with poor impulse control or hot-headedness. While neurophilosophers have warned against the oversimplified idea that images are just like photographs that show us directly how the brain works, these beautiful, colourful images appeal to scientists and laymen alike [].

According to Nikolas Rose, we have come to understand ourselves increasingly in terms of a biomedical body, and our personalities and behaviour increasingly in terms of the brain. He says that a new way of thinking has taken shape: ‘In this way of thinking, all explanations of mental pathology must ‘pass through’ the brain and its neurochemistry—neurones, synapses, membranes, receptors, ion channels, neurotransmitters, enzymes, etc.’ [; p. 57]

We are experiencing what he calls a ‘neurochemical reshaping of personhood’ [; p. 59]. Likewise, Mooij has argued that the naturalistic determinism of the neurosciences is also catching on in philosophy and has now spread broadly in the current culture ‘that is to a large extent steeped in this biological thinking, in which brain and person more or less correspond’ [; p.77].

The mind is being seen more and more as a physical, bodily object (the ‘the mind = the brain’ idea), and given that the human body, as described above, has long been understood in mechanical terms, equating the mind with the brain means that the mind can also be understood in mechanical terms. As the basic distinction between mind and machine seems to drop away, the distinction between human and machine once more raises its head, but now on a more fundamental and, morally speaking, extremely relevant level. If in fact our mind, the seat of our humanity, is also a machine, how should we understand personhood in the morally relevant sense? How can we hold on to notions such as free will and moral responsibility?

Neuroscientific Revisionism

A recent notion amongst many neuroscientists and some neurophilosophers is that our experience of having a self, a free will or agency is based on a misconception. The self as a regulating, controlling authority does not exist, but is only an illusion produced by the brain.9 From this notion it seems to follow that there is no such thing as free will and that there can therefore be no real moral responsibility. Within philosophy, revisionists—who hold that our retributive intuitions and practices are unwarranted under determinism—claim that this view obliges us to revise our responsibility-attributing practices, including our legal system. Revisionism implies the need to replace some of our ordinary concepts with new ones. It has, for example, been suggested to substitute blame with ‘dispraise’ [] or to eliminate concepts connected to desert, like blame, moral praise, guilt and remorse, altogether []. On a revisionist account, praise, blame and punishment are just devices that modify conduct, and that can be more or less effective, but not more or less deserved.

Greene and Cohen assume that because of the visible advances in the neurosciences—and I take brain-machine interfaces to be part of those—the naturalistic deterministic view on human behaviour will by degrees be accepted by more and more people, and revisionism will catch on []. To their way of thinking, our moral intuitions and our folk psychology will slowly adapt to the overwhelming evidence the neurosciences present us with. The technologies enabled on the basis of neuroscientific understanding, such as DBS, neurofeedback, psychiatric drugs, and perhaps also intelligent systems or intelligent robots, can contribute to this. Little by little we will hold people less responsible and liable for their actions, according to Greene and Cohen, but will see them increasingly as determined beings who can be regulated, more or less effectively, by sanctions or rewards. They allege that questions concerning free will and responsibility will lose their power in an age in which the mechanistic nature of the human decision process will be totally understood. This will also have consequences for the legal system. ‘The law will continue to punish misdeeds, as it must for practical reasons, but the idea of distinguishing the truly, deeply guilty from those who are merely victims of neuronal circumstances will, we submit, seem pointless.’ ([; p. 1781])

Greene and Cohen, like other revisionists, advocate a shift in the nature of our criminal justice system, from a retributive to a consequentialistic system. This means a shift from a system based on liability and retribution to one based on effects and effectiveness of punishment. A consequentialistic system of this kind is, in their opinion, in keeping with the true scientific vision of hard determinism and the non-existence of free will. Greene and Cohen recognize that many people will intuitively continue to think in terms of free will and responsibility. What is more, they think that this intuitive reflex has arisen through evolution and is deeply rooted in our brains. We can hardly help thinking in these sorts of terms, despite the fact that we know better, scientifically speaking. Nonetheless, Greene and Cohen insist that we should base important, complex matters such as the criminal justice system10 on the scientific truth about ourselves and not allow ourselves to be controlled by persistent, but incorrect, intuitions.

Moral Responsibility Reconsidered

A whole body of literature has accumulated refuting this thesis and arguing that new neuroscientific evidence need not influence our moral and legal notions of responsibility (e.g. []). This literature reflects the position that currently dominates the determinism debate, that of compatibilism. According to compatibilism, determinism is reconcilable with the existence of a free will, and with responsibility. As long as we can act on the basis of reasons and as long as we are not coerced, we are sufficiently free to carry responsibility; the naturalistic neuroscientific explanatory model of behaviour is therefore not necessarily a threat to our free will and responsibility, according to the compatibilist. The question is, however, whether the compatibilist’s philosophical argumentation also convinces the average layman or neuroscientist, certainly in the light of new experimental findings and technical possibilities. How popular views on this topic will develop remains to be seen.

At the moment even adherents of the revisionist view seem convinced that we will never be able to stop thinking, or even think less, in terms of intentions, reasons, free will and responsibility. It seems almost inconceivable not to hold one another responsible for deeds and behaviour [].

Nevertheless, neuroscientific research does challenge our view of ourselves as rational, autonomous and moral beings []. Research shows, for example, that many if not most of our actions are automatic and unconsciously initiated, and that only some of our actions are deliberate and consciously based on reasons. Our rationality, moreover, is limited by various biases, such as confirmation bias, hyperbolic discounting and false memories. New findings in neuroscience, such as the fact that immaturity of the frontal lobes impedes the capacities for reasoning, decision making and impulse control in adolescents, or that exercise of self-restraint eventually leads to exhaustion of the capacity for self-control (ego-depletion), do require us to rethink the ways in which, or the degrees to which, we are actually morally responsible in specific situations and circumstances (see for example the series of articles on addiction and responsibility in AJOB Neuroscience 2007).

A more naturalized view of the human mind could thus still have important consequences, even if we do not jettison the notion of moral responsibility altogether. More grounds for ‘absence of criminal responsibility’ could, for example, be acknowledged in criminal law, whereby new technologies could play a role. Functional brain scans might provide more clarity on the degree to which an individual has control over his or her own behaviour.

Prosthetic Responsibility?

Due to their ability to directly influence complex human behaviour by intervening in the brain, brain-machine interfaces may raise interesting issues of responsibility even when we reject revisionism, as can be illustrated by the following case of a 62-year-old Parkinson’s patient treated with DBS.11

After implantation of the electrodes, this patient became euphoric and demonstrated unrestrained behaviour: he bought several houses that he could not really pay for; he bought various cars and got involved in traffic accidents; he started a relationship with a married woman and showed unacceptable and deviant sexual behaviour towards nurses; he suffered from megalomania and, furthermore, did not understand his illness at all. He was totally unaware of any problem. Attempts to improve his condition by changing the settings of the DBS failed: the manic characteristics disappeared, but the patient’s severe Parkinson’s symptoms reappeared. The patient was either in a reasonable motor state but in a manic condition lacking any self-reflection and understanding of his illness, or bedridden in a non-deviant mental state. The mania could not be treated by medication [].

Who was responsible for the uninhibited behaviour of the patient in this case? Was that still the patient himself, was it the stimulator or the neurosurgeon who implanted and adjusted the device? In a sense, the patient was ‘not himself’ during the stimulation; he behaved in a way that he never would have done without the stimulator.12 That behaviour was neither the intended nor the predicted result of the stimulation and it therefore looks as though no one can be held morally responsible for it. However, in his non-manic state when, according to his doctors, he was competent to express his will and had a good grasp of the situation, the patient chose to have the stimulator switched on again. After lengthy deliberations the doctors complied with his wishes. To what extent were his doctors also responsible for his manic behaviour? After all, they knew the consequences of switching on the stimulator again. To what extent was the patient himself subsequently to blame for getting into debt and bothering nurses?

For such decisions, the notion of ‘diachronic responsibility’ [] can be of use, indicating that a person can take responsibility for his future behaviour by taking certain actions. Suppose, for example, that DBS would prove an effective treatment for addiction, helping people to stay off drugs, alcohol or gambling, could it then rightly be considered a ‘prosthesis for willpower’ [], or even a prosthesis for responsibility? I believe that technologies that enable us to control our own behaviour better—as DBS might do in the case of addiction, or in the treatment of Obsessive Compulsive Disorder—can be understood in terms of diachronic responsibility and self-control, and thus enhance autonomy and responsibility [].

Future applications of brain-machine interaction may raise further questions: suppose a doctor were to adjust the settings of DBS without consent from the patient and cause behaviour the patient claimed not to identify with—who would then be responsible? As Clausen has pointed out, neuroprostheses may challenge our traditional concept of responsibility when imperfections in the system lead to involuntary actions of the patient []. Likewise, if the wireless signals of a neuroprosthesis were accidentally or deliberately disrupted, it would be questionable who would be responsible for the ensuing ‘actions’ of the patient.

Clearly, even without major shifts in our views on free will and responsibility, brain-machine interfaces will require us to consider questions of responsibility.

Conclusion

The convergence of neuroscientific knowledge with bio-, nano-, and information technology is already beginning to be fruitful in the field of brain-machine interaction, with applications like DBS, neuroprosthesis and neurofeedback. It is hard to predict the specific applications awaiting us in the future, although there is no shortage of wild speculations. The emergence of new technical possibilities also gives rise to shifts in our popular understanding of basic categories, and to some new moral issues. The boundaries of the human body are blurring and must be laid down anew; our views on what it is to be a person, to have a free will and to have responsibility are once more up for discussion. In this article I have explored how these shifts in categories and concepts might work out.

I have argued that the distinction between human and machine, insofar as it concerns a morally relevant distinction, does not have to be given up immediately because increasingly far-reaching physical combinations are now being made between human and mechanical parts. Depending on the context and the reasons we have for wanting to make a distinction, we will draw the line between human and machine differently. In the context of sports, a bionic limb may disqualify its user for being too much of a ‘machine’ while in another context such a limb may be qualified as an integral part of a human being and be protected under the right to physical integrity. Important general moral questions that lie behind the confusion about categories of human and machine concern moral responsibility and moral status. The concept of a person, as used in ethical theory to designate moral actors, is more precise and more useful in this context than the general category of the ‘human’ or the poly-interpretable notion of the ‘cyborg’.

In the most radical scenario of shifts in our symbolic order, the concept of ‘person’ may also come under pressure. As I have shown based on Greene and Cohen’s vision, the person as a being with a free will and moral responsibility, and as a moral actor, should, according to some, disappear from the stage altogether. Implementing such a neuroreductionistic vision of the mind and free will would have clear consequences for criminal law: it would have to be revised into a consequentialist, neo-behaviouristic system. People would then barely be considered morally responsible beings, but would instead be seen as systems that respond to praise and blame in a mechanical fashion. I believe it is unlikely that such a shift in our popular views will come about, because of the intuitive appeal of the notion of responsibility and because there are many good arguments to resist this shift. Even if we do not jettison responsibility altogether, however, brain-machine interactions raise many interesting questions regarding the distribution and attribution of responsibility.

A general lesson for ethics of emerging technologies is that such technologies necessitate renewed consideration and reinterpretation of important organizing concepts and distinctions that are crucial to moral judgement. The symbolic labour required to answer such conceptual and normative questions is at least as important for the development of converging technologies as the technical-scientific labour involved.

Acknowledgments

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Footnotes

1See his website www.kurzweilAI.net for these and other future forecasts

2A lot of research in the field of brain-machine interaction and other converging technologies is carried out by DARPA, the research agency of the US Department of Defense. In 2003, for example, DARPA subsidized research into brain-machine interfaces to the tune of 24 million dollars [, ].

3In the former case, these applications might enable soldiers to carry heavy rucksacks more easily, in the latter they could, for example, help a nurse to lift a heavy patient on his or her own.

4How these processes interact with one another and how socio-cultural changes influence philosophical thinking and vice versa is an interesting and complicated question, that I cannot start to answer here.

5Not that this is an entirely new phenomenon, seeing that all sorts of bodily prostheses have existed for centuries; the first artificial leg dates back to 300 BC. Other prostheses that we more or less attach to our bodies are, for instance, spectacles, hairpieces and hearing aids. But the insertion of external parts into the human body is more recent.

6More attention has recently been paid to the importance of the body for the development and working of our consciousness. In the Embodied Mind model, body and mind are seen far more as interwoven than in the past (e.g. []). This can have even further implications for brain-machine interaction; if for example, neuroprostheses change our physical, bodily dealings with the world, this may also have consequences for the development of the brain and for our consciousness. Neuroscientists have even claimed that: ‘It may sound like science fiction but if human brain regions involved in bodily self-consciousness were to be monitored and manipulated online via a machine, then not only will the boundary between user and robot become unclear, but human identity may change, as such bodily signals are crucial for the self and the ‘I’ of conscious experience’ []

7The distinction between adaptation and assimilation is not very clear—it depends on what one would wish to call an ‘adaptation’ or ‘adjustment’ of a concept.

8By contrast, in the case of the neuroprosthesis discussed in the previous section, it is mainly the mind that influences the body, through the interface.

9‘Obviously we have thoughts. Ad nauseam, one might say. What is deceptive, is the idea that these thoughts control our behaviour. In my opinion, that idea is no more than a side effect of our social behaviour. […] The idea that we control our deeds with our thoughts, that is an illusion’, says cognitive neuroscientist Victor Lamme, echoing his colleague Wegner. [; p. 22, ].

10Likewise, moral views on responsibility may change. An instrumental, neo-behaviouristic vision of morality and of the moral practice of holding one another responsible might arise. Holding one another responsible may still prove a very effective way of regulating behaviour, even if it is not based on the actual existence of responsibility and free will. As long as people change their behaviour under the influence of moral praise and blame, there is no reason to throw the concept of responsibility overboard. From this point of view, there would no longer be any relevant difference between a human being and any other system that is sensitive to praise and blame, such as an intelligent robot or computer system. If such a system were sensitive to moral judgements and responded to them with the desired behaviour, then on this view it would qualify as a moral actor just as much as a human being would.

11This case has also been discussed by [, , ].

12Of course, this problem is not exclusive to DBS; some medications can have similar effects. However, with DBS the changes are more rapid and more specific and can be controlled literally by a remote control (theoretically, the patient’s behaviour can thus be influenced without the patient’s approval once the electrode is in his brain). These characteristics do make DBS different from more traditional means of influencing behaviour, though I agree with an anonymous reviewer that this is more a matter of degree than an absolute qualitative difference.

References

1. Anonymus Kunstarm met gevoel. Med Contact. 2007;62:246.
2. Bell EW, Mathieu G, Racine E. Preparing the ethical future of deep brain stimulation. Surg Neurol. 2009 [PubMed]
3. Berghmans R, Wert G. Wilsbekwaamheid in de context van elektrostimulatie van de hersenen. Ned Tijdschr Geneeskd. 2004;148:1373–75. [PubMed]
4. Blanke O, Aspell JE (2009) Brain technologies raise unprecedented ethical challenges. Nature, 458, 703 [PubMed]
5. Blume S. Histories of cochlear implantation. Soc Sci Med. 1999;49:1257–68. doi: 10.1016/S0277-9536(99)00164-1. [PubMed] [CrossRef]
6. Burg W. Dynamic Ethics. J Value Inq. 2003;37:13–34. doi: 10.1023/A:1024009125065. [CrossRef]
7. Clausen J. Moving minds: ethical aspects of neural motor prostheses. Biotechnol J. 2008;3:1493–1501. doi: 10.1002/biot.200800244. [PubMed] [CrossRef]
8. Clausen J. Man, machine and in between. Nature. 2009;457:1080–1081. doi: 10.1038/4571080a. [PubMed] [CrossRef]
9. EGE—European Group on Ethics in Science and New Technologies (2005) Ethical aspects of ICT implants in the human body. opinion no. 20. Retrieved from http://ec.europa.eu/european_group_ethics/docs/avis20_en.pdf
10. Ford PJ. Neurosurgical implants: clinical protocol considerations. Camb Q Healthc Ethics. 2007;16:308–311. doi: 10.1017/S096318010707034X. [PubMed] [CrossRef]
11. Ford P, Kubu C. Ameliorating or exacerbating: surgical ‘prosthesis’ in addiction. Am J Bioeth. 2007;7:29–32. [PubMed]
12. Foster KR. Engineering the brain. In: Illes J, editor. Neuroethics. Defining issues in theory, practice and policy. Oxford: Oxford University; 2006. pp. 185–200.
13. Gillet G. Cyborgs and moral identity. J Med Ethics. 2006;32:79–83. doi: 10.1136/jme.2005.012583. [PMC free article] [PubMed] [CrossRef]
14. Glannon W. Bioethics and the brain. Oxford: Oxford University; 2007.
15. Glannon W. Our brains are not us. Bioethics. 2009;23:321–329. doi: 10.1111/j.1467-8519.2009.01727.x. [PubMed] [CrossRef]
16. Glannon W. Stimulating brains, altering minds. J Med Ethics. 2009;35:289–292. doi: 10.1136/jme.2008.027789. [PubMed] [CrossRef]
17. Graham-Rowe D (2006) Catching seizures before they occur. Retrieved September 15, 2009, from http://www.technologyreview.com/biotech/17124/
18. Gray CH. Cyborg citizen: politics in the posthuman age. New York: Routledge; 2001.
19. Greene J, Cohen J. For the law, neuroscience changes nothing and everything. Phil Trans Roy Soc Lond B. 2004;359:1775–1785. doi: 10.1098/rstb.2004.1546. [PMC free article] [PubMed] [CrossRef]
20. Hansson SO. Implant ethics. J Med Ethics. 2005;31:519–525. doi: 10.1136/jme.2004.009803. [PMC free article] [PubMed] [CrossRef]
21. Haraway D. A cyborg manifesto: science, technology, and socialist-feminism in the late twentieth century. In: Haraway D, editor. Simians, cyborgs and women: the reinvention of nature. New York: Routledge; 1991. pp. 149–181.
22. Healy D. The creation of psychopharmacology. Cambridge: Harvard University; 2000.
23. Hochberg L, Taylor D. Intuitive prosthetic limb control. Lancet. 2007;369:345–346. doi: 10.1016/S0140-6736(07)60164-0. [PubMed] [CrossRef]
24. Hochberg L, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442:164–171. doi: 10.1038/nature04970. [PubMed] [CrossRef]
25. Hughes J. Citizen cyborg: why democratic societies must respond to the redesigned human of the future. Cambridge: Westview; 2004.
26. Keiper A. The age of neuroelectronics. The New Atlantis Winter. 2006;2006:4–41. [PubMed]
27. Keulartz JM, Korthals MS, Swierstra T, editors. Pragmatist ethics for a technological culture. Dordrecht: Kluwer Academic; 2002.
28. Keulartz J, Schermer M, Korthals M, Swierstra T. Ethics in technological culture: a programmatic proposal for a pragmatist approach. Sci Technol human values. 2004;29(1):3–29. doi: 10.1177/0162243903259188. [PubMed] [CrossRef]
29. Koops B, van Schooten H, Prinsen B (2004) Recht naar binnen kijken: een toekomstverkenning van huisrecht, lichamelijke integriteit en nieuwe opsporingstechnieken, eJure, ITER series 70. Retrieved from http://www.ejure.nl/mode=display/downloads/dossier_id=296/id=301/Deel_70_Koops.pdf
30. Lamme V. De geest uit de fles. Dies rede. Amsterdam: University of Amsterdam; 2006.
31. Levy N. Neuroethics. Cambridge: Cambridge University; 2007.
32. Leentjes AFG, et al. Manipuleerbare wilsbekwaamheid: een ethisch probleem bij elektrostimulatie van de nucleus subthalamicus voor een ernstige ziekte van Parkinson. Ned Tijdschr Geneeskd. 2004;148:1394–1397. [PubMed]
33. Lunshof JE, Chadwick R, Vorhaus DB, Church GB. From genetic privacy to open consent. Nat Rev Genet. 2008;9:406–411. doi: 10.1038/nrg2360. [PubMed] [CrossRef]
34. Mooij A. Toerekeningsvatbaarheid. Over handelingsvrijheid. Amsterdam: Boom; 2004.
35. Moreno JD. DARPA on your mind. Cerebrum. 2004;6:91–99. [PubMed]
36. Morse S. Voluntary control of behavior and responsibility. Am J Bioeth. 2007;7:12–13. [PubMed]
37. Morse SJ. Determinism and the death of folk psychology: two challenges to responsibility from neuroscience. Minnesota Journal of Law, Science and Technology. 2008;9:1–35.
38. Rabins P, et al. Scientific and ethical issues related to deep brain stimulation for disorders of mood, behavior and thought. Arch Gen Psychiatry. 2009;66:931–37. doi: 10.1001/archgenpsychiatry.2009.113. [PMC free article] [PubMed] [CrossRef]
39. Roco MC, Bainbridge WS. Converging technologies for improving human performance. Arlington: National Science Foundation; 2002.
40. Rose N. Neurochemical selves. Society. 2003;41:46–59. doi: 10.1007/BF02688204. [CrossRef]
41. Roskies A. Neuroimaging and inferential distance. Neuroethics. 2008;1:19–30. doi: 10.1007/s12152-007-9003-3. [CrossRef]
42. Schermer M. Gedraag Je! Ethische aspecten van gedragsbeïnvloeding door nieuwe technologie in de gezondheidszorg. Rotterdam: NVBE; 2007.
43. Smart JJC. Free will, praise and blame. In: Watson G, editor. Free will. 2. New York: Oxford University; 2003. pp. 58–71.
44. Smits M. Taming monsters: the cultural domestication of new technology. Tech Soc. 2006;28:489–504. doi: 10.1016/j.techsoc.2006.09.008. [CrossRef]
45. Strawson P. Freedom and resentment. In: Watson G, editor. Free will. 2. New York: Oxford University; 2003. pp. 72–93.
46. Synofzik M, Schlaepfer TE. Stimulating personality: ethical criteria for deep brain stimulation in psychiatric patients for enhancement purposes. Biotechnol J. 2008;3:1511–1520. doi: 10.1002/biot.200800187. [PubMed] [CrossRef]
47. Vargas M. The revisionist’s guide to responsibility. Philos Stud. 2005;125:399–429. doi: 10.1007/s11098-005-7783-z. [CrossRef]
48. Warwick K. I, cyborg. London: Century; 2004.
49. Wegner DM. The illusion of conscious will. Cambridge: MIT; 2002.
50. Weinberger S (2007) Pentagon to merge the next-gen binoculars with soldiers’ brain. Retrieved September 15 2009 from http://www.wired.com/gadgets/miscellaneous/news/2007/05/binoculars

The psychoacoustic effect of infrasonic, sonic and ultrasonic frequencies within non-lethal military warfare techniques.

Exploring the use of audio to influence humans physically and psychologically as a means of non-lethal warfare throughout the 20th and 21st centuries.

Infrasonic Frequencies

The term ‘infrasound’ refers to the inaudible frequency range below the lower limit of human hearing, around 20Hz. Infrasound is often associated with acts of nature: sources such as the Fuego volcano in Guatemala have emitted 120 decibels of infrasonic sound at around 10Hz (Georgia State University, no date). Occurrences like this have prompted a large amount of infrasonic monitoring in support of natural disaster detection. Beyond detection, this frequency range, inaudible to us, has been researched throughout the decades to investigate its effects on the human body. One such line of research is its application to military usage.

Throughout the 20th and 21st centuries, a vast amount of research has been collected, and interest gained, in the use of non-lethal weapons (NLW), which are intended to immobilise or impair targets without causing permanent or severe damage to the human body. As technologies have developed, it is apparent that military bodies around the world seek to create weapons resulting in “wars without death” (Scott & Monitor, 2010). However, the creation of new weapons raises many issues, which may be one reason there is little evidence for the deployment of NLW. It is apparent that some concepts of using infrasound may violate disarmament treaties; for example, the European Parliament in 1999 called for a:

“global ban on all research and development, whether military or civilian, which seeks to apply knowledge of the chemical, electrical, sound vibration or other functioning of the human brain to the development of human beings, including a ban on actual or possible deployment of such systems” (Giordano, 2014).

Thus, military bodies may take a critical view before accepting such research. However, it is important to understand at this point in the study that this applies not only to infrasonic sound but to ultrasonic sound as well.

Despite this, it is the alleged properties of infrasound, when applied to humans, that have made the field of interest for military application. Table 1 shows a notable number of applications for which infrasound could possibly be, or has been, used:

Infrasound has therefore attracted a large amount of interest in the creation of NLW. Given the technical depth to which infrasound can be applied within weaponry, a very in-depth analysis of each device would be required. The present chapter analyses collated research in order to provide greater insight into the effects of infrasound on the human body, allowing us to establish a background before exploring the outcome of the research tested within this study.

Physical and Psychological Effects

Infrasound has been utilised as a means of sonic warfare for physical human impact dating back to World War 1. Acoustic imaging was the earliest use of infrasonic sound during World War 2, supporting radar- and sonar-style techniques for detecting the locations of enemy artillery (Ihde, 2015). Despite there being many references to acoustic weaponry as early as World War 2, it is in the 1960s that documented research becomes more available. One such device is described in Secret Weapons of the Third Reich (Simon, 1971):

“…design consisted of a parabolic reflector, 3.2 meters in diameter, having a short tube which was the combustion chamber or sound generator, extending to the rear from the vertex of the parabola. The chamber was fed at the rear by two coaxial nozzles, the outer nozzle emitting methane, and the central nozzle oxygen. The length of the chamber was one-quarter the wavelength of the sound in air. Upon initiation, the first shock wave was reflected back from the open end of the chamber and initiated the second explosion. The frequency was from 800 to 1500 impulses per second. The main lobe of the sound intensity pattern had a 65 degree angle of opening, and at 60 meters’ distance on the axis a pressure of 1000 microbars had been measured. No physiological experiments were conducted, but it was estimated that at such a pressure it would take from 30 to 40 seconds to kill a man. At greater ranges, perhaps up to 300 meters, the effect, although not lethal, would be very painful and would probably disable a man for an appreciable length of time. Vision would be affected, and low-level exposures would cause point sources of light to appear as lines.”
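
As a rough numerical check on the figures quoted above (the calculations are not given in the source), a quarter-wavelength chamber driven at 800 to 1500 impulses per second would be roughly 6 to 11cm long, and the quoted pressure of 1000 microbars corresponds to approximately 134dB SPL. The short Python sketch below reproduces these illustrative calculations, assuming a speed of sound in air of 343m/s and the conventional 20µPa reference pressure.

```python
import math

# Illustrative check of the quoted figures; assumes standard conditions
# (speed of sound in air c = 343 m/s) and the usual 20 uPa reference
# pressure for dB SPL.
c = 343.0  # speed of sound in air, m/s

# Chamber length = one quarter of the wavelength at the stated impulse rates.
for f in (800.0, 1500.0):
    quarter_wavelength = (c / f) / 4.0
    print(f"{f:6.0f} Hz -> quarter-wavelength chamber of about {quarter_wavelength:.3f} m")

# The quoted pressure of 1000 microbars at 60 m, expressed as dB SPL.
pressure_pa = 1000e-6 * 1e5   # 1000 microbar -> pascals (1 bar = 1e5 Pa)
reference_pa = 20e-6          # 20 uPa reference pressure
spl_db = 20.0 * math.log10(pressure_pa / reference_pa)
print(f"1000 microbar is roughly {spl_db:.0f} dB SPL")
```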

This device, known as the ‘Wirbelwind Kanone’, is perhaps the only known fully developed infrasonic weapon created to physically affect its targets, intended to counter enemy aircraft and infantry by creating a vortex of sound (Crab, 2008). Moreover, there are cases that suggest a possible application of infrasound to cause physical damage to the ear. (Harding, Bohne, Lee, & Salt, 2007) cite that frequency ranges around 4Hz, at high decibels, are perhaps able to damage parts of the ear: the vibrational movement created by the infrasonic frequency results in large movements of cochlear fluid, and the intermixing of cochlear fluid is hypothesised to result in lasting damage.

In contrast to this, other studies suggest the mechanisms of the ear respond normally to infrasonic sound. As previously mentioned, the central mechanism of the ear is the cochlea; within the cochlea there are two types of sensory cell, the inner hair cells (IHC) and the outer hair cells (OHC) (Cook, 1999). IHC responses are dependent on velocity and, because of the fluid within the ear, the stimulus weakens as the frequency lowers; in contrast, the OHC have a greater response to low frequency ranges such as infrasound. As a result, the effect of infrasound on the IHC could be considered inefficient, and the ear’s physical response to infrasound therefore normal (Salt & Hullar, 2010). This does not, however, mean that the stimulation of the IHC and OHC has no psychological effect on the brain. Exposure to levels above 80dB between 0.5Hz and 10Hz, causing these vibrational movements within the ear, is said to cause psychological changes such as fear, sorrow, depression, anxiety, nausea, chest pressure and hallucinations (ECRIP, 2008). It is this effect in the middle ear that (Goodman, 2010, p. 18) cites as being discovered by military personnel during World War 1 and World War 2.

The effect of emotional and psychological change as a result of infrasonic exposure can later be found during the Second Indochina War. In 1973, the United States deployed the Urban Funk Campaign, a psychoacoustic attack intended to alter the mental states of its enemies (Goodman, 2010). The device utilised both infrasonic and ultrasonic frequencies, emitting high-decibel oscillations from a mounted helicopter onto the Vietnamese ground troops (Toffler, Alvin, & Toffler, 1995). Though there is no record of the specification of this device, one can assume that the U.S. military had tested the infrasonic frequency ranges in order to achieve a psychological effect on its targets. As previously cited by (Goodman, 2010), it is documented that the frequency range of 7Hz is thought to instil effects of uneasiness, anxiety, fear and anger. (Walonick, 1990) reports in an experiment that frequencies below 8Hz caused agitation and uneasiness in participants. Goodman also supports this, stating: “It has been noted that certain infrasonic frequencies plug straight into the algorithms of the brain and nervous system. Frequencies of 7 hertz, for example, coincide with theta rhythms, thought to induce moods of fear and anger.” (Goodman, 2010). It is this psychological change that leads us to question the reasoning behind it; many of the studies in the next chapter of this study suggest that resonance is perhaps the reason why there could be an emotional and psychological change in humans exposed to infrasonic frequencies.

Resonance

All objects have a property known as their resonant frequency; this involves the “re-enforcement of vibrations of a receiving system due to a similarity to the frequencies of the source” (Pellegrino & Productions, 1996). It is because this property is held within all matter that sound can be applied as a means of inducing resonance within the human body. Resonance within the human body is thought to create the psychological effects mentioned in the previous chapter.

The limited literature on the infrasonic frequency range has allowed an array of speculation and conspiracy theory around the use of infrasonic frequencies as a means of non-lethal weaponry and crowd control. This could plausibly suggest that military applications of non-lethal audio weapons have not been made publicly available. A large influence on the development and notable usage of infrasonic frequencies as a means of deterrence was the development of a low-frequency acoustic device by the French scientist Vladimir Gavreau (Lothes, 2004). It is reported that Gavreau discovered the infrasound weapon as a result of a resonant frequency being emitted from a motor-driven ventilator within his office (Vassilatos, no date). Following this, Gavreau developed a device with military application that emitted infrasonic sine wave frequencies around 7 hertz, said to induce painful symptoms affecting his laboratory staff with immediate effect (Vassilatos, no date); other reported results include feelings of fear and flight. Following this discovery Gavreau published discussions that highlighted the effect of infrasonic frequencies on humans, citing them as a possible cause of city dwellers’ stress (Broner, 2003). Gavreau’s discovery has been widely researched and discussed throughout the acoustic warfare field. Vinokur drew from Gavreau’s invention in his publication The Case of the Mythical Beast (Vinokur, 1993):

“. . . sound with a frequency of less than 16 Hz is inaudible. It’s called infrasound, and its effect on human beings is not completely understood. We do know, however, that high-intensity infrasound causes headache, fatigue, and anxiety . . . Our internal organs (heart, liver, stomach, kidneys) are attached to the bones by elastic connective tissue, and at low frequencies may be considered simple oscillators. The natural frequencies of most of them are below 12 Hz (which is in the infrasonic range). Thus, the organs may resonate. Of course, the amplitude of any resonance vibrations depends significantly on damping, which transforms mechanical energy into thermal energy . . . this amplitude decreases as the damping increases. Also, the amplitude is proportional to the amplitude of the harmonic force causing the vibrations . . .”
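
Vinokur’s picture of the organs as damped simple oscillators can be illustrated with the textbook amplitude response of a driven, damped oscillator. The Python sketch below is purely illustrative: the 8Hz natural frequency and the damping ratios are assumed values chosen to show how the response peaks near resonance and how damping suppresses that peak, exactly as the quote describes; they are not measured properties of any organ.

```python
import math

def amplification(drive_hz, natural_hz=8.0, damping_ratio=0.2):
    """Steady-state amplitude of a driven, damped oscillator relative to a static load."""
    r = drive_hz / natural_hz
    return 1.0 / math.sqrt((1.0 - r**2) ** 2 + (2.0 * damping_ratio * r) ** 2)

# The response peaks near the (assumed) 8 Hz natural frequency; heavier
# damping flattens the peak, as the quote notes.
for zeta in (0.1, 0.5):
    row = ", ".join(f"{f} Hz: {amplification(f, damping_ratio=zeta):.2f}"
                    for f in (2, 4, 7, 8, 12, 20))
    print(f"damping ratio {zeta}: {row}")
```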

It is also apparent that such frequencies have been used in many varying fields, providing evidence of their effects outside military and police usage. The British researchers O’Keeffe and Angliss conducted an experiment in 2003 to test the effects of infrasonic frequencies on the human brain. Four musical pieces were played to 700 participants, two of which had 17 hertz frequencies played underneath them without the participants’ knowledge. Results found that 22% of the participants experienced a feeling of anxiety and fear (Stathatos, no date). A similar experiment, entitled ‘The Haunt Project’ and conducted by the Anomalistic Psychology Research Unit of Goldsmiths College, London, subjected 79 volunteers to a varying array of infrasonic frequencies. The primary analysis of the study cites that “63 (79.7%) of the participants felt dizzy or odd, 9 (11.4%) experienced sadness, 7 (8.9%) experienced terror” (French, Haque, Bunton-Stasyshyn, & Davis, 2009). It is not unreasonable to state that, within the varying amount of research conducted in this field, there is little evidence to suggest why infrasound actually has an effect on human emotion.

Acoustic scientists investigating the effect of noise pollution on workers determine that every organ within the human body has a resonant frequency and its own ‘acoustic properties’, and this is discussed as a possible reason why frequency has an effect on the human body (Prashanth & Venugopalachar, 2010). Additionally, Mahindra states that the resonant frequency of the eyeball has a direct effect on emotional states of anxiety and stress (Prashanth & Venugopalachar, 2010). (Braithwaite, 2006), who has also researched infrasonic resonance, cites that the change to fearful emotions may be a direct response to infrasound inducing resonance within the human eyeball. To support this statement, research conducted for NASA (Aerospace Medical Research Laboratory, 1976) found that the resonant frequency of the human eyeball sits at around 18 hertz, just below the audible range of the human ear. Referring back to the use of the 7Hz frequency, additional support is gathered from many texts referring to resonant frequencies within the body, with the likes of (Broner, 2003) stating “…it has also been alleged that this is the resonant frequency of the body’s organs…”. One could perhaps draw the conclusion that resonance is the catalyst for psychological change when exposed to infrasonic sound. Resonant frequencies within the body allow for a direct correspondence with the frequency rhythms of the brain, which cohere with the emotional state of every human. (Davies & Honours, no date) cites that “Many of the most profound effects of sound are attributed to infrasound in the region of 7Hz. This corresponds with the median alpha-rhythm frequencies of the brain.”. In addition to this, we also see discussed by (Sargeant, 2001):

“The frequency that is thought to be most dangerous to humans is between 7 and 8Hz. This is the resonant frequency of flesh and, theoretically, it can rupture internal organs if loud enough. Seven hertz is also the average frequency of the brain’s alpha rhythms; thus this frequency has been described as dangerous but also relaxing. Whether exposure to such infrasound can trigger epileptic seizures, as some fear, remains unclear; experimental data on exposure to such frequencies gives a variety of results. It should be noted, however, that the strobe light effect associated with triggering epileptic seizures flashes at an equivalent rhythm. Frequencies below 50Hz commonly lose their coherence and are perceived to pulse or fluctuate, which is analogous to the strobing beat of a modulated light.”

It is apparent that the frequency range around 7Hz has been widely discussed as changing a subject’s emotional state when exposed. As a result of this research, this study will gather primary research to understand the effect of 7Hz on the human body and analyse the emotional effect it has within the tests formulated within this study.

Sonic Frequencies

The frequencies that form our own perception of sound sit between 20Hz and 20,000Hz. Though this constitutes only a small part of the frequency spectrum, our auditory range can play an important role in the body, interacting with senses such as equilibrioception (balance), proprioception and kinaesthesia (joint motion and acceleration), time perception, nociception (pain), magnetoception (direction), and thermoception (temperature differences) (HEYS, 2011). In order to fully understand how the military application of sound can impact subjects psychologically, we must first understand how sound affects us mentally. Drawing from research collated from pioneers of the sound-emotion connection, (Berlyne, 1971), (Meyer & Meyer, 1961), (Juslin & Sloboda, 2001) and (Liljeström, 2011) suggest six main mechanisms at work when we perceive sound:

  • Brain Stem Reflex is the effect of the brain recognising the acoustic properties of a sound and reacting instinctively, much as with the American ‘Long Range Acoustic Device’ discussed later within this section.
  • Evaluative conditioning is the effect of association between setting and sound; if the brain has heard a specific sound repeatedly in a specific setting, this triggers an emotional connection between the two.
  • Emotional contagion is the perception of emotion expressed in certain sounds; where audio sounds sad, for example, the brain recognises and takes on that expression of emotion.
  • Visual imagery relates to the brain’s association between a certain sound and a visual image or sensation.
  • Episodic memory is the effect of the brain recognising a sound as a memory, evoking the situations in which that sound was previously present.
  • Sound expectancy is the brain’s mechanism of anticipating how a sound will unfold, based on previous experience.

It is these mechanisms within the brain that allow us to draw the association between sound and the techniques developed for military application in order to alter the state of mind of subjects. Whether by creating resonance within the brain or by creating associations between a sound and a setting, many key pieces of research provide insight into the use of these techniques. It is with these mechanisms that we can gain an understanding of why audible sound can affect our mental state.

Psychological Effects

The use of sound within our auditory range to affect targets negatively dates from the mid-1900s. A large amount of the previously explored research in this field refers to the United States military and its Psychological Operations units (PsyOps) (United States Military, 1996). In many cases we see sound applied in order to exploit the six mechanisms discussed in chapter 3.2, allowing its use for non-lethal warfare. As early as World War 2 we see strong evidence for the deployment of sound to affect the psychology of enemies. The U.S. military’s 23rd Special Troops, often referred to as the ‘Ghost Army’, were a troop of sound and radio engineers assigned to fabricate the sounds of marching troops, tanks and landing craft, allowing for sonic deception of their enemies (Goodman, 2009, p. 41). This was perhaps a result of the work described in Philip Gerard’s book Secret Soldiers: How a Troupe of American Artists, Designers and Sonic Wizards Won World War II’s Battles of Deception Against the Germans:

“…screaming whine caused by a siren deliberately designed into the aircraft…it instilled a paralysing panic in those on the ground…For Division 17 of the National Research Defence Committee, the lesson was clear: sound could terrify soldiers…So they decided to take the concept to the next level and develop a sonic ‘bomb’…The idea of a sonic ‘bomb’ never quite panned out, so the engineers shifted their work toward battlefield deception.” (Gerard, 2002)

It is these tactics and technologies used within the early years of the military’s application of sound that allow for a greater insight into their usage. We also see many deployments of sonic frequencies used to impact subjects negatively in varied military approaches such as interrogation, crowd control and creating fear in enemies. (BBC, 2003) cites the U.S. PsyOps units’ use of heavy metal and children’s music as a means of interrogation during warfare. Sergeant Mark Hadsell of PsyOps states: “If you play it for 24 hours, your brain and body functions start to slide, your train of thought slows down and your will is broken. That’s when we come in and talk to them.” (BBC, 2003). However, though it is well documented that music and sound have been used within interrogation scenarios, this does not necessarily further our understanding of how sound affects the brain, as the effect can be seen as more physiological, due to the sensory deprivation caused, as opposed to psychological change. Psychological change can in fact be seen within the Second Indochina War, in operations similar to the Urban Funk Campaign discussed in section 3.1. Known as “Wandering Soul”, PsyOps units within the war attempted to exploit the emotional contagion, evaluative conditioning and visual imagery of the enemy. John Pilger describes this in his book Heroes when discussing a PsyOps officer in Vietnam:

“His favourite tape was called “Wandering Soul,” and as we lifted out of Snuffy he explained, “what we’re doing today is psyching out the enemy. And that’s where Wandering Soul comes in. Now you’ve got to understand the Vietnamese way of life to realise the power behind Wandering Soul. You see, the Vietnamese people worship their ancestors and they take a lot of notice of the spirits and stuff like that. Well, what we’re going to do here is broadcast the voices of the ancestors — you know, ghosts which we’ve simulated in our studios. These ghosts, these ancestors, are going to tell the Vietcong to stop messing with the people’s right to live freely, or the people are going to disown them.” The helicopter dropped to within twenty feet of the trees. The PsyOps captain threw a switch and a voice reverberated from two loudspeakers attached to the machine-gun mounting. While the voice hissed and hooted, a sergeant hurled out handfuls of leaflets which made the same threats in writing.” (Pilger, 1986).

These techniques have allowed for a greater amount of research in the 21st century, particularly within the U.S. military. In February 2004, the American Technology Corporation secured a $1 million contract to provide U.S. forces in Iraq with Long Range Acoustic Devices (LRAD) (Goodman, 2009, p. 21). The LRAD focuses a directional 15° to 30° beam of sound between 1kHz and 5kHz, reaching a distance of around 5,500 meters (LRAD, 2015). The LRAD has been used as a means of crowd control and has been identified in scenarios such as repelling pirates off Somalia and deterring suicide bombers in the Middle East (Goodman, 2009). It is the LRAD’s highly directional, high-decibel sound that perhaps allows us to see the effect of the Brain Stem Reflex discussed in section 3.1. The impact of such a high-decibel sound could be believed to instil a natural, instinctive flight mechanism in the brain; it is also documented that the effect of the LRAD can cause nausea or dizziness. Amy Teibel writes, when discussing the Israeli use of a similar LRAD device:

“A young Palestinian covers his ears from a sound, launched by a new weapon of the Israeli army, during a demonstration against the construction of Israel’s separation barrier at the West Bank village of Bil’in Friday, June 3, 2005. Israel is considering using an unusual new weapon against Jewish settlers who resist this summer’s Gaza Strip evacuation, a device that emits penetrating bursts of sound that send targets reeling with dizziness and nausea.” (Teibel, 2005).

However, when discussing the LRAD device we must also consider its use of ultrasound, as the device also applies ultrasound within its mechanism; this will be discussed in section 4.3.1. It is clear that the effect of sonic weapons used to impact the human body physiologically and alter the subject’s mental state is of large importance when researching acoustic warfare weapons.

Brainwave Entrainment

The effect of sound on our brain often leads back to a common theme of resonance. Brainwave entrainment (often referred to as neural entrainment) is the use of certain frequencies to activate bands of electrical wave resonance within the brain, inducing particular neurological states in the body. The preliminary proof of concept and main body of contextual research in this field stems from the German professor of physics Heinrich Wilhelm Dove, who in 1841 made discoveries in brainwave entrainment (BWE) through what he termed ‘binaural beats’ (Kliempt, Ruta, Ogston, Landeck, & Martay, 1999). This method of entrainment occurs when two coherent frequencies within our audible range are presented separately to the left and right ear. Each frequency enters the auditory canal and travels to the cochlea; the basilar membrane resonates at the frequency heard, and this is passed to the brain, allowing us to recognise the frequency (Cook, 1999). The brain then detects the phase difference between the two frequencies: rather than responding to each frequency separately, it responds to the difference between the two. This produces a ‘third’ frequency perceived at an infrasonic rate, below 20–30Hz. The stimulus induced in this way encourages a specific cerebral wave corresponding to characteristic states of mind (Caterina Filimon, n.d.). Goodman states “…resonating with alpha and theta rhythms in the brain known to produce moods of fear, anxiety or anger” (Goodman, 2009, p. 18).
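
A minimal sketch of the binaural-beat mechanism described above, assuming NumPy is available, is given below: two audible tones 7Hz apart are generated for the left and right channels, and the 7Hz ‘beat’ is present in neither channel but arises only as the difference detected by the listener. The 200Hz carrier and five-second duration are arbitrary illustrative choices; over loudspeakers rather than headphones the two tones would simply mix acoustically and the binaural effect would be lost.

```python
import numpy as np

sample_rate = 44100        # samples per second
duration_s = 5.0           # length of the generated signal
carrier_hz = 200.0         # tone sent to the left ear (illustrative choice)
beat_hz = 7.0              # intended entrainment rate: difference between the ears

t = np.arange(int(sample_rate * duration_s)) / sample_rate
left = np.sin(2 * np.pi * carrier_hz * t)                 # 200 Hz tone
right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)    # 207 Hz tone

# Interleave into a stereo array; the 7 Hz beat exists only as the perceived
# difference between the channels, not as a component of either waveform.
stereo = np.stack([left, right], axis=1).astype(np.float32)

# 'stereo' can be written to a WAV file (e.g. with scipy.io.wavfile.write)
# and played back over headphones.
print(stereo.shape)  # (220500, 2)
```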

This technique has been applied to many non-warfare scenarios, which helps us understand the significance of its application. Many musicians and directors have found ways of utilising neural entrainment to instil fear in listeners. The film director Gaspar Noé and the musician Thomas Bangalter used two differing frequencies to instil a beta-wave rate in the audience, creating a feeling of tension in particular scenes of the film Irreversible (Stathatos, no date).

Articles posted in The Times and New Scientist in 1973 document the use of a device called a ‘Squawk Box’ (New Scientist, 1973) by the British military in Northern Ireland. The device, mounted on a vehicle, emitted two frequencies of marginal difference in order to produce a resonant beat in a particular frequency band, similar to the effect discussed previously (Spannered, 2009). The article in New Scientist reports that the audio produced psychoacoustic effects of giddiness, nausea and fainting, or merely a “spooky” psychological effect on targets. It also goes on to say that “Most people are intensely annoyed by the device and have a compelling wish to be somewhere else.” (New Scientist, 1973). Though the exact frequency range that was created is discussed in many accounts of military application, it is important to draw from research to discover which areas of brainwave entrainment may affect the human body negatively.

Contrary to what has been described previously, the use of binaural beats has also been actively discussed as a means of stress relief, with research such as that collated by (Huang & Charyton, 2008) citing: “People suffering from cognitive functioning deficits, stress, pain, headache/migraines, PMS, and behavioural problems benefited from BWE. However, more controlled trials are needed to test additional protocols with outcomes.” Reviews of the physiological effects of brainwave entrainment, such as those by (Wahbeh, Calabrese, & Zwickey, 2007) and (Huang & Charyton, 2008), report increased serotonin levels in the body as a result of brainwave entrainment. With research such as (Mercola, 2015) discussing the role of increased serotonin levels in positively affecting feelings of anxiety, one may perhaps see the benefits of BWE. However, it is in fact discussed by (L. Fannin, Ph.D, no date) that it is the effect of BWE on frequency ranges that are already heightened within our brain that causes a negative effect. Jeffrey L. Fannin, Ph.D, discusses:

“Anxiety — Too much beta activity may cause you to feel afraid or have thoughts of fear towards things that you are usually calm. I would imagine that if your brainwaves get high enough in the beta range, you will begin to notice a fear of things that are not normal to freak out over.

Stress — Though there are many good things that come with beta waves, there is also a huge possibility that they may stress you out. They are linked to increased stress, which is why it is important to learn how to shift your brainwaves when needed.

Paranoia — Paranoid schizophrenics are actually able to generate much more high beta (25–30Hz) activity than the average population. Are beta brainwaves the cause of schizophrenia? No, they are a side-effect and schizophrenia is a much more complex disease. Increasing beta brainwaves will not increase the likelihood of you becoming crazy, but they could make you feel more paranoid than usual.”

Ultrasonic frequencies

The spectrum beyond the human audible range is defined as ultrasound, being above 20,000Hz. Ultrasound maintains very directional wave forms, due to its smaller wavelength, and is easily absorbed by materials, which allows for a greater range of applications than other frequency bandwidths (Carovac, Smajlovic, & Junuzovic, 2011). Because of this, ultrasound is utilised largely in the medical industry, with a particular focus on digital diagnostic imaging. Diagnostic ultrasound scanners operate at around 2 to 18 megahertz, hundreds of times above human perception (Carovac, Smajlovic, & Junuzovic, 2011). The mechanism for this process depends on the echo time, or Doppler shift, of the ultrasonic sound reflected from internal organs or soft tissue, resulting in a 2D or 3D image (Georgia State University, no date). Ultrasonic sound is often produced using either piezoelectric or magnetostrictive transducers, driven by the output of an electronic oscillator within the device (Georgia State University, no date). The preliminary applications of ultrasound can be seen as a means of detection, similar to that of infrasound discussed in section 3.1, with the employment of submarine detectors in World War 1 (Carovac, Smajlovic, & Junuzovic, 2011). This depended on technologies similar to those used today in the medical industry; since then, however, research into ultrasonic frequencies has grown in many differing fields. Though the use of ultrasound has not been as widely investigated as the infrasonic and sonic frequency fields, we can still see a common interest in its application for military use.
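
The pulse-echo principle mentioned above can be illustrated with a simple depth calculation: the scanner measures the round-trip time of the reflected pulse and converts it to depth using an assumed average speed of sound in soft tissue of roughly 1540m/s. The figures in the sketch below are textbook approximations rather than values taken from the cited sources.

```python
SPEED_IN_TISSUE_M_S = 1540.0  # conventional average speed of sound in soft tissue

def reflector_depth_cm(echo_time_us: float) -> float:
    """Depth of a reflector from the round-trip echo time (microseconds)."""
    echo_time_s = echo_time_us * 1e-6
    # Halve the path because the pulse travels out to the reflector and back.
    return (SPEED_IN_TISSUE_M_S * echo_time_s / 2.0) * 100.0

for echo_us in (13, 65, 130):
    print(f"echo after {echo_us:3d} us -> reflector at about {reflector_depth_cm(echo_us):.1f} cm")
```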

 

Hypersonic Ultrasound

‘Hypersonic sound’ can be referred to simply as the focusing of ultrasound. Much as light can be focused into a laser, hypersonic sound works on a similar principle, with a speaker producing a highly directional, focused beam of sound. The speaker emits low-level ultrasound at around 100,000 vibrations per second, and the audible sound is created in the air as the beam travels, as opposed to regular speakers, which create the sound waves at the face of the speaker (Norris, 2004). However, as previously mentioned in section 3.2.2, hypersonic sound used in devices like the LRAD does in fact utilise audible frequencies too, and it is important to understand how the two are combined in its application.
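
The general principle at work here is that of a parametric loudspeaker: an audio signal is amplitude-modulated onto an ultrasonic carrier, and the nonlinearity of the air itself demodulates the envelope back into audible sound along a narrow beam. The sketch below uses a 100kHz carrier to match the figure quoted above; the modulation shown is the textbook double-sideband scheme, and the proprietary signal processing used in commercial devices such as HSS or the LRAD is not described in the sources and is not reproduced here.

```python
import numpy as np

fs = 400_000                          # sample rate high enough to represent the carrier
t = np.arange(int(fs * 0.01)) / fs    # 10 ms of signal

audio = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone to be projected
carrier = np.sin(2 * np.pi * 100_000 * t)  # 100 kHz ultrasonic carrier

m = 0.8                                    # modulation depth
emitted = (1.0 + m * audio) * carrier      # amplitude-modulated ultrasound sent to the transducer

# Nonlinear propagation in air acts, very roughly, like a squaring detector
# followed by low-pass filtering, so the audible envelope reappears in the
# beam. A crude envelope proxy is used here purely for illustration.
recovered = np.abs(emitted)
print(emitted.shape, recovered.max())
```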

The military usage of hypersonic ultrasound is perhaps a technical advancement of the acoustic deception techniques used in World War II by the ‘Ghost Army’ and of the Urban Funk Campaign in Vietnam, both discussed in section 3.2.2. What these two techniques lacked, however, was the use of ultrasound to make the audio highly directional. Woody Norris, who would later found the LRAD Corporation, discussed the military application of ultrasound in a lecture on hypersonic sound in 2004, stating that the device had been deployed by the U.S. military in Iraq to deceive the enemy by creating the sound of ‘fake’ troops. He also discussed the use of the device to alter the temperature of enemies, stating:

“We make a version with this which puts out 155 decibels. Pain is 120. So it allows you to go nearly a mile away and communicate with people, and there can be a public beach just off to the side, and they don’t even know it’s turned on. We sell those to the military presently for about 70,000 dollars, and they’re buying them as fast as we can make them.” (Norris, 2004).
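
Norris’s quoted figures can be loosely checked against simple spherical spreading, under which sound pressure level falls by 20·log10(r) decibels relative to a 1 metre reference. The sketch below assumes the 155dB figure is referenced at 1 metre and ignores atmospheric absorption and the device’s directivity, both of which matter in practice; even so, the level remaining at roughly a mile is consistent with the claim that intelligible sound can still be delivered at that range.

```python
import math

def spl_at_distance(spl_at_1m_db: float, distance_m: float) -> float:
    """Sound pressure level under ideal spherical spreading (no absorption, no directivity)."""
    return spl_at_1m_db - 20.0 * math.log10(distance_m)

source_level_db = 155.0                      # figure quoted by Norris; 1 m reference assumed
for distance in (1, 10, 100, 1000, 1609):    # 1609 m is roughly one mile
    print(f"{distance:5d} m: about {spl_at_distance(source_level_db, distance):.0f} dB")
```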

This gives us a great insight into the development of techniques used within prior wars and the technological advancement over those discussed in previous sections. We can also see that the application of ultrasound has in fact been popular with the military, and one could assume that there may be further, unpublished development within this field. Moreover, (Goodman, 2009) cites: “There is, however, evidence to suggest that ultrasound has been considered by military and law enforcement authorities as a likely technology for so-called ‘non-lethal weapons’ for use in crowd control and ‘coercive interrogation’.”, which is evident to this day.

We can also see the application of hypersonic ultrasound as a means of public crowd control with the likes of the Mosquito Anti-Social Device (M.A.D), which emits high-frequency sound, around 20,000Hz and above, with a range of around 15 to 20 meters (Goodman, 2009). On the website of Compound Security Systems, the company behind the M.A.D, it is specified that the sine wave frequency played by the device, at 20kHz, can only be heard by those under 25 years of age (Compound Security, 2015). The system is thus targeted as a youth deterrent. The company goes on to state that field trials suggest teenagers were acutely aware of the ultrasonic tone and would usually wish to move away after around ten minutes (Compound Security, 2015). This suggests that the device’s intended use is to create auditory discomfort for the target audience, so that they move away from a specific area.

Similar devices have also been developed previously; though the military and law enforcement have denied the use of ultrasonic devices, it is apparent that such devices exist. Instructions and a patent for a ‘Phasor Pain Field Generator’ can be found, which emits ultrasonic frequencies at 20,000Hz to 25,000Hz, presented as a schematic for a handheld self-defence device and specified as “intended for Law Enforcement, Personal Or For Qualified Acoustical Research” (Free Information Society, no date) & (De Laro Research, 2014). The description of this device also states: “if at any time head or neck feels swollen or you feel light headed or sick to your stomach, it is an indication that you are being affected. Sometimes you may experience a continuous ringing in the ears even after the device is turned off” (Free Information Society, no date). One can conclude from the descriptions of both the M.A.D and the ‘Phasor Pain Field Generator’ that the intended outcome is for the target to feel discomfort. It is not unreasonable to state that, as technology has progressed within ultrasonic research and as more psychological effects of inaudible sounds are discovered, the potential military operations of sonic warfare have widened. These techniques of applying frequencies around 20,000Hz as a deterrent in such ‘self-defence’ devices allow for more primary research within this field to be explored. As a result, this study will collect primary research within this area to allow for greater insight into the application of these techniques.

Sources:


Aerospace Medical Research Laboratory. (1976). Mechanical resonant frequency of the human eye ‘in vivo’. Retrieved from https://archive.org/stream/DTIC_ADA030476/DTIC_ADA030476_djvu.txt

BBC. (2003). Sesame Street breaks Iraqi POWs. BBC Middle East. Retrieved from http://news.bbc.co.uk/1/hi/world/middle_east/3042907.stm

Bahaistudies. (n.d.). Binaural Beats. Retrieved from http://www.bahaistudies.net/asma/binaural.pdf

Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Meredith

Braithwaite, D. (2006). Good vibrations: The case for a specific effect of infrasound in instances of anomalous experience has yet to be empirically demonstrated. Retrieved from http://www.academia.edu/1191555/Good_Vibrations_The_Case_for_a_Specific_Effect_of_Infrasound_in_Instances_of_Anomalous_Experience_has_Yet_to_be_Empirically_Demonstrated

Broner, N. (2003). The effects of low frequency noise on people. Journal of Sound and Vibration. Retrieved from http://waubrafoundation.org.au/wp-content/uploads/2015/02/Broner-The-effects-of-low-frequency-noise-on-people.pdf

Carovac, A., Smajlovic, F., & Junuzovic, D. (2011). Application of ultrasound in medicine. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3564184/

Caterina Filimon, R. (n.d.). Beneficial Subliminal Music: Binaural Beats, Hemi-Sync and Metamusic. Department of Composition and Musicology, University of Arts George Enescu, 1790–5095, 104–105

Compound Security. (2015). The mosquito MK4 anti-loitering device. Retrieved from http://www.compoundsecurity.co.uk/security-equipment-mosquito-mk4-anti-loitering-device

Cook, P. R. (Ed.) (1999). Music, cognition, and computerized sound: An introduction to psychoacoustics (1st ed.). Cambridge, MA: The MIT Press

Crab, S. (2008). A short history of sound weapons: Infrasound. Retrieved from https://crab.wordpress.com/2008/01/14/a-short-history-of-sound-weapons-pt2-infrasound/

Davies, A. & Honours, B. (n.d.). Acoustic trauma: Bioeffects of sound. Retrieved from http://schizophonia.com/wp-content/uploads/2015/01/Alex_Davies_Acoustic_Trauma.pdf

De Laro Research. (2014). Ultrasonic Phaser Pain Field Generator. Retrieved from http://delarosaresearch.com/uploads/Ultrasonic_Phaser_Pain_Field_Generator_users_manual.pdf

ECRIP. (2008). Infrasound. Retrieved from http://www.eastcoastrip.org/did-you-know/infrasound

Simon, L. E. (1971). Secret Weapons of the Third Reich: German Research in World War II

Fahy, F. & Walker, J. (Eds.) (2004). Advanced applications in acoustics, noise, and vibration (1st ed.). New York: Taylor & Francis

Free Information Society. (n.d.). Phasor Pain Field Generator. Retrieved from http://www.freeinfosociety.com/electronics/schematics/weaponry/painfieldgenerator.pdf

French, C. C., Haque, U., Bunton-Stasyshyn, R., & Davis, R. (2009). The ‘haunt’ project: An attempt to build a ‘haunted’ room by manipulating complex electromagnetic fields and infrasound. Cortex. Retrieved from http://www.each.usp.br/rvicente/HauntProject.pdf

Georgia State University. (n.d.). Ultrasonic Sound. Retrieved from http://hyperphysics.phy-astr.gsu.edu/hbase/sound/usound.html

Georgia State University. (n.d.). Infrasonic Sound. Retrieved from http://hyperphysics.phy-astr.gsu.edu/hbase/sound/infrasound.html

Gerard, P. (2002). Secret Soldiers: How a Troupe of American Artists, Designers and Sonic Wizards Won World War II’s Battles of Deception Against the Germans (1st ed.)

Giordano, J. (Ed.) (2014). Neurotechnology in national security and defense: Practical considerations, Neuroethical concerns. United Kingdom: CRC Press

Goodman, S. (2010). Sonic Warfare: Sound, Affect, and the Ecology of Fear. Cambridge, MA: MIT Press

Heys, T. (2011). Sonic, Infrasonic, and Ultrasonic Frequencies: The utilisation of waveforms as weapons, apparatus for psychological manipulation, and as instruments of physiological influence by industrial, entertainment, and military organisations.

Harding, G. W., Bohne, B. A., Lee, S. C., & Salt, A. N. (2007). Effect of infrasound on cochlear damage from exposure to a 4 kHz octave band of noise. Hearing Research. Retrieved from http://www.sciencedirect.com/science/article/pii/S0378595507000329

Howard, D. M. & Angus, J. A. S. (2009). Acoustics and Psychoacoustics (4th ed). Amsterdam: Elsevier Science

Huang, T. & Charyton, C. (2008). A comprehensive review of the psychological effects of brainwave entrainment. Alternative therapies in health and medicine. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/18780583

Ihde, D. (2015). Acoustic Technics. United States: Lexington Books

Illingworth, E. (2012). Sonic Warfare and Music both Exploit the Negative Effects of Sound. What are the Similarities, if any, between these two Distant Practices?

Juslin, P. & Sloboda, J. A. (Eds.) (2001). Music and emotion: Theory and research. New York: Oxford University Press

Kliempt, P., Ruta, D., Ogston, S., Landeck, A., & Martay, K. (1999). Hemispheric-synchronisation during anaesthesia: A double-blind randomised trial using audiotapes for intra-operative nociception control. Anaesthesia. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10460529

Fannin, J. L. (n.d.). Understanding Your Brainwaves. Retrieved from http://drjoedispenza.com/files/understanding-brainwaves_white_paper.pdf

LRAD Corporation. (2015). Fact sheet. Retrieved from http://www.lradx.com/about/lrad-public-safety-applications-fact-sheet/

Levitin, D. J. (2007). This Is Your Brain on Music: The Science of a Human Obsession. United States: New American Library

Liljeström, S. (2011). Emotional Reactions to Music: Prevalence and Contributing Factors

Lothes, S. (2004). Acoustic noise. Retrieved from http://www.zemos98.org/controlsonoro/wp-content/uploads/pdf/acoustic_noise_Roman_Vinour.pdf

Mackinlay, C. (n.d.). Beta brain waves: 12 Hz to 40 Hz. Retrieved from http://mentalhealthdaily.com/2014/04/10/beta-brain-waves-12-hz-to-40-hz/

Mercola. (2015) Social anxiety disorder linked to high serotonin levels. Retrieved from http://articles.mercola.com/sites/articles/archive/2015/07/02/social-anxiety-disorder.aspx

Meyer, L. B. & Meyer, D. J. (1961). Emotion and meaning in music. Chicago, IL: University of Chicago Press

New Scientist. (1973). New Scientist, September Issue. Reed Business Information

Norris, W. (2004). Hypersonic Sound and other inventions (Lecture). Retrieved from https://www.ted.com/talks/woody_norris_invents_amazing_things?language=en

Pellegrino, R. (1996). Sound deserves its own pollution category. Retrieved from http://www.ronpellegrinoselectronicartsproductions.org/Pages/NsNSndPltnFndmntPrncpls.html/SndDsrvsOwnPltnCtgry.html

Pilger, J. (1986). Heroes. Random House.

Prashanth, M. & Venugopalachar, S. (2010). The possible influence of noise frequency components on the health of exposed industrial workers. Noise & health. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/21173483

Salt, A. N. & Hullar, T. E. (2010). Responses of the ear to low frequency sounds, infrasound and wind turbines. Hearing Research. Retrieved from http://www.sciencedirect.com/science/article/pii/S0378595510003126

Sargeant, J. (2001). Sonic Boom. Retrieved from http://www.zemos98.org/controlsonoro/2008/03/08/sonic-doom-by-jack-sargeant/

Scott, R. L. (2010). War without death? How non-lethal weapons could change warfare. The Christian Science Monitor. Retrieved from http://www.csmonitor.com/Commentary/Opinion/2010/0311/War-without-death-How-non-lethal-weapons-could-change-warfare

Spannered. (2009). A brief history of sonic warfare. Retrieved from http://www.spannered.org/features/806/

Stathatos, S. (n.d.). Sounds in Silence: Infrasound and Resonance

Teibel, A. (2005). Israel may use sound weapon on settlers. Retrieved from http://www.freerepublic.com/focus/news/1420380/posts

Toffler, A. & Toffler, H. (1995). War and anti-war: Making sense of today’s global chaos. London: Time Warner Paperbacks
United States Military. (1996). Doctrine for Joint Psychological Operations. Retrieved from http://www.iwar.org.uk/psyops/resources/us/jp3_53.pdf

Vassilatos, G. (n.d.). The Sonic Doom of Vladimir Gavreau. Retrieved from https://borderlandsciences.org/journal/vol/52/n04/Vassilatos_on_Vladimir_Gavreau.html

Vinokur, R. (1993). The Case of the Mythical Beast. USA: Quantum

Wahbeh, H., Calabrese, C., & Zwickey, H. (2007). Binaural beat technology in humans: A pilot study to assess Psychologic and physiologic effects. The Journal of Alternative and Complementary Medicine

Walonick, D. S. (1990). Journal of Borderland Research. Retrieved from https://borderlandsciences.org/journal/vol/46/n03–4/Walonick_Effects_6–10hz_ELF_on_Brain_Waves.html

(This article is part of the paper ‘The psychoacoustic effect of infrasonic, sonic and ultrasonic frequencies within non-lethal military warfare techniques’ by Ryan Littlefield, copyright of The University of Portsmouth)

Photoacoustic Communication: Technology Uses Lasers to Transmit Audible Messages to Specific People

Photoacoustic communication approach could send warning messages through the air without requiring a receiving device

WASHINGTON — Researchers have demonstrated that a laser can transmit an audible message to a person without any type of receiver equipment. The ability to send highly targeted audio signals over the air could be used to communicate across noisy rooms or warn individuals of a dangerous situation such as an active shooter.

MIT Used a Laser to Transmit Audio Directly Into a Person’s Ear

Caption: Ryan M. Sullenberger and Charles M. Wynn developed a way to use eye- and skin-safe laser light to transmit a highly targeted audible message to a person without any type of receiver equipment.

 Image Credit: Massachusetts Institute of Technology’s Lincoln Laboratory

In The Optical Society (OSA) journal Optics Letters, researchers from the Massachusetts Institute of Technology’s Lincoln Laboratory report using two different laser-based methods to transmit various tones, music and recorded speech at a conversational volume.

“Our system can be used from some distance away to beam information directly to someone’s ear,” said research team leader Charles M. Wynn. “It is the first system that uses lasers that are fully safe for the eyes and skin to localize an audible signal to a particular person in any setting.”

Creating sound from air

The new approaches are based on the photoacoustic effect, which occurs when a material forms sound waves after absorbing light. In this case, the researchers used water vapor in the air to absorb light and create sound.

“This can work even in relatively dry conditions because there is almost always a little water in the air, especially around people,” said Wynn. “We found that we don’t need a lot of water if we use a laser wavelength that is very strongly absorbed by water. This was key because the stronger absorption leads to more sound.”

One of the new sound transmission methods grew from a technique called dynamic photoacoustic spectroscopy (DPAS), which the researchers previously developed for chemical detection. In the earlier work, they discovered that scanning, or sweeping, a laser beam at the speed of sound could improve chemical detection.

“The speed of sound is a very special speed at which to work,” said Ryan M. Sullenberger, first author of the paper. “In this new paper, we show that sweeping a laser beam at the speed of sound at a wavelength absorbed by water can be used as an efficient way to create sound.”

Image result for hearing through laser
Caption: The researchers use water vapor in the air to absorb light and create sound. By sweeping the laser they can create an audio signal that can only be heard at a certain distance from the transmitter, allowing it to be localized to one person.

Image Credit: Massachusetts Institute of Technology’s Lincoln Laboratory

For the DPAS-related approach, the researchers change the length of the laser sweeps to encode different frequencies, or audible pitches, in the light. One unique aspect of this laser sweeping technique is that the signal can only be heard at a certain distance from the transmitter. This means that a message could be sent to an individual, rather than everyone who crosses the beam of light. It also opens the possibility of targeting a message to multiple individuals.
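To make the sweep-length encoding concrete, here is a small Python sketch of the relationship implied above: if each sweep is traced at the speed of sound and the sweeps repeat back to back, then a sweep of length L takes L/c seconds, so the repetition rate, and hence the audible pitch, is roughly f = c / L. This is an illustrative back-of-the-envelope model only, not the Lincoln Laboratory implementation; the speed of sound and the target pitches are assumptions.

```python
# Illustrative sketch: relate laser sweep length to audible pitch for a
# dynamic-photoacoustic-style transmitter, assuming each sweep is traced at
# the speed of sound and repeated back to back (f = c / L).

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C (assumed)

def sweep_length_for_pitch(freq_hz: float) -> float:
    """Sweep length (metres) whose repetition rate equals the desired pitch."""
    return SPEED_OF_SOUND / freq_hz

def pitch_for_sweep_length(length_m: float) -> float:
    """Audible pitch (Hz) produced by back-to-back sweeps of a given length."""
    return SPEED_OF_SOUND / length_m

if __name__ == "__main__":
    for note, f in [("A4", 440.0), ("A5", 880.0), ("C6", 1046.5)]:
        print(f"{note}: {f:7.1f} Hz -> sweep length {sweep_length_for_pitch(f) * 100:5.1f} cm")
```

The localisation property mentioned in the text follows from the same geometry, since the acoustic contributions only reinforce one another where the sweep keeps pace with the travelling sound; the paper itself should be consulted for the exact treatment.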

Laboratory tests

In the lab, the researchers showed that commercially available equipment could transmit sound to a person more than 2.5 meters away at 60 decibels using the laser sweeping technique. They believe that the system could be easily scaled up to longer distances. They also tested a traditional photoacoustic method that doesn’t require sweeping the laser and encodes the audio message by modulating the power of the laser beam.

“There are tradeoffs between the two techniques,” said Sullenberger. “The traditional photoacoustics method provides sound with higher fidelity, whereas the laser sweeping provides sound with louder audio.”
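The “traditional” photoacoustic method mentioned here amounts to amplitude-modulating the laser power with the audio waveform, so that the generated sound pressure tracks the message directly. The sketch below is a minimal, hypothetical illustration of that idea; the power level, modulation depth and the 440 Hz test tone are arbitrary assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the amplitude-modulation idea: the optical power is the
# carrier, and the audio signal rides on top of it as a modulation envelope.

SAMPLE_RATE = 48_000      # Hz (assumed)
DURATION = 0.5            # seconds
MEAN_POWER_W = 1.0        # average optical power, arbitrary placeholder
MODULATION_DEPTH = 0.5    # fraction of mean power swung by the audio

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
audio = np.sin(2 * np.pi * 440.0 * t)   # example message: a 440 Hz tone

# Laser power envelope: always positive, varying around the mean with the audio.
laser_power = MEAN_POWER_W * (1.0 + MODULATION_DEPTH * audio)

# In the photoacoustic picture, the generated sound pressure is (to first
# order) proportional to the time-varying part of the absorbed power.
sound_pressure = laser_power - laser_power.mean()

print(f"power range: {laser_power.min():.2f} W to {laser_power.max():.2f} W")
```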

Next, the researchers plan to demonstrate the methods outdoors at longer ranges. “We hope that this will eventually become a commercial technology,” said Sullenberger. “There are a lot of exciting possibilities, and we want to develop the communication technology in ways that are useful.”

Paper: R. M. Sullenberger, S. Kaushik, C. M. Wynn. “Photoacoustic communications: delivering audible signals via absorption of light by atmospheric H2O,” Opt. Lett., 44, 3, 622-625 (2019).
DOI: https://doi.org/10.1364/OL.44.000622.

About Optics Letters
Optics Letters offers rapid dissemination of new results in all areas of optics with short, original, peer-reviewed communications. Optics Letters covers the latest research in optical science, including optical measurements, optical components and devices, atmospheric optics, biomedical optics, Fourier optics, integrated optics, optical processing, optoelectronics, lasers, nonlinear optics, optical storage and holography, optical coherence, polarization, quantum electronics, ultrafast optical phenomena, photonic crystals and fiber optics.

About The Optical Society

Founded in 1916, The Optical Society (OSA) is the leading professional organization for scientists, engineers, students and business leaders who fuel discoveries, shape real-life applications and accelerate achievements in the science of light. Through world-renowned publications, meetings and membership initiatives, OSA provides quality research, inspired interactions and dedicated resources for its extensive global network of optics and photonics experts. For more information, visit osa.org.

Original article:  https://www.osa.org/en-us/about_osa/newsroom/news_releases/2019/new_technology_uses_lasers_to_transmit_audible_mes/?fbclid=IwAR3VlfrmqiiY_gUh2tjVy5m-TxiK7zoQJILMQK62wGkderU98wxwbC0Tf6c

Remote Control of the Brain and Human Nervous System


Since the beginning of the millennium, the USA and the European Union have invested billions of dollars and euros in brain research. As a result of this research, detailed maps of the brain were developed, including maps of the areas that control the activity of different body organs or parts, and of the areas where higher brain activities, such as speech and thought, take place. The brain activities corresponding to different actions in those areas were also deciphered.

Thanks to knowledge of the specific locations of different centers in the brain and of the frequencies of the neuronal activity within them, teams of physicians are now capable of helping many people who were previously, for different reasons, unable to take part in normal life. There are prostheses that are controlled directly from the brain centers which normally control the movement of the limbs (see this), enabling people who have lost a limb to use a prosthesis in a way similar to how other people use their limbs. Higher brain activities have been harnessed as well. In 2006 scientists placed an implant into the brain of a completely paralyzed man which transferred the activity of his brain to different devices and enabled him to open his e-mail, control his TV set and control his robotic arm. Other paralyzed people have been able to search the Internet, play computer games and drive their electric wheelchairs (see this).

Thanks to extensive brain research, computers have been taught to understand neuronal activity to such an extent that they are now capable of using the activity of our brains to reproduce our perceptions. Canadian scientists demonstrated an experiment in which a computer interpreted electroencephalographic recordings from the brain to produce a painting of the face that the subject of the experiment was perceiving (see this).

Working in the opposite direction, data processed by a computer in a way that makes it intelligible to the nervous system can be transmitted into the brain to produce a new reality there. For people whose retinal photoreceptors have stopped working, sight can be at least partially restored when an implant is placed in the brain and connected to a camera mounted on spectacles. In this case the camera on the spectacles transmits light frequencies to the implant, and the implant re-transmits them at frequencies “understood” by the neurons that process visual perception (see this).

In California, scientists have developed a device which can register brain waves and, through analysis, identify consonants and vowels among them, in this way transforming our thoughts into words. A paralyzed man was able to use this device to write without using a keyboard. At present the accuracy of the device reaches 90%. Scientists believe that within five years they will manage to develop a smartphone to which the device could be connected (see this).

Just as with visual perception, once the algorithms by which the brain processes words are known, it is possible to generate the patterns of different words in a computer and transmit them into the brain at ultrasound frequencies, and in this way produce particular “thoughts” in the human brain.

Everybody will easily be won over by the proposal that, instead of typing or searching with a mouse, their computer or cell phone could react directly to their brain’s activity, taking down their thoughts directly into documents or carrying out operations that have just occurred to them.

As a matter of fact, Apple and Samsung have already developed prototypes of the necessary electroencephalographic equipment, which can be placed on top of the head and transmit the electromagnetic waves produced by the brain to prototypes of new smartphones. The smartphones are meant to analyze those waves, work out the intentions of their owners and carry them out. Apple and Samsung expect that this direct connection with the brain will gradually replace computer keyboards, touch screens, mice and voice commands (see this). When such a system is complete, it will be feasible for hackers, government agencies and foreign governments’ agencies to implant thoughts and emotions in people’s minds and “hearts” whenever they are connected to the internet or cell phone systems.

In 2013 scientists in the USA were able to infer people’s political views from their brain activity and distinguish Democrats from Republicans, and in 2016 scientists used transcranial magnetic stimulation to make experimental subjects more positive towards criticism of their country than participants whose brains were unaffected (see this).

Last year the historian Yuval Noah Harari was invited to deliver a speech at the World Economic Forum in Davos. Introducing him, the editor of the British daily Financial Times stressed that it is not usual to invite a historian to speak to the world’s most important economists and politicians. In his speech, Yuval Noah Harari warned against the rise of a new totalitarianism based on access to the human brain. He said:

“Once we have algorithms that can understand you better than you understand yourself, they could predict my desires, manipulate my feelings and even make decisions on my behalf. And if we are not careful the outcome can be the rise of digital dictatorships. In the 21st century we may be enslaved under digital dictatorships”

In a similar way, Poppy Crum, a Stanford University researcher in neuroscience and chief scientist at Dolby Labs, warned at a conference in Las Vegas:

“Your devices will know more about you than you will. I believe we need to think about how [this data] could be used“.

In April 2017, Marcello Ienca, a neuroethicist at the University of Basel, and Roberto Andorno, a human rights lawyer at the University of Zurich, published the article “Toward new human rights in the age of neuroscience and neurotechnology” in the journal Life Sciences, Society and Policy, in which they called for legislation to protect the human right to freedom, and other human rights, from the abuse of technologies that open access to the human brain. In the article they wrote that “the mind is a kind of last refuge of personal freedom and self-determination” and that “at present, no specific legal or technical safeguard protects brain data from being subject to the same data-mining and privacy intruding measures as other types of information”. Among the world’s media, only the British newspaper The Guardian wrote about their proposal (see this). This suggests that in today’s democratic world there is no political will to forbid the remote control of human thoughts and feelings, even though such a prospect violates elementary principles of democracy.

In 2016 and 2017, 10 European organizations tried to convince the European Parliament and the European Commission to enact legislation banning the remote control of the activity of the human nervous system, since pulsed microwaves can already be used to manipulate the human nervous system at a distance (see this). Then, in 2017, 19 organizations from around the world addressed the G20 meeting with the same proposal. They received no positive response to their efforts.

To achieve a ban on the use of remote mind control technologies it is necessary to work out an international agreement. In the past century the USA and Russia built systems (HAARP and Sura) capable of producing, by manipulation of the ionosphere, extra-long electromagnetic waves at frequencies corresponding to the frequencies of the activity of the human nervous system, and in this way of influencing the brain activity of the populations of vast areas of this planet (see this, “Psychoelectronic Threat to Democracy”). At the beginning of this year China announced the construction of a similar, more advanced system. The Chinese daily the South China Morning Post admitted in an article that the system could be used to control the activity of the human nervous system.

Instead of keeping these weapons of mass destruction classified, politicians should make an effort to create a more democratic system of international politics to replace the current system of struggle for military power. Only in this way could the conditions be created for a ban on the use of these technologies. If this does not happen, in a few years there will be no chance to preserve democracy.

By Mojmir Babacek

Mojmir Babacek is the founder of the International Movement for the Ban of the Manipulation of the Human Nervous System by Technical Means. He is the author of numerous articles on the issue of mind manipulation.

Nano-Brain-Implant Technologies and Artificial Intelligence, reported more than six years ago…

Magnus Olsson: Nano-Brain-Implant Technologies and Artificial Intelligence

Magnus begins his speech by telling the audience “Welcome to the Future”, and it is a very good way to start what he is going to say next. He also chooses to quote Gerald McGuire and Ellen McGee, who have several times published scientific papers calling for some type of regulation of implantable devices. Even though such devices have been under development since the 1940s-1960s, and even though they are such a huge area of research right now, as we speak, if you ever mention them in health care, the staff will claim that they don’t even exist. No physical examination is usually made, and there is no explanation for why victims are in so much pain in very specific areas of their bodies, and more.

Magnus has researched all aspects of the supercomputer systems based on transmissions from implants in the human body. He elaborates on the artificial intelligence research being done today and what it will mean for humanity in the future. He understands that this technology can be used in good ways but, if unregulated, it can unfortunately lead to a real Orwellian “thought police” state.


He explores the possibility of using different avatars, or agents, to assist people in their daily lives, and the development of virtual worlds that people can enter as a third type of reality, apart from the waking state and the dream state. He talks about the NSA’s supercomputer called “Mr. Computer”, which has the ability to make its own decisions, and the development of a new quantum computer, which is supposed to “marry” the old-fashioned Mr. Computer.

As interesting and fascinating as his speech is, it is easy to get lost for a while in the new emerging world view that Magnus creates. It is tempered by the experiences he has had: the immense 24/7 torture, the lack of privacy, the lost freedom of the mind and the necessity to cope with something that no human being should have to cope with: the most grotesque aspects of life.




Magnus Olsson used to be a very successful businessman. Not only is Magnus highly educated, but he also had a very successful career: as an entrepreneur, stockbroker and businessman.

The Only Thing That Helped Magnus Olsson

MINI TESLA GENERATORS AS A FRONTIER OF QUANTUM GENETICS

Go To The Web Page:

https://www.zharp.net/

Mind Control – Remote Neural Monitoring: Daniel Estulin and Magnus Olsson on Russia Today

This show, with the original title “Control mental. El sueño dorado de los dueños del mundo” (Mind control: the golden dream of the world’s masters), broadcast to some 34 million people, was one of the biggest victories so far for victims of implant technologies. It came about thanks to Magnus Olsson who, despite being victimized himself, worked hard for several years to expose one of the biggest human rights abuses of our times: connecting people, against their will and without their knowledge, to computers via implants of the size of a few nanometers, leading to a complete destruction of not only their lives and health, but also their personalities and identities.

Very few people are aware of the actual link between neuroscience, cybernetics, artificial intelligence, neuro-chips, transhumanism, cyborg science, robotics, somatic surveillance, behavior control, the thought police and human enhancement.

They all go hand in hand, and never in our history before, has this issue been as important as it is now.

One reason is that this technology, which began to be developed in the early 1950s, is by now very advanced, yet the public is unaware of it and it goes completely unregulated. There is also a complete amnesia about its early development, as Lars Drudgaard of ICAACT mentioned in one of his interviews last year. The CIA funded experiments on people without consent through leading universities and by hiring prominent neuroscientists of that time. Since the 1950s these experiments have been brutal, destroying every aspect of a person’s life, while hiding behind curtains of national security and secrecy, but also behind psychiatric diagnoses.


The second is that its dark side (mind reading, thought police, surveillance, pre-crime, behavior modification, control of citizens’ behavior, tastes, dreams, feelings and wishes, identities and personalities, not to mention the ability to torture and kill anyone from a distance) is completely ignored. All the important ethical issues dealing with the most special aspects of being a free human being living a full human life are completely dismissed. The praise of the machine in these discourses, which deal not only with transhumanist ideals but also with today’s neuroscience, has a cost, and that is complete disrespect, disdain and underestimation of human beings, at least when it comes to their bodies, abilities and biological functions. The brain, though, is seen as the only valuable thing, not just because of its complexity and mysteries, but also because it can create consciousness and awareness. We are prone to diseases, we die, we make irrational decisions, we are inconsistent, and we need someone to look up to. In a Swedish radio interview on “Filosofiska rummet” entitled “Me and my new brain” (Jag och min nya hjärna), neuroscientist Martin Ingvar referred to the human body as a “bad frame for the brain”. Questions about individual free will and personal identity were discussed, and Martin Ingvar’s point of view was very much in line with that of José Delgado some 60 years ago, with its buried history of mind control: we do not really have any choice, we do not really have free will or, for that matter, any consistent personality. This would be reason enough to change humans into whatever someone else wishes. For example, an elite.


Another reason why this issue of brain implants is important is, of course, the fact that both the US and the EU pour billions of dollars and euros into brain research every single year, research focused not only on understanding the brain, but also very much on merging human beings with machines, on using neuro-implants to correct behavior and enhance intelligence, and on creating robots and other machines that think and make autonomous intelligent decisions, just as humans do.

Ray Kurzweil, whose predictions about future technological developments have been correct at least until now, claims that within 20 years implant technology will have advanced so far that humanity will be completely transformed by it. We cannot know right now whether his prediction is right or wrong, but we have the right to decide on the kind of future we want. I do not know if eradicating humanity as we know it is the best future or the only alternative. Today, we might still have a choice.

Something to think about: Can you research the depths of the human brain on mice?


Elon Musk and Mark Zuckerberg have been developing their own synthetic telepathy startups

This invention we give away for free to anyone who wants to build an AI assistant startup:

(Read the warning at the end of this post)

It should be possible to build an interface for telepathy/silent communication with an AI assistant in a smartphone using a neurophone sensor:
https://youtu.be/U_QxkirKW74

My suggestion is to use the Touch ID sensor for communication with the AI.
When you touch the sensor, you hear the assistant through your skin:
https://www.lifewire.com/sensors-that-make-iphone-so-cool-2…

And an interface based on this information for speaking with the assistant:
The Audeo is a sensor/device which detects activity in the larynx (aka the voice box) through EEG (electroencephalography). The Audeo is unique in its use of EEG in that it detects and analyzes signals outside the brain on their path to the larynx. The neurological signals/data are then encrypted and transmitted to a computer to be processed using the developers’ software (which can be seen in use in Kimberly Beals’ video). Once analyzed and processed, the data can then be rendered using a computer speech generator.
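As a rough illustration of the pipeline described above (signal capture, encryption, transmission to a computer, decoding, and speech output), here is a minimal Python sketch. Everything in it is hypothetical: capture_larynx_signal stands in for whatever the Audeo hardware actually provides, the decoder is a placeholder, and Fernet symmetric encryption from the third-party cryptography package is used purely as an example of the “encrypted and transmitted” step.

```python
from cryptography.fernet import Fernet  # pip install cryptography

def capture_larynx_signal() -> bytes:
    """Hypothetical stand-in for the sensor: returns raw signal bytes."""
    return b"\x01\x02\x03\x04"  # placeholder data, not a real recording

def decode_to_text(raw_signal: bytes) -> str:
    """Hypothetical decoder: a real system would run a trained speech-decoding model."""
    return "hello"

# 1. Capture the signal at the sensor.
raw = capture_larynx_signal()

# 2. Encrypt it before transmission (symmetric key shared with the computer).
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(raw)

# 3. "Transmit" and decrypt on the receiving computer.
received = Fernet(key).decrypt(ciphertext)

# 4. Decode and hand the text to a speech generator (here, just print it).
print(decode_to_text(received))
```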

Possibilities

The Audeo is a great sensor/device for detecting imagined speech. It has a vast range of possible uses, especially in our areas of study. Here are some videos that show what the Audeo can be used for:
http://nerve.boards.net/…/79/audeo-ambient-using-voice-input
As part of a $6.3 million Army initiative to invent devices for telepathic communication, Gerwin Schalk, supported by a $2.2 million grant, found that it is possible to use ECoG (https://en.m.wikipedia.org/wiki/Electrocorticography) signals to discriminate the vowels and consonants embedded in spoken and in imagined words (a generic feature-extraction sketch follows the links below).
http://m.phys.org/…/2008-08-scientists-synthetic-telepathy.…

The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.
https://books.google.se/books…

http://scholar.google.se/scholar…
Research into synthetic telepathy using subvocalization https://en.m.wikipedia.org/wiki/Subvocalization is taking place at the University of California, Irvine under lead scientist Mike D’Zmura. The first such communication took place in the 1960s using EEG to create Morse code using brain alpha waves.

https://en.m.wikipedia.org/wiki/Subvocal_recognition

https://en.m.wikipedia.org/wiki/Throat_microphone

https://en.m.wikipedia.org/wiki/Silent_speech_interface
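The Schalk result above is essentially a signal-classification problem: extract frequency-band features from short windows of neural recordings and ask whether they separate vowel from consonant segments, or spoken from imagined words. The sketch below shows only the generic band-power feature-extraction step on a synthetic signal; it is a hypothetical illustration in Python, not the method used in any of the studies linked above.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, band: tuple[float, float]) -> float:
    """Average spectral power of `signal` within `band` (Hz), via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(spectrum[mask].mean())

# Synthetic stand-in for one second of a neural recording (ECoG is typically
# sampled around 1 kHz; this is random noise, used only to exercise the code).
fs = 1000.0
rng = np.random.default_rng(0)
window = rng.standard_normal(int(fs))

# Classic analysis bands; a classifier would be trained on vectors like this,
# one per labelled window (vowel vs. consonant, spoken vs. imagined, etc.).
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "high-gamma": (70, 170)}
features = {name: band_power(window, fs, b) for name, b in bands.items()}
print(features)
```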

Why do Magnus Olsson and Leo Angelsleva give you this opportunity for free?

Because Facebook can use you and your data in research for free, and I think someone other than Mark Zuckerberg should get this opportunity:
https://m.huffpost.com/us/entry/5551965

Neurotechnology, Elon Musk and the goal of human enhancement

Brain-computer interfaces could change the way people think, soldiers fight and Alzheimer’s is treated. But are we in control of the ethical ramifications?

Extending the human mind … Elon Musk.

At the World Government Summit in Dubai in February, Tesla and SpaceX chief executive Elon Musk said that people would need to become cyborgs to be relevant in an artificial intelligence age. He said that a “merger of biological intelligence and machine intelligence” would be necessary to ensure we stay economically valuable.

Soon afterwards, the serial entrepreneur created Neuralink, with the intention of connecting computers directly to human brains. He wants to do this using “neural lace” technology – implanting tiny electrodes into the brain for direct computing capabilities.

Brain-computer interfaces (BCI) aren’t a new idea. Various forms of BCI are already available, from ones that sit on top of your head and measure brain signals to devices that are implanted into your brain tissue.

They are mainly one-directional, with the most common uses enabling motor control and communication tools for people with brain injuries. In March, a man who was paralysed from below the neck moved his hand using the power of concentration.

Cognitive enhancement

A researcher uses a brain-computer interface helmet at the Centre National de la Recherche Scientifique, Grenoble. Photograph: Jean-Pierre Clatot/AFP/Getty Images

But Musk’s plans go beyond this: he wants to use BCIs in a bi-directional capacity, so that plugging in could make us smarter, improve our memory, help with decision-making and eventually provide an extension of the human mind.

“Musk’s goals of cognitive enhancement relate to healthy or able-bodied subjects, because he is afraid of AI and that computers will ultimately become more intelligent than the humans who made the computers,” explains BCI expert Professor Pedram Mohseni of Case Western Reserve University, Ohio, who sold the rights to the name Neuralink to Musk.

“He wants to directly tap into the brain to read out thoughts, effectively bypassing low-bandwidth mechanisms such as speaking or texting to convey the thoughts. This is pie-in-the-sky stuff, but Musk has the credibility to talk about these things,” he adds.

Musk is not alone in believing that “neurotechnology” could be the next big thing. Silicon Valley is abuzz with similar projects. Bryan Johnson, for example, has also been testing “neural lace”. He founded Kernel, a startup to enhance human intelligence by developing brain implants linking people’s thoughts to computers.

In 2015, Facebook CEO Mark Zuckerberg said that people will one day be able to share “full sensory and emotional experiences” online – not just photos and videos. Facebook has been hiring neuroscientists for an undisclosed project at its secretive hardware division, Building 8.

However, it is unlikely this technology will be available anytime soon, and some of the more ambitious projects may be unrealistic, according to Mohseni.

Pie-in-the-sky

A brain scan of a patient with Alzheimer’s. Photograph: BSIP/UIG via Getty Images

“In my opinion, we are at least 10 to 15 years away from the cognitive enhancement goals in healthy, able-bodied subjects. It certainly appears to be, from the more immediate goals of Neuralink, that the neurotechnology focus will continue to be on patients with various neurological injuries or diseases,” he says.

Mohseni says one of the best current examples of cognitive enhancement is the work of Professor Ted Berger, of the University of Southern California, who has been working on a memory prosthesis to replace the damaged parts of the hippocampus in patients who have lost their memory due to, for example, Alzheimer’s disease.

“In this case, a computer is to be implanted in the brain that acts similarly to the biological hippocampus from an input and output perspective,” he says. “Berger has results from both rodents and non-human primate models, as well as preliminary results in several human subjects.”

Mohseni adds: “The [US government’s] Defense Advanced Research Projects Agency (DARPA) currently has a programme that aims to do cognitive enhancement in their soldiers – ie enhance learning of a wide range of cognitive skills, through various mechanisms of peripheral nerve stimulation that facilitate and encourage neural plasticity in the brain. This would be another example of cognitive enhancement in able-bodied subjects, but it is quite pie-in-the-sky, which is exactly how DARPA operates.”

Understanding the brain

Heading for cognitive enhancement? … US soldiers in Bagram, Afghanistan. Photograph: Aaron Favila/AP

In the UK, research is ongoing. Davide Valeriani, senior research officer at University of Essex’s BCI-NE Lab, is using an electroencephalogram (EEG)-based BCI to tap into the unconscious minds of people as they make decisions.

“Everyone who makes decisions wears the EEG cap, which is part of a BCI, a tool to help measure EEG activity … it measures electrical activity to gather patterns associated with confident or non-confident decisions,” says Valeriani. “We train the BCI – the computer basically – by asking people to make decisions without knowing the answer and then tell the machine, ‘Look, in this case we know the decision made by the user is correct, so associate those patterns to confident decisions’ – as we know that confidence is related to probability of being correct. So during training the machine knows which answers were correct and which one were not. The user doesn’t know all the time.”
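A minimal sketch of the kind of training loop Valeriani describes might look like the following, assuming the EEG has already been reduced to one feature vector per decision. The data here are synthetic and the choice of a logistic-regression classifier from scikit-learn is an assumption for illustration; the essential point is only that, during training, the labels (correct vs. incorrect decisions, used as a proxy for confidence) are known to the machine but not necessarily to the user.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in: 200 decisions, each summarised by 16 EEG-derived features.
X = rng.standard_normal((200, 16))
# Labels: 1 = the decision was correct (proxy for a confident state), 0 = not.
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on decisions whose correctness is known, as in the calibration phase.
clf = LogisticRegression().fit(X_train, y_train)

# At use time, the model outputs a confidence estimate for each new decision.
confidence = clf.predict_proba(X_test)[:, 1]
print("estimated confidence for first 5 held-out decisions:", confidence[:5].round(2))
```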

Valeriani adds: “I hope more resources will be put into supporting this very promising area of research. BCIs are not only an invaluable tool for people with disabilities, but they could be a fundamental tool for going beyond human limits, hence improving everyone’s life.”

He notes, however, that one of the biggest challenges with this technology is that first we need to better understand how the human brain works before deciding where and how to apply BCI. “This is why many agencies have been investing in basic neuroscience research – for example, the Brain initiative in the US and the Human Brain Project in the EU.”

Whenever there is talk of enhancing humans, moral questions remain – particularly around where the human ends and the machine begins. “In my opinion, one way to overcome these ethical concerns is to let humans decide whether they want to use a BCI to augment their capabilities,” Valeriani says.

“Neuroethicists are working to give advice to policymakers about what should be regulated. I am quite confident that, in the future, we will be more open to the possibility of using BCIs if such systems provide a clear and tangible advantage to our lives.”

Facebook is building brain-computer interfaces


The plan is to eventually build non-implanted devices that can ship at scale. And to tamp down on the inevitable fear this research will inspire, Facebook tells me “This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share by sending them to the speech center of your brain.” Facebook likened it to how you take lots of photos but only share some of them. Even with its device, Facebook says you’ll be able to think freely but only turn some thoughts into text.

Skin-Hearing

Meanwhile, Building 8 is working on a way for humans to hear through their skin. It’s been building prototypes of hardware and software that let your skin mimic the cochlea in your ear that translates sound into specific frequencies for your brain. This technology could let deaf people essentially “hear” by bypassing their ears.

A team of Facebook engineers was shown experimenting with hearing through skin using a system of actuators tuned to 16 frequency bands. A test subject was able to develop a vocabulary of nine words they could hear through their skin.
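The cochlea-mimicking idea described above is, at its core, a filterbank: split incoming audio into a small number of frequency bands and drive one skin actuator per band with that band's energy. The Python sketch below shows this decomposition for 16 bands on a synthetic signal; the band edges, frame length and use of a simple FFT are assumptions for illustration, not details of Building 8's prototype.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz (assumed)
N_BANDS = 16           # one band per skin actuator, as in the described prototype
FRAME = 512            # samples per analysis frame (assumed)

def band_energies(frame: np.ndarray, fs: float, n_bands: int) -> np.ndarray:
    """Split a frame's spectrum into n_bands equal-width bands and return each band's energy."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    edges = np.linspace(0, fs / 2, n_bands + 1)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

# Synthetic test signal: a 500 Hz tone plus noise, standing in for speech.
t = np.arange(FRAME) / SAMPLE_RATE
frame = np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.default_rng(0).standard_normal(FRAME)

# These 16 values would be mapped to actuator drive levels on the skin.
print(band_energies(frame, SAMPLE_RATE, N_BANDS).round(1))
```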

To underscore the gravity of Building 8’s mind-reading technology, Dugan started her talk by saying she’s never seen something as powerful as the smartphone “that didn’t have unintended consequences.” She mentioned that we’d all be better off if we looked up from our phones every so often. But at the same time, she believes technology can foster empathy, education and global community.

Building 8’s Big Reveal

Facebook hired Dugan last year to lead its secretive new Building 8 research lab. She had previously run Google’s Advanced Technology And Products division, and was formerly a head of DARPA.

Facebook built a special Area 404 wing of its Menlo Park headquarters with tons of mechanical engineering equipment to help Dugan’s team quickly prototype new hardware. In December, it signed rapid collaboration deals with Stanford, Harvard, MIT and more to get academia’s assistance.

Yet until now, nobody really knew what Building 8 was…building. Business Insider had reported on Building 8’s job listings and that it might show off news at F8.

According to these job listings, Facebook is looking for a Brain-Computer Interface Engineer “who will be responsible for working on a 2-year B8 project focused on developing advanced BCI technologies.” Responsibilities include “Application of machine learning methods, including encoding and decoding models, to neuroimaging and electrophysiological data.” It’s also looking for a Neural Imaging Engineer who will be “focused on developing novel non-invasive neuroimaging technologies” who will “Design and evaluate novel neural imaging methods based on optical, RF, ultrasound, or other entirely non-invasive approaches.”

Elon Musk has been developing his own startup called Neuralink for creating brain interfaces.

Facebook Building 8 R&D division head Regina Dugan

Facebook has built hardware before to mixed success. It made an Android phone with HTC called the First to host its Facebook Home operating system. That flopped. Since then, Facebook proper has turned its attention away from consumer gadgetry and toward connectivity. It’s built the Terragraph Wi-Fi nodes, Project ARIES antenna, Aquila solar-powered drone and its own connectivity-beaming satellite from its internet access initiative, though that blew up on the launch pad when the SpaceX vehicle carrying it exploded.

Facebook has built and open sourced its Surround 360 camera. As for back-end infrastructure, it’s developed an open-rack network switch called Wedge, the Open Vault for storage, plus sensors for the Telecom Infra Project’s OpenCellular platform. And finally, through its acquisition of Oculus, Facebook has built wired and mobile virtual reality headsets.

Facebook’s Area 404 hardware lab contains tons of mechanical engineering and prototyping equipment

But as Facebook grows, it has the resources and talent to try new approaches in hardware. With over 1.8 billion users connected to just its main Facebook app, the company has a massive funnel of potential guinea pigs for its experiments.

Today’s announcements are naturally unsettling. Hearing about a tiny startup developing these advanced technologies might have conjured images of governments or corporate conglomerates one day reading our minds to detect thought crime, like in 1984. Facebook’s scale makes that future feel more plausible, no matter how much Zuckerberg and Dugan try to position the company as benevolent and compassionate. The more Facebook can do to institute safeguards, independent monitoring, and transparency around how brain-interface technology is built and tested, the more receptive it might find the public.

A week ago Facebook was being criticized as nothing but a Snapchat copycat that had stopped innovating. Today’s demos seemed designed to dismantle that argument and keep top engineering talent knocking on its door.

“Do you want to work for the company who pioneered putting augmented reality dog ears on teens, or the one that pioneered typing with telepathy?” You don’t have to say anything. For Facebook, thinking might be enough.

The MOST IMPORTANT QUESTIONS!

There is no established legal protection for the human subject when researchers use Brain Machine Interface (cybernetic technology) to reverse engineer the human brain.

The progressing neuroscience using brain-machine-interface will enable those in power to push the human mind wide open for inspection.

There is cause for alarm. What kind of privacy safeguards are needed when computers can read your thoughts?

In recent decades, areas of research involving nanotechnology, information technology, biotechnology and neuroscience have emerged, resulting in new products and services.

We are facing an era of synthetic telepathy, with brain-computer-interface and communication technology based on thoughts, not speech.

An appropriate albeit alarming question is: “Do you accept being enmeshed in a computer network and turned into a multimedia module?” Authorities will be able to collect information directly from your brain, without your consent.

This kind of research in bioelectronics has been progressing for half a century.

Brain Machine Interface (Cybernetic technology) can be used to read our minds and to manipulate our sensory perception!

Invited presentation by Magnus Olsson at the 2017 First Annual Unity and Hope Conference

Invited presentation by Magnus Olsson

“Invited presentation by Magnus Olsson, at the 2017 First Annual Unity and Hope Conference” This event was for targeted individuals and those concerned about the growing crimes of electronic harassment.

The conference was held from October 20-22 at the Mass Audubon Blue Hills Trailside Museum: 1904 Canton Ave., Milton, MA 02186, USA. This presentation was co-produced by Mårten Hernebring. The speaker, Magnus Olsson, can be reached at bionicgate@live.se Event Description from the Conference Web Site: “Our goal is to bring together as many support groups, media shows, activism groups, and organizations of targeted individuals, so we can work together and learn from each other and strategize on solutions to bring about change and end the suffering of hundreds of thousands of victims nationwide.

The number of people experiencing electronic harassment and gang-stalking is growing exponentially daily. Our hope is to come together, to build, empower, and educate the community on technology, resources, and support, and as a unified front attempt to educate the public. As a result of this conference, we will be able to strategically fight for freedom and justice for the victims of targeted crimes.

The goal of this conference is to unify all the groups worldwide and provide a knowledge and understanding of the program and the technology. We also strongly encourage targeted individuals to bring friends and family for support and to educate the ones around them on what invisible crimes are being committed against them.”

Magnus Olsson: Artificial crystals in Magnus’ blood are Transhumanist remote neural control weapon


PART I: Magnus Olsson reveals artificial crystals in blood, spread via smart dust, are Transhumanist remote neural control weapon
“Magnus Olsson: it gets in your blood”


THE Reality of Magnus Olsson’s blood – doctor’s report


During my stay in Poland, in Lublin, I met a specialist doctor who took a sample of blood from my finger, and we looked at it together through an atomic microscope.

I will not describe now everything that is wrong with my blood, but I want you to notice the crystal, which was made artificially.

What this is doing to me:

  1. makes my blood not circulate in a natural manner (the blood is too dense)
  2. damages DNA
  3. affects memory (memory loss)
  4. you cannot walk normally or run
  5. destroys your skin (you may develop albinism)
  6. difficulty sleeping
  7. easy access to the brain
  8. lack of oxygen

and many other symptoms…

There are millions of these in my blood. Every victim should have this check done with a doctor who knows about People Online and who will examine this blood with you through an atomic microscope.

This, together with the waves (RF, ELF, SCALAR, NEUTRINOS, QUANTUM SPECTRA, LASER, SONAR and others), is devastating.

Magnus Olsson’s blood

The most important thing for us is to protect ourselves and then to heal. I am now working on a breathtaking solution for all of us, and I hope we will be free pretty soon. Then we can recover and heal.

Love and Light

Your Magnus


Crimes against humanity are certain acts that are committed as part of a widespread or systematic attack directed against any civilian population or an identifiable part of a population:
https://en.m.wikipedia.org/wiki/Crimes_against_humanity
The Pentagon’s blue-sky research arm wants to trick out troops’ brains, from the areas that regulate alertness and cognition to pain treatment and psychiatric well-being. And the scientists want to do it all from the outside in:
http://google.co.in/patents/US3951134

https://en.m.wikipedia.org/wiki/Rayleigh_wave

https://en.m.wikipedia.org/wiki/High_Frequency_Active_Auroral_Research_Program
What if a machine could read your mind?
https://drive.google.com/file/d/0B4iZUaEgNfFfMnpyem5fX1BPTTg/view
While most developmental robotics projects strongly interact with theories of animal and human development, the degrees of similarities and inspiration between identified biological mechanisms and their counterpart in robots, as well as the abstraction levels of modeling, may vary a lot. https://en.m.wikipedia.org/wiki/Neuron_(software)
While some projects aim at modeling precisely both the function and biological implementation (neural or morphological models), such as in neurorobotics www.humanbrainproject.eu, some other projects only focus on functional modeling of the mechanisms and constraints described above, and might for example reuse in their architectures techniques coming from applied mathematics or engineering fields.
https://en.m.wikipedia.org/wiki/Developmental_robotics

https://m.facebook.com/Artificial-Brain-Sweden-1014306015294202/
What are the ethical issues involved in simulating a human brain and in technology derived from human brain simulation?
Building computer models of the brain may challenge our concepts of personhood, free will and personal responsibility, and the nature of consciousness.
(Delgado stated that “brain transmitters can remain in a person’s head for life. The energy to activate the brain transmitter is transmitted by way of radio frequencies.”
https://en.m.wikipedia.org/wiki/Jos%C3%A9_Manuel_Rodriguez_Delgado)

https://drive.google.com/file/d/0B4iZUaEgNfFfZl9iU1BpbzNJTW8/view
In medicine, brain simulation could make it easier to communicate with people who cannot speak (e.g. people with severe disabilities, people in a vegetative state or with locked-in syndrome) or to enhance cognitive function in people with cognitive disabilities (e.g. dementia, trauma and stroke victims, etc.).
https://en.m.wikipedia.org/wiki/Wetware_(brain)
As in other fields of science, it is also possible that new knowledge about the brain will be abused – deliberately, for example to create new weapons – but also involuntarily, because society does not realize the power and consequences of new technologies.
https://m.facebook.com/Martin-Ingvar-KI-1528410917453814/
For instance, it may be possible in the future to use knowledge about the brain to predict and modify individual behaviour, or even to irreversibly modify behaviour through electrical stimulation of the brain, pharmacology or neurosurgery. In cases of intractable mental disease, this may be desirable, but in other cases, the costs and benefits will be debatable. One example of debate is whether society should allow cognitive enhancement in healthy people.
Similar considerations apply to technology. Future computers that implement the same principles of computation and cognitive architectures as the brain have enormous potential to improve industrial productivity and offer new services to citizens.
However, they could also be used to implement new systems of mass surveillance and new weaponry.
https://m.youtube.com/watch?v=o9bd-B2dqCM&feature=youtu.be
If such systems came into widespread use they would undoubtedly have a huge impact on patterns of daily life and employment – this could be both beneficial and detrimental
https://www.humanbrainproject.eu/faq/ethics

This is what it’s all about:
https://m.youtube.com/watch?v=01hbkh4hXEk&feature=youtu.be
The development of your Future connected to a mindreading machine:
https://www.rt.com/usa/265029-kurzweil-google-hybrid-brain/

http://www.ted.com/talks/ray_kurzweil_get_ready_for_hybrid_thinking
And the developing of Future Surveillance:
https://m.youtube.com/watch?list=PL9DCCHoTYZ8KaMFrLrUzNblMsIECi9vQ0&v=30seQeBI-Tc

https://m.youtube.com/watch?feature=youtu.be&v=pW1HACMaOME
Imagine what the U.S. could do if it could master the nanoparticles that are distributed in vaccines to humans worldwide and use them for remote neural monitoring:
https://mind-computer.com/2012/12/20/remote-neural-monitoring-a-technology-used-for-controlling-the-human-brain/
Nanoparticles BMI https://en.m.wikipedia.org/wiki/Brain%E2%80%93computer_interface in vaccine:
Nanofluids also have special acoustical properties and in ultrasonic fields display additional shear-wave reconversion of an incident compressional wave; the effect becomes more pronounced as concentration increases.
https://en.m.wikipedia.org/wiki/Vaccine

https://en.m.wikipedia.org/wiki/Aluminium

http://sharpbrains.com/blog/2015/03/19/non-invasive-brain-stimulation-meets-nanotechnology/

http://globalbiodefense.com/2012/07/28/darpa-program-hits-milestone-in-plant-based-vaccines-for-pandemics/

https://en.m.wikipedia.org/wiki/Nanofluid
This speaks volumes about the hidden impact certain vaccines may have on your body and your brain in particular…
http://articles.mercola.com/sites/articles/archive/2011/08/06/vaccine-increases-narcolepsy-by-660-percent.aspx

https://en.m.wikipedia.org/wiki/Jos%C3%A9_Manuel_Rodriguez_Delgado

https://www.google.com/patents/DE10253433A1?cl=en&dq=inassignee%3A

https://archive.org/details/pubmed-PMC4086297

https://en.m.wikipedia.org/wiki/Neurorobotics

https://drive.google.com/file/d/0B4iZUaEgNfFfell0YWFFMHd3RnM/view

https://en.m.wikipedia.org/wiki/Cybernetics

https://en.m.wikipedia.org/wiki/Target_Motion_Analysis

http://www.raytheon.com/
In analysis such as computational fluid dynamics (CFD), nanofluids can be assumed to be single phase fluids.
http://www.datacenterdynamics.com/news/sgi-gives-us-military-the-fastest-supercomputer/91007.fullarticle
The Worldwide Security Grid: MATRIX. Smartdust is a system of many tiny microelectromechanical systems (MEMS), such as sensors, robots, or other devices, that can detect, for example, light, temperature, vibration, magnetism, or chemicals. They are usually operated wirelessly on a computer network and are distributed over some area to perform tasks, usually sensing through radio-frequency identification.
http://www.foi.se/en/Customer–Partners/Projects/Edge1/Edge/
Without an antenna of much greater size, the range of tiny smartdust communication devices is limited to a few millimeters, and they may be vulnerable to electromagnetic disablement or destruction by microwave exposure.
https://en.m.wikipedia.org/wiki/Smartdust
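That millimetre-scale range can be illustrated with a simple free-space (Friis) link budget. The transmit power, antenna gains, carrier frequency and receiver sensitivity in this sketch are illustrative assumptions for an electrically tiny, ultra-low-power mote, not figures taken from this post.

# Minimal sketch: Friis free-space link budget showing why tiny, low-power
# radios with electrically small antennas have very short range.
import math

def max_range_m(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, sensitivity_dbm):
    """Largest distance (metres) at which the received power still meets
    the receiver sensitivity, assuming ideal free-space propagation."""
    wavelength_m = 3.0e8 / freq_hz
    link_margin_db = p_tx_dbm + g_tx_dbi + g_rx_dbi - sensitivity_dbm
    return wavelength_m / (4 * math.pi) * 10 ** (link_margin_db / 20)

# Assumed mote: -40 dBm transmit power, very inefficient -25 dBi antennas
# at both ends, 2.4 GHz carrier, -75 dBm receiver sensitivity.
print(max_range_m(-40, -25, -25, 2.4e9, -75))   # ~0.0018 m, i.e. about 2 mm

With a larger, efficient antenna (gains closer to 0 dBi) the same budget stretches to centimetres or metres, which is exactly the trade-off the line above describes.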
The Grid Security Infrastructure (GSI), formerly called the Globus Security Infrastructure (https://en.m.wikipedia.org/wiki/Grid_Security_Infrastructure), is a specification for secret, tamper-proof, delegatable communication between software in a grid computing environment. Secure, authenticatable communication is enabled using asymmetric encryption.
http://ieeexplore.ieee.org/xpl/login.jsp?reload=true&tp&arnumber=6028683&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6028683
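As a minimal sketch of the asymmetric-key principle mentioned above (only the principle: real GSI uses X.509 proxy certificates and TLS, which this does not reproduce), a sender can sign a message with a private key and any receiver can verify it with the matching public key, here using the widely available Python cryptography package.

# Minimal sketch of authenticatable messaging with asymmetric keys.
# Illustrates the principle only; it is not the actual GSI protocol.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Sender generates a key pair; the public key is shared with receivers.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"job-submission: run simulation on a grid node"

# Sign with the private key (proves origin and protects integrity).
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Receiver verifies with the public key; raises InvalidSignature on tampering.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")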
In software engineering, the terms “front end” and “back end” refer to the separation of concerns between a presentation layer and a data access layer, respectively.
The front end is an interface between the user and the back end. The front and back ends may be distributed amongst one or more systems.
In software architecture, there may be many layers between the hardware and the end user. Each can be spoken of as having a front end and a back end. The front end is an abstraction, simplifying the underlying component by providing a user-friendly interface.
https://en.m.wikipedia.org/wiki/Pervasive_game
In software design, for example, the model-view-controller architecture provides front and back ends for the database, the user and the data processing components. The separation of software systems into front and back ends simplifies development and separates maintenance. A rule of thumb is that the front (or “client”) side is any component manipulated by the user. The server-side (or “back end”) code resides on the server.
https://en.m.wikipedia.org/wiki/Front_and_back_ends
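To make the model-view-controller description above concrete, here is a tiny, self-contained Python sketch; the class names and the list example are illustrative assumptions, not code from any project linked in this post.

# Tiny MVC sketch: the view is the "front end" the user sees, the model is
# the "back end" data layer, and the controller mediates between them.

class Model:
    """Back end: owns the data and how it is stored."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def all(self):
        return list(self._items)

class View:
    """Front end: the only part the user directly interacts with."""
    def render(self, items):
        for i, item in enumerate(items, 1):
            print(f"{i}. {item}")

class Controller:
    """Mediates between user actions (front) and data access (back)."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_item(self, item):
        self.model.add(item)

    def show_items(self):
        self.view.render(self.model.all())

controller = Controller(Model(), View())
controller.add_item("front end: presentation layer")
controller.add_item("back end: data access layer")
controller.show_items()

Because the view never touches the data store directly, either side can be replaced or maintained separately, which is the point of the separation described above.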

https://m.youtube.com/watch?v=o9bd-B2dqCM&feature=youtu.be
The Utah Data Center, also known as the Intelligence Community Comprehensive National Cybersecurity Initiative Data Center, is a data storage facility for the United States Intelligence Community that is designed to store data estimated to be on the order of exabytes or larger.
Its purpose is to support the Comprehensive National Cybersecurity Initiative (CNCI), though its precise mission is classified.
http://www.defense.gov/News/Speeches/Speech-View/Article/606635

https://www.iarpa.gov/index.php/research-programs/neuroscience-programs-at-iarpa

https://en.m.wikipedia.org/wiki/Wetware_(brain)

http://www.datacenterdynamics.com/news/sgi-gives-us-military-the-fastest-supercomputer/91007.fullarticle

http://bioethics.gov/node/4704

https://m.facebook.com/Brain-Print-1585272778401091/
The National Security Agency (NSA) leads operations at the facility as the executive agent for the Director of National Intelligence.
It is located at Camp Williams near Bluffdale, Utah, between Utah Lake and Great Salt Lake and was completed in May 2014 at a cost of $1.5 billion.
https://en.m.wikipedia.org/wiki/Utah_Data_Center
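For a sense of scale, “on the order of exabytes” can be turned into rough back-of-the-envelope numbers; the 10 TB drive size below is purely an assumption for illustration.

# Back-of-the-envelope scale of one exabyte (decimal units). The drive
# capacity is an assumed figure for illustration only.
EXABYTE_BYTES = 10 ** 18
DRIVE_BYTES = 10 * 10 ** 12          # assume 10 TB per drive

drives_per_exabyte = EXABYTE_BYTES // DRIVE_BYTES
print(f"{drives_per_exabyte:,} x 10 TB drives per exabyte")   # 100,000 drives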

Any Questions?

http://www.lifecoachcode.com/2016/05/28/the-father-of-fractal-geometry-reveals-the-pattern/

https://en.m.wikipedia.org/wiki/List_of_artificial_intelligence_projects

https://m.facebook.com/Artificial-Genocide-1716276305283944/

https://m.facebook.com/Doomsday-Computers-AGI-ASI-516866275144461/

https://m.youtube.com/watch?list=PL9DCCHoTYZ8KaMFrLrUzNblMsIECi9vQ0&v=g1lVBNV6ztw

#NBIC #DualUseTechnology #HBP #LofarLois #Ericsson #IBM #SGI #Raytheon #NSA #MilitaryNanoTechnology #BrainInitiative #NeuroEthics

In the computer software world, open source software concerns the creation of software to which access to the underlying source code is freely available.
This permits use, study, and modification without restriction.
In computer security, the debate is ongoing as to the relative merits of the full disclosure of security vulnerabilities, versus a security-by-obscurity approach.
There is a different (perhaps almost opposite) sense of transparency in human-computer interaction, whereby a system, after a change, adheres to its previous external interface as much as possible while changing its internal behaviour.
That is, a change in a system is transparent to its users if the change is unnoticeable to them.
https://en.m.wikipedia.org/wiki/Transparency_(behavior)
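A small, hypothetical Python example of that second sense of transparency: the function’s internals are rewritten, but its external interface and observable behaviour stay the same, so the change is unnoticeable to callers.

# Hypothetical illustration: the public interface (name, parameters, return
# value) is preserved while the internal implementation changes.
import re

# Version 1: naive implementation.
def word_count(text: str) -> int:
    return len(text.split())

# Version 2: internals rewritten; same signature, same observable behaviour,
# so the change is transparent to every caller.
def word_count(text: str) -> int:
    return len(re.findall(r"\S+", text))

assert word_count("all humans are computers") == 4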

All Humans are Computers in Ubiquitous Computing, monitored by Artificial Intelligence / Ambient Intelligence.
