Cyber-Cognitive-Warfare


By: François du Cluzel

Executive Summary ……………………………………………………………………………………4
Introduction ……………………………………………………………………………………………….5
The advent of Cognitive Warfare ……………………………………………………………….6
From Information Warfare to Cognitive Warfare …………………………………………….6
Hacking the individual ………………………………………………………………………………………….7
Trust is the target …………………………………………………………………………………………………..8
Cognitive Warfare, a participatory propaganda ………………………………………………8
Behavioural economy ……………………………………………………………………………………………9
Cyberpsychology …………………………………………………………………………………………………11
The centrality of the human brain ……………………………………………………………..12
Understanding the brain is a key challenge for the future …………………………..12
The vulnerabilities of the human brain ……………………………………………………………..13
The role of emotions …………………………………………………………………………………………….15
The battle for attention ………………………………………………………………………………………..15
Long-term impacts of technology on the brain ………………………………………………16
The promises of neurosciences…………………………………………………………………………. 17
The militarisation of brain science …………………………………………………………….19
Progress and Viability of Neuroscience and Technology (NeuroS/T) …………19
Military and Intelligence Use of NeuroS/T ……………………………………………………….20
Direct Weaponisation of NeuroS/T ……………………………………………………………………21
Neurodata ………………………………………………………………………………………………………………22
The neurobioeconomy …………………………………………………………………………………………23
Towards a new operational domain …………………………………………………………..25
Russian and Chinese Cognitive Warfare Definition……………………………………….. 26
It’s about Humans …………………………………………………………………………………………………28
Recommendations for NATO ………………………………………………………………………………32
Definition of the Human Domain ………………………………………………………………………32
Impact on Warfare Development ……………………………………………………………………….34
Conclusion ………………………………………………………………………………………………….36
Bibliography and Sources …………………………………………………………………………..37
Annex 1 ………………………………………………………………………………………………………38
Nation State Case Study 1: The weaponisation of neurosciences in China …38
Annex 2 ………………………………………………………………………………………………………41
Nation State Case Study 2: The Russian National Technology Initiative ………41

Executive Summary

As written in the Warfighting 2040 Paper, the nature of warfare has changed. The majority of current conflicts remain below the threshold of the traditionally accepted definition of warfare, but new forms of warfare have emerged such as Cognitive Warfare (CW), while the human mind is now being considered as a new domain of war.
With the increasing role of technology and information overload, individual cognitive abilities will no longer be sufficient to ensure an informed and timely decision-making, leading to
the new concept of Cognitive Warfare, which has become a recurring term in military termi- nology in recent years.
Cognitive Warfare poses an insidious challenge. It disrupts ordinary understandings of, and reactions to, events in a gradual and subtle way, but with significant harmful effects over time.
Cognitive warfare has universal reach, from the individual to states and multinational organisations. It feeds on the techniques of disinformation and propaganda aimed at psychologically exhausting the recipients of information. Everyone contributes to it, to varying degrees, consciously or subconsciously, and it provides invaluable knowledge on society, especially open societies such as those in the West. This knowledge can then be easily weaponised. It offers NATO’s adversaries a means of bypassing the traditional battlefield with significant strategic results, which may be utilised to radically transform Western societies.
The instruments of information warfare, together with the addition of “neuro-weapons”, add to future technological perspectives, suggesting that the cognitive field will be one of tomorrow’s battlefields. This perspective is further strengthened by the rapid advances of NBICs (Nanotechnology, Biotechnology, Information Technology and Cognitive Sciences) and in the understanding of the brain. NATO’s adversaries are already investing heavily in these new technologies.
NATO needs to anticipate advances in these technologies by raising awareness of the true potential of CW. Whatever the nature and object of warfare, it always comes down to a clash of human wills, and therefore what defines victory will be the ability to impose a desired behaviour on a chosen audience. Actions undertaken in the five domains – air, land, sea, space
and cyber – are all executed in order to have an effect on the human domain. It is therefore
time for NATO to recognise the renewed importance of the sixth operational domain, namely
the Human Domain.
Innovation Hub – Nov 2020 Page 4 of 45
Introduction
Individual and organisational cognitive capabilities will be of paramount importance because
of the speed and volume of information available in the modern battlespace. If modern technology holds the promise of improving human cognitive performance, it also holds the seeds
of serious threats for military organisations.
Because organisations are made up of human beings, human limitations and preferences ultimately affect organisational behaviour and decision-making processes. Military organisations are subject to the problem of limited rationality, but this constraint is often overlooked in practice.[1]
In an environment permeated with technology and overloaded with information, managing
the cognitive abilities within military organisations will be key, while developing capabilities
to harm the cognitive abilities of opponents will be a necessity. In other words, NATO will need the ability to safeguard its own decision-making process and to disrupt that of its adversaries.
This study intends to address the following three objectives:
• Improve awareness of Cognitive Warfare, including a better understanding of the risks and opportunities of new Cognitive / Human Mind technologies;
• Provide ‘out-of-the-box’ insight on Cognitive Warfare;
• Provide strategic-level arguments to SACT on whether or not to recommend Cognitive / Human Mind as an operational domain.
The advent of Cognitive Warfare
From Information Warfare to Cognitive Warfare
Information warfare (IW) is the type of warfare most closely related to cognitive warfare, and thus the most easily conflated with it. However, there are key distinctions that make cognitive warfare unique enough to be addressed in its own right. As a concept, IW was first coined and developed under US military doctrine, and has subsequently been adopted in different forms by several nations.
As former US Navy Commander Stuart Green described it, “Information operations, the closest existing American doctrinal concept for cognitive warfare, consists of five ‘core capabilities’, or elements. These include electronic warfare, computer network operations, PsyOps, military deception, and operational security.”[2]
Succinctly, Information Warfare aims at controlling the flow of information. Information warfare has been designed primarily to support objectives defined by the traditional mission of military organisations – namely, to produce lethal kinetic effects on the battlefield. It was not designed to achieve lasting political successes.
As defined by Clint Watts, Cognitive Warfare degrades the capacities to know and to produce; it actively thwarts knowledge.[3] Cognitive sciences cover all the sciences that concern knowledge and its processes (psychology, linguistics, neurobiology, logic and more).
Cognitive Warfare is therefore a way of using knowledge for conflictual purposes. In its broadest sense, cognitive warfare is not limited to the military or institutional world. Since the early 1990s, this capability has tended to be applied to the political, economic, cultural and societal fields.
Any user of modern information technologies is a potential target. It targets the whole of a nation’s human capital.
Innovation Hub – Nov 2020 Page 6 of 45
“Conflicts will increasingly depend on, and revolve around, information and communications (…) Indeed, both cyberwar and netwar are modes of conflict that are largely about ‘knowledge’ – about who knows what, when, where, and why, and about how secure a society”
John Arquilla and David Ronfeldt, The Advent of Netwar, RAND, 1996

“Big Data allows us to develop fabulous calculation and analysis performances, but what makes it possible to respond to a situation is reason, and reason is what enables one to take a decision in what is not calculable; otherwise we only confirm the state of affairs.”
Bernard Stiegler
The most striking shift of this practice from the military to the civilian world is the pervasiveness of CW activities across everyday life that sit outside the normal peace-crisis-conflict construct (with harmful effects). Even if a cognitive war could be conducted as a complement to a military conflict, it can also be conducted alone, without any link to an engagement of the armed forces. Moreover, cognitive warfare is potentially endless, since there can be no peace treaty or surrender for this type of conflict.
Evidence now exists that shows new CW tools and techniques target military personnel directly, not only with classical information weapons but also with a constantly growing and rapidly evolving arsenal of neuro-weapons targeting the brain. It is important to recognise various nations’ dedicated endeavours to develop non-kinetic operations that target the human, with effects at every level – from the individual up to the socio-political level.
Hacking the individual
The revolution in information technology has enabled cognitive manipulations of a new kind,
on an unprecedented and highly elaborate scale. All this happens at much lower cost than in
the past, when it was necessary to create effects and impact through non-virtual actions in the
physical realm. Classical military capabilities thus do not counter cognitive warfare, which operates as a continuous process. Even though the military has difficulty recognising the reality and effectiveness of the phenomena associated with cognitive warfare, the relevance of kinetic and resource-intensive means of warfare is nonetheless diminishing.
Social engineering always starts with a deep dive into the human environment of the target.
The goal is to understand the psychology of the targeted people. This phase is more important than any other, as it allows not only the precise targeting of the right people but also the anticipation of reactions and the development of empathy. Understanding the human environment is the key to building the trust that will ultimately lead to the desired results. Humans are an easy target since they all contribute by providing information on themselves, making the adversaries’ sockpuppets more powerful.[4]
In any case NATO’s adversaries focus on identifying the Alliance’s centres of gravity and vulnerabilities. They have long identified that the main vulnerability is the human. It is easy to
find these centres of gravity in open societies because they are reflected in the study of human
and social sciences such as political science, history, geography, biology, philosophy, voting
systems, public administration, international politics, international relations, religious studies,
education, sociology, arts and culture…
Cognitive Warfare is a war of ideologies that strives to erode the trust that underpins every
society.
“Social engineering is the art and science of getting people to comply with your wishes. It is not a way of mind control, it will not allow you to get people to perform tasks wildly outside of their normal behaviour, and it is far from foolproof.”
Harl, People Hacking, 1997
Trust is the target
Cognitive warfare pursues the objective of undermining trust (public trust in electoral processes, trust in institutions, allies, politicians…).[5] The individual therefore becomes the weapon, while the goal is not to attack what individuals think but rather the way they think.[6] It has the potential to unravel the entire social contract that underpins societies.
It is natural to trust the senses, to believe what is seen and read. But the democratisation of
automated tools and techniques using AI, no longer requiring a technological background,
enables anyone to distort information and to further undermine trust in open societies. The
use of fake news, deep fakes, Trojan horses, and digital avatars will create new suspicions
which anyone can exploit.
It is easier and cheaper for adversaries to undermine trust in our own systems than to attack
our power grids, factories or military compounds. Hence, it is likely that in the near future
there will be more attacks, from a growing and much more diverse number of potential players with a greater risk for escalation or miscalculation. The characteristics of cyberspace (lack
of regulation, difficulties and associated risks of attribution of attacks in particular) mean that
new actors, either state or non-state, are to be expected.[7]
As the example of COVID-19 shows, the massive amount of texts on the subject, including deliberately biased texts (for example, the Lancet study on chloroquine), created an information and knowledge overload which, in turn, generates both a loss of credibility and a need for closure. The normal human ability to question any data or information presented is therefore hampered, with a tendency to fall back on biases to the detriment of unfettered decision-making.
It applies to trust among individuals as well as groups, political alliances and societies.
“Trust, in particular among allies, is a targeted vulnerability. As any international institution does, NATO relies on trust between its partners. Trust is based not only on respecting
some explicit and tangible agreements, but also on ‘invisible contracts,’ on sharing values,
which is not easy when such a proportion of allied nations have been fighting each other for
centuries. This has left wounds and scars creating a cognitive/information landscape that our
adversaries study with great care. Their objective is to identify the ‘Cognitive Centers of
Gravity’ of the Alliance, which they will target with ‘info-weapons’.”[8]
Cognitive Warfare, a participatory propaganda[9]
In many ways, cognitive warfare can be compared to propaganda, which can be defined as “a set of methods employed by an organised group that wants to bring about the active or passive participation in its actions of a mass of individuals, psychologically unified through psychological manipulations and incorporated in an organisation.”[10]
The purpose of propaganda is not to “program” minds, but to influence attitudes and
behaviours by getting people to adopt the right attitude, which may consist of doing
certain things or, often, stopping doing them.
Cognitive Warfare is methodically exploited as a component of a global strategy by adversaries aimed at weakening, interfering and destabilising targeted populations, institutions and states, in order to influence their choices, to undermine the autonomy of their decisions and the sovereignty of their institutions. Such campaigns combine both real and distorted information (misinformation), exaggerated facts and fabricated news (disinformation).
Disinformation preys on the cognitive vulnerabilities of its targets by taking advantage of pre-existing anxieties or beliefs that predispose them to accept false information.
This requires the aggressor to have an acute understanding of the socio-political dynamics at play and to know exactly when and how to penetrate to best exploit these vulnerabilities.
Cognitive Warfare exploits the innate vulnerabilities of the human mind arising from the way it is designed to process information – vulnerabilities which have, of course, always been exploited in warfare. However, due to the speed and pervasiveness of technology and information, the human mind is no longer able to process the flow of information.
Where CW differs from propaganda is in the fact that everyone participates, mostly inadvertently, in information processing and knowledge formation in an unprecedented way. This is a subtle but significant change. Whereas individuals were once passively subjected to propaganda, they now actively contribute to it.
The exploitation of human cognition has become a massive industry. And it is expected that
emerging artificial intelligence (AI) tools will soon provide propagandists radically enhanced
capabilities to manipulate human minds and change human behaviour.[11]
Behavioural economy
“Capitalism is undergoing a radical mutation. What many describe as the ‘data economy’ is
in fact better understood as a ‘behavioural economics’”.
“New tools and techniques, combined with the changing technological and information
foundations of modern societies, are creating an unprecedented capacity to conduct virtual societal warfare.”
Michael J. Mazarr
“Modern propaganda is based on scientific analyses of psychology and sociology. Step
by step, the propagandist builds his techniques on the basis of his knowledge of man,
his tendencies, his desires, his needs, his psychic mechanisms, his conditioning — and
as much on social psychology as on depth psychology.”
Jacques Ellul, Propaganda, 1962
Behavioural economics (BE) is defined as a method of economic analysis that applies psychological insights into human behaviour to explain economic decision-making.
As research into decision-making shows, behaviour is becoming increasingly computational; BE sits at the crossroads between the hard and soft sciences.[12]
Operationally, this means massive and methodical use of behavioural data and the development of methods to aggressively seek out new data sources. With the vast amount of (behavioural) data that everyone generates, mostly without consent or awareness, further manipulation is easily achievable.
The large digital economy companies have developed new data capture methods, allowing
the inference of personal information that users may not necessarily intend to disclose. The
excess data has become the basis for new prediction markets called targeted advertising.
“Here is the origin of surveillance capitalism in an unprecedented and lucrative brew: behavioural surplus, data science, material infrastructure, computational power, algorithmic systems, and automated platforms”, claims Shoshana Zuboff.[13]
In democratic societies, advertising has quickly become as important as research. It has finally
become the cornerstone of a new type of business that depends on large-scale online monitoring.
The target is the human being in the broadest sense, and it is easy to divert the data obtained from purely commercial purposes, as the Cambridge Analytica (CA) scandal demonstrated. Thus, the lack of regulation of the digital space – the so-called “data swamp” – does not only benefit digital-age regimes, which “can exert remarkable control over not just computer networks and human bodies, but the minds of their citizens as well”.[14] It can also be utilised for malign purposes, as the example of the CA scandal has shown.
CA’s digital model outlined how to combine personal data with machine learning for political ends, by profiling individual voters in order to target them with personalised political advertisements.
Using the most advanced survey and psychometric techniques, Cambridge Analytica was able to collect a vast amount of individuals’ data, which helped it understand, through economic, demographic, social and behavioural information, what each of them thought. It literally gave the company a window into the minds of people.
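The profiling logic described above can be sketched in a few lines. Everything below is illustrative: the feature names, the “persuadable” label and the model are invented for this example and do not reproduce Cambridge Analytica’s actual system; the sketch only shows how behavioural data plus a simple learned model yields per-individual predictions that can drive micro-targeting.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression, no external libraries."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

random.seed(0)
# Synthetic users: [normalised like-count, night-posting ratio, share ratio]
# (hypothetical behavioural signals, chosen only for illustration).
users = [[random.random() for _ in range(3)] for _ in range(200)]
# Synthetic ground truth: a "persuadable" trait correlated with two signals.
labels = [1 if 0.8 * u[0] + 0.6 * u[1] + random.gauss(0, 0.1) > 0.7 else 0
          for u in users]

w, b = train_logistic(users, labels)
# "Micro-targeting": select the users the model scores as most persuadable.
targets = sorted(users, key=lambda u: predict(w, b, u), reverse=True)[:10]
print(f"top score: {predict(w, b, targets[0]):.2f}")
```

In practice such pipelines draw on far richer data (psychometric surveys, social-graph features) and stronger models, but the principle is the same: score every individual, then concentrate tailored messaging on the highest-scoring ones.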
The gigantic collection of data organised via digital technologies is today primarily used to define and anticipate human behaviour. Behavioural knowledge is a strategic asset. “Behavioural economics adapts psychology research to economic models, thus creating more accurate representations of human interactions.”[15]
“Cambridge Analytica has demonstrated how it’s possible […] to leverage tools to build a scaled-down version of the massive surveillance and manipulation machines”[16]
“Technology is going on unabated and will continue to go on unabated. […] Because technology is going so fast and because people don’t understand it, there was always going to be a Cambridge Analytica.”
Julian Wheatland, ex-Chief Operating Officer of Cambridge Analytica
As shown by the example of Cambridge Analytica, one can weaponise such knowledge and
develop appropriate offensive and defensive capabilities, paving the way for virtual societal
warfare. A systematic use of BE methods applied to the military could lead to a better understanding[17] of how individuals and groups behave and think, eventually leading to a wider understanding of the decision-making environment of adversaries. There is a real risk that access to behavioural data utilising the tools and techniques of BE, as shown by the example of Cambridge Analytica, could allow any malicious actor, whether state or non-state, to strategically harm open societies and their instruments of power.
Cyberpsychology
Assuming that technology affects everyone, studying and understanding human behaviour
in relation to technology is vital as the line between cyberspace and the real world is becoming blurry.
The exponentially increasing impact of cybernetics, digital technologies, and virtuality can
only be gauged when considered through their effects on societies, humans, and their respective behaviours.
Cyberpsychology is at the crossroads of two main fields: psychology and cybernetics. All this
is relevant to defense and security, and to all areas that matter to NATO as it prepares for
transformation. Centered on the clarification of the mechanisms of thought and on the conceptions, uses and limits of cybernetic systems, cyberpsychology is a key issue in the vast
field of Cognitive Sciences. The evolution of AI introduces new words, new concepts, but also
new theories that encompass the study of the natural functioning of humans and of the machines they have built, which today are fully integrated into their natural environment (anthropo-technical). Tomorrow’s human beings will have to invent a psychology of their relation to machines. But the challenge is also to develop a psychology of machines, artificially intelligent software and hybrid robots.
Cyberpsychology is a complex scientific field that encompasses all psychological phenomena associated with, or affected by, relevant evolving technologies. Cyberpsychology examines the way humans and machines impact each other, and explores how the relationship between humans and AI will change human interactions and inter-machine communication.[18]


Paradoxically, the development of information technology and its use for manipulative purposes in particular highlights the increasingly predominant role of the brain.
The brain is the most complex part of the human body. This organ is the seat of intelligence,
the interpreter of the senses, the initiator of body movements, the controller of behaviour and
the centre of decisions.
The centrality of the human brain
For centuries, scientists and philosophers have been fascinated by the brain, but until recently they considered it to be almost incomprehensible. Today, however, the brain is beginning to reveal its secrets. Scientists have learned more about the brain in the past decade than in any previous century, thanks to the accelerating pace of research in the neurological and behavioural sciences and the development of new research techniques. For the military, the brain represents the last frontier in science, in that it could bring a decisive advantage in tomorrow’s wars.
Understanding the brain is a key challenge for the future
Substantial advances have been made in recent decades in understanding how the brain functions. While our decision-making processes remain centred on the human – in particular the capacity to orient (OODA loop), fed by data, analysis and visualisations – the inability of humans to process, fuse and analyse the profusion of data in a timely manner calls for humans to team with AI machines in order to compete with AI machines. To keep a balance between the human and the machine in the decision-making process, it becomes necessary to be aware of human limitations and vulnerabilities. It all starts with understanding our cognitive processes and the way our brains function.
Over the past two decades, cognitive science and neuroscience have taken a new step in the analysis and understanding of the human brain, and have opened up new perspectives in terms of brain research, if not indeed of a hybridisation of human and artificial intelligence. They have made a major contribution to the study of the diversity of neuro-psychic mechanisms facilitating learning and, as a result, have for example challenged the intuition of “multiple intelligences”. No one today can any longer ignore the fact that the brain is the seat both of emotions and of the interactive mechanisms of memorisation, information processing, problem solving and decision-making.
Innovation Hub – Nov 2020 Page 12 of 45
Cognitive Science
A discipline associating psychology, sociology, linguistics, artificial intelligence and neuroscience, whose object is the elucidation of the mechanisms of thought and of the information processing mobilised for the acquisition, conservation, use and transmission of knowledge.
Neuroscience
A trans-disciplinary scientific field associating biology, mathematics, computer science and other disciplines, with the aim of studying the organisation and functioning of the nervous system, from the point of view of both its structure and its functioning, from the molecular scale up to the level of the organs.
The vulnerabilities of the human brain
“In the cognitive war, it’s more important than ever to know thyself.”[19]
Humans have developed adaptations to cope with cognitive limitations, allowing more efficient processing of information. Unfortunately, these same shortcuts introduce distortions in our thinking and communication, making communication efforts ineffective and subject to manipulation by adversaries seeking to mislead or confuse. These cognitive biases can lead to inaccurate judgements and poor decision-making that could trigger an unintended escalation or prevent the timely identification of threats. Understanding the sources and types of cognitive biases can help reduce misunderstandings and inform the development of better strategies to respond to opponents’ attempts to use these biases to their advantage.


In particular, the brain:

• is unable to distinguish whether particular information is right or wrong;
• is led to take shortcuts in determining the trustworthiness of messages in case of information overload;
• is led to believe statements or messages that it has already heard as true, even though these may be false;
• accepts statements as true if backed by evidence, with no regard to the authenticity of that evidence.

Those are, among many others, cognitive biases, defined as systematic patterns of deviation from norm or rationality in judgement.[20]
There are many different cognitive biases inherently stemming from the human brain,[21] and most of them are relevant to the information environment. Probably the most common and most damaging cognitive bias is the confirmation bias. This is the effect that leads people to look for evidence that confirms what they already think or suspect, to regard facts and ideas they encounter as further confirmation, and to dismiss or ignore any evidence that seems to support another point of view. In other words, “people see what they want to see”.[22]
Cognitive biases affect everyone, from soldiers on the ground to staff officers, and to a greater extent than anyone admits. It is important not only to recognise these biases in ourselves, but also to study the biases of adversaries in order to understand how they behave and interact.

As stated by Robert P. Kozloski, “The importance of truly “knowing yourself” cannot be understated. Advances in computing technology, particularly machine learning, provide the military with the opportunity to know itself like never before. Collecting and analysing the data generated in virtual environments will enable military organisations to understand the cognitive performance of individuals.”[23]

Ultimately, operational advantages in cognitive warfare will first come from an improved understanding of military cognitive abilities and limitations.
The role of emotions
In the digital realm, what allows the digital industries and their customers (notably advertisers) to distinguish individuals in the crowd, to refine personalisation and behavioural analysis, are emotions. Every social media platform and every website is designed to be addictive and to trigger emotional bursts, trapping the brain in a cycle of posts. The speed, emotional intensity and echo-chamber qualities of social media content cause those exposed to it to experience more extreme reactions. Social media is particularly well suited to worsening political and social polarisation because of its ability to disseminate violent images and scary rumours very quickly and intensely. “The more the anger spreads, the more Internet users are susceptible to becoming a troll.”[24]

At the political and strategic level, it would be wrong to underestimate the impact of emotions. Dominique Moïsi showed in his book “The Geopolitics of Emotion”[25] how emotions – hope, fear and humiliation – were shaping the world and international relations, amplified by the echo-chamber effect of the social media. For example, it seems important to integrate into theoretical studies on terrorist phenomena the role of emotions leading to a violent and/or terrorist path.

By limiting cognitive abilities, emotions also play a role in decision-making, performance and overall well-being, and it is impossible to stop people from experiencing them. “In the face of violence, the very first obstacle you will have to face will not be your abuser, but your own reactions.”[26]
The battle for attention
Never have knowledge and information been so accessible, so abundant, and so shareable. Gaining attention means not only building a privileged relationship with our interlocutors to better communicate and persuade, but also preventing competitors from getting that attention, be it political, economic, social or even in our personal life. This battlefield is global via the internet. With no beginning and no end, this conquest knows no respite, punctuated by notifications from our smartphones, anywhere, 24 hours a day, 7 days a week.

Coined in 1996 by Professor B.J. Fogg from Stanford University, “captology” is defined as the science of using “computers as technologies of persuasion”.[27]

“We are competing with sleep”
Reed Hastings, CEO of Netflix

The time has therefore come to adopt the rules of this “attention economy”, to master the technologies related to “captology”, and to understand how these challenges are completely new. Indeed, this battle is not limited to screens and design; it also takes place in brains, especially in the way they are misled. It is also a question of understanding why, in the age of social networks, some “fake news”, conspiracy theories or “alternative facts” seduce and convince, while at the same time rendering their victims inaudible.

Attention, on the contrary, is a limited and increasingly scarce resource. It cannot be shared: it can only be conquered and kept. The battle for attention is now at work, involving companies, states and citizens.

The issues at stake now go far beyond the framework of pedagogy, ethics and screen addiction. The consumption environment, especially marketing, is leading the way. Marketers have long understood that the seat of attention and decision-making is the brain, and as such have long sought to understand it, anticipate its choices and influence it.

This approach naturally applies just as well to military affairs, and adversaries have already understood this.
    Long-term impacts of technology on the brain
As Dr. James Giordano claims, "the brain will be the battlefield of the 21st century".28
    And when it comes to shaping the brain, the technological environment plays a key role.
    The brain has only one chance to develop. Damage to the brain is very often irreversible. Understanding and protecting our brains from external aggression, of all kinds, will be one of
    the major challenges of the future.
According to the neuroscientist Maryanne Wolf, humans were not meant to read, and the invention of printing changed the shape of our brains.29 It took years, if not centuries, to assess the consequences – social, political or sociological, for example – of the invention of printing. It will likely take longer to understand accurately the long-term consequences of the digital age, but one thing everyone agrees on is that, with the pervasiveness of digital technology, the human brain is changing faster today than ever before.
There is a growing amount of research that explores how technology affects the brain. Studies show that exposure to technology shapes cognitive processes and the ability to take in information. One of the major findings is the advent of a society of "cognitive offloaders": no one memorises important information any longer; instead, the brain tends to remember only the location where the information can be found when it is next required. With information and visual overload, the brain tends to scan information and pick out what appears to be important, with no regard to the rest.
One of the evolutions already noticed is a loss of critical thinking directly related to screen reading, and an increasing inability to read a real book. The way information is processed affects brain development, leading to the neglect of sophisticated thought processes. Brains will thus be different tomorrow. It is therefore highly probable that our brains will be radically transformed in an extremely short period, but it is also likely that this change will come at the expense of the more sophisticated, more complex thinking processes necessary for critical analysis.
In an era where memory is outsourced to Google, GPS, calendar alerts and calculators, this will necessarily produce a generalised loss of knowledge that concerns not just memory but motor memory as well. In other words, a long-term process of disabling connections in the brain is ongoing.30 It will present both vulnerabilities and opportunities.
However, there is also plenty of research showing the benefits of technology for our cognitive functions. For example, a Princeton University study found that expert video gamers have a higher ability than non-gamers to process data, make decisions faster, and even multitask.31 There is a general consensus among neuroscientists that a reasoned use of information technology (and particularly games) is beneficial to the brain.
By further blurring the line between the real and the virtual, the development of technologies such as Virtual Reality (VR), Augmented Reality (AR) or Mixed Reality (MR) has the potential to transform the brain's abilities even more radically.32 Behaviours in virtual environments can continue to influence real behaviour long after exiting VR.33
Yet virtual environments also offer the opportunity to complement live training efficiently, since they can provide cognitive experiences that a live exercise cannot replicate. While there are concerns, and ongoing research, about how digital media may be harming developing minds, it is still difficult to predict how the technology will affect and change the brain. With the ubiquity of IT, however, it will become increasingly crucial to detect and anticipate the impacts of information technology on the brain, and to adapt its use accordingly.
In the long term, there is little doubt that information technologies will transform the brain, providing more opportunities to learn and to apprehend the cyber environment, but also creating vulnerabilities that will require close monitoring in order to counter and defend against them, and to determine how best to exploit them.
    The promises of neurosciences
"Social neuroscience holds the promise of understanding people's thoughts, emotions and intentions through the mere observation of their biology."34
Should scientists be able to establish a close and precise correspondence between biological functions on the one hand and social cognitions and behaviours on the other, neuroscientific methods could have tremendous applications for many disciplines and for society in general, including decision-making, exchanges, physical and mental health care, prevention, jurisprudence, and more.
This highlights the growing place the neurosciences occupy in medical and scientific research. More than just a discipline, they articulate a set of fields related to knowledge of the brain and nervous system, and they question the complex relationships between humans, their environment, and their fellow human beings. From biomedical research to the cognitive sciences, the actors, approaches and organisations that structure neuroscience are diverse. Often convergent, they can also be competitive.
While the discoveries and challenges of the neurosciences are relatively well known, the field raises both hope and concern. In a disorganised and, at times, ill-informed way, "neuroscience" seems to be everywhere. Integrated, sometimes indiscriminately, into many debates, the neurosciences are mobilised around issues of society and public health, education and aging, and they nourish hopes of an augmented human.

Today, the manipulation of our perception, thoughts and behaviours is taking place on
previously unimaginable scales of time, space and intentionality. That, precisely, is the source
of one of the greatest vulnerabilities that every individual must learn to deal with. Many
actors are likely to exploit these vulnerabilities, while the evolution of technology for
producing and disseminating information is increasingly fast. At the same time, as the cost of
technology steadily drops, more actors enter the scene.
As the technology evolves, so do the vulnerabilities.
The militarisation of brain science
Scientists around the world are asking how to free humanity from the limitations of the body, and the line between healing and augmentation is becoming blurred. In addition, the logical progression of this research is the pursuit of a perfected human being through new technological standards.
In the wake of the U.S. Brain Initiative initiated in 2014, all the major powers (the EU, China, Russia) have launched their own brain research programs with substantial funding. China sees the brain "as the HQ of the human body and precisely attacking the HQ is one of the most effective strategies for determining victory or defeat on the battlefield".35
The revolution in NBIC (nanotechnology, biotechnology, information technology, and cognitive science), including advances in genomics, has the potential for dual-use technology development. A wide range of military applications, such as improving the performance of soldiers or developing new weapons such as directed-energy weapons, are already being discussed.
Progress and Viability of Neuroscience and Technology (NeuroS/T)
Neuroscience employs a variety of methods and technologies to evaluate and influence neurologic substrates and processes of cognition, emotion, and behaviour. In general, brain science can be either basic or applied research. Basic research focuses upon obtaining knowledge and furthering understanding of structures and functions of the nervous system on a variety of levels by employing methods of the physical and natural sciences. Applied research seeks to develop translational approaches that can be directly utilised to understand and modify the physiology, psychology, and/or pathology of target organisms, including humans.
Neuroscientific methods and technologies (neuroS/T) can be further categorised as those used to assess, and those used to affect, the structures and functions of the nervous system, although these categories and actions are not mutually exclusive. For example, the use of certain drugs, toxins, and probes to elucidate functions of various sites of the central and peripheral nervous system can also affect neural activity.
NeuroS/T is broadly considered a natural and/or life science, and there is implicit and explicit intent, if not expectation, to develop and employ the tools and outcomes of research in clinical medicine. Neuroscientific techniques, technologies, and information could be used for medical as well as non-medical (educational, occupational, lifestyle, military, etc.) purposes.36
It is questionable whether the uses, performance enablements, and resulting capabilities could (or should) be used in intelligence and/or diplomatic operations to mitigate and subvert aggression, violence, and conflict. Of more focal concern are uses of research findings and products to directly facilitate the performance of combatants, the integration of human-machine interfaces to optimise combat capabilities of semi-autonomous vehicles (e.g., drones), and development of biological and chemical weapons (i.e., neuroweapons).
Some NATO Nations have already acknowledged that neuroscientific techniques and technologies have high potential for operational use in a variety of security, defense and intelligence enterprises, while recognising the need to address the current and short-term ethical, legal and social issues generated by such use.37
Military and Intelligence Use of NeuroS/T
The use of neuroS/T for military and intelligence purposes is realistic and represents a clear and present concern. In 2014, a US report asserted that neuroscience and technology had matured considerably and were being increasingly considered, and in some cases evaluated, for operational use in security, intelligence, and defense operations. More broadly, the iterative recognition of the viability of neuroscience and technology in these agendas reflects the pace and breadth of developments in the field. Although a number of nations have pursued, and are currently pursuing, neuroscientific research and development for military purposes, perhaps the most proactive efforts in this regard have been conducted by the United States Department of Defense, with the most notable and rapidly maturing research and development conducted by the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA). To be sure, many DARPA projects are explicitly directed toward advancing neuropsychiatric treatments and interventions that will improve both military and civilian medicine. Yet it is important to note the prominent ongoing – and expanding – efforts in this domain by NATO's European and trans-Pacific strategic competitor nations.
As the 2008 National Research Council report stated, "…for good or for ill, an ability to better understand the capabilities of the body and brain… could be exploited for gathering intelligence, military operations, information management, public safety and forensics".38 To paraphrase Aristotle, every human activity and tool can be regarded as purposed toward some definable "good". However, definitions of "good" may vary, and what is regarded as good for some may present harm to others. The potential for neuroS/T to afford insight, understanding, and capability to affect cognitive, emotional, and behavioural aspects of individuals and groups renders the brain sciences particularly attractive for use in security, intelligence, and military/warfare initiatives.
To approach this issue, it is important to establish four fundamental premises.
• Firstly, neuroS/T is, and will be increasingly and more widely incorporated into approaches to national security, intelligence gathering and analysis, and aspects of military operations;
• Secondly, such capabilities afford considerable power;
• Thirdly, many countries are actively developing and subsidising neuroS/T research under dual-use agendas or for direct incorporation into military programs;
• Fourthly, these international efforts could lead to a “capabilities race” as nations react
to new developments by attempting to counter and/or improve upon one another’s
discoveries.
This type of escalation represents a realistic possibility with potential to affect international
security. Such “brinksmanship” must be acknowledged as a potential impediment to attempts to develop analyses and guidelines (that inform or prompt policies) that seek to constrain or restrict these avenues of research and development.
Neuroscientific techniques and technologies that are being utilised for military efforts include:

  • Neural systems modelling and human/brain-machine interactive networks in intelligence, training and operational systems;
  • Neuroscientific and neurotechnological approaches to optimising performance and
    resilience in combat and military support personnel;
  • Direct weaponisation of neuroscience and neurotechnology.
    Of note is that each and all may contribute to establishing a role for brain science on the 21st
    century battlescape.
    Direct Weaponisation of NeuroS/T
    The formal definition of a weapon as “a means of contending against others” can be extended
    to include any implement “…used to injure, defeat, or destroy”. Both definitions apply to
    products of neuroS/T research that can be employed in military/warfare scenarios. The objectives for neuroweapons in warfare may be achieved by augmenting or degrading functions of the nervous system, so as to affect cognitive, emotional and/or motor activity and capability (e.g., perception, judgment, morale, pain tolerance, or physical abilities and stamina) necessary for combat. Many technologies can be used to produce these effects, and there is demonstrated utility for neuroweapons in both conventional and irregular warfare scenarios. At present, outcomes and products of computational neuroscience and neuropharmacologic research could be used for more indirect applications, such as enabling human efforts by simulating, interacting with, and optimising brain functions, and the classification and detection of human cognitive, emotional, and motivational states to augment intelligence or counterintelligence tactics. Human/brain-machine interfacing neurotechnologies capable of optimising data assimilation and interpretation systems by mediating access to – and manipulation of – signal detection, processing, and/or integration are being explored for their potential to delimit “human weak links” in the intelligence chain.
The weaponised use of neuroscientific tools and products is not new. Historically, such weapons, which include nerve gas, various drugs, pharmacologic stimulants (e.g., amphetamines), sedatives, and sensory stimuli, have been applied as neuroweapons to incapacitate the enemy; even sleep deprivation and the distribution of emotionally provocative information in psychological operations (PSYOPS) could rightly be regarded as forms of weaponised application of neuroscientific and neurocognitive research.
    Products of neuroscientific and neurotechnological research can be utilised to affect
    1) memory, learning, and cognitive speed;
    2) wake-sleep cycles, fatigue and alertness;
    3) impulse control;
    4) mood, anxiety, and self-perception;
    5) decision-making;
    6) trust and empathy;
    7) and movement and performance (e.g., speed, strength, stamina, motor learning, etc.).
In military/warfare settings, these functions can be modified to mitigate aggression and foster cognitions and emotions of affiliation or passivity; to induce morbidity, disability or suffering; and to "neutralise" potential opponents or incur mortality.
    Neurodata
The combination of multiple disciplines (e.g., the physical, social, and computational sciences) and intentional "technique and technology sharing" have been critical to the rapid and numerous discoveries and developments in the brain sciences. This process, advanced integrative scientific convergence (AISC), can be seen as a paradigm for de-siloing disciplines, fostering innovative use of diverse and complementary knowledge-, skill-, and tool-sets both to push past the limits of existing approaches to problem resolution and to develop novel means of exploring and furthering the boundaries of understanding and capability. Essential to the AISC approach in neuroscience is the use of computational (i.e., big data) methods and advancements to enable deepened insight into, and more sophisticated intervention in, the structure and function(s) of the brain and, by extension, human cognition, emotion, and behaviour.39
Such capacities in both the computational and brain sciences have implications for biosecurity and defense initiatives. Several neurotechnologies can be employed in kinetic engagements (i.e., providing means to injure, defeat, or destroy adversaries) or non-kinetic engagements (i.e., providing "means of contending against others", especially in disruptive ways). While many types of neuroS/T have been addressed in and by extant forums, treaties, conventions, and laws, other newer techniques and technologies – inclusive of neurodata – have not. In this context, the term "neurodata" refers to the accumulation of large volumes of information; the handling of large-scale and often diverse informational sets; and new methods of data visualisation, assimilation, comparison, synthesis, and analysis. Such information can be used to:
• more finely elucidate the structure and function of the human brain;
    • and develop data repositories that can serve as descriptive or predictive metrics for
    neuropsychiatric disorders.
Purloining and/or modifying such information could affect military and intelligence readiness, force conservation, and mission capability, and thus national security. Manipulation of both civilian and military neurodata would affect the type of medical care that is (or is not) provided, could influence the ways that individuals are socially regarded and treated, and could in these ways disrupt public health and incur socio-economic change. As the current COVID-19 pandemic has revealed, public – and institutional public health – responses to novel pathogens are highly variable at best, chaotic at worst, and indubitably costly (on many levels) in either case. To be sure, such extant gaps in public health and safety infrastructures and functions could be exploited by employing "precision pathologies" (capable of selectively affecting specific targets such as individuals, communities, domestic animals, livestock, etc.) and an aggressive program of misinformation, to incur disruptive effects on social, economic, political, and military scales that would threaten national stability and security. Recent reporting has revealed the Chinese government's Overseas Key Individuals Database (OKIDB), which, via collaboration with a corporate entity, Shenzhen Zhenhua Data Technology, has amassed data affording "insights into foreign political, military, and diplomatic figures… containing information on more than 2 million people… and tens of thousands who hold prominent public positions…" that could be engaged by "Beijing's army of cyberhackers".
Digital biosecurity – a term describing the intersection of computational systems and biological information, and how to effectively prevent or mitigate current and emerging risks arising at this intersection – becomes ever more important and required. The convergence of neurobiology and computational capabilities, while facilitating beneficial advances in brain research and its translational applications, creates a vulnerable strategic asset that will be sought by adversaries to advance their own goals for neuroscience. Hacking of biological data within academic, industry, and health care systems has already occurred – and neurodata are embedded within all of these domains.
Thus, it is likely that there will be more direct attempts at harnessing neurodata to gain leverageable informational, social, legal, and military capability and power advantage(s), as several countries that are currently strategically competitive with the U.S. and its allies invest heavily in both neuro- and cyber-scientific research programs and infrastructure. The growing quantitative and economic presence of these states in these fields can – and is intended to – shift international leadership and hegemony, and influence ethical, technical, commercial and politico-military norms and standards of research and use. For example, the Russian leadership has declared interest in the employment of "genetic passports", such that those in the military who display genetic indications of high cognitive performance can be directed to particular military tasks.
    The neurobioeconomy
Advancements in neuroS/T have contributed to much growth in the neurobioeconomy. With neurological disorders being the second leading cause of death worldwide (approximately 9 million deaths, constituting 16.5% of global fatalities), several countries have initiated programs in brain research and innovation.
    These initiatives aim to:
    1) advance understanding of substrates and mechanisms of neuropsychiatric disorders;
    2) improve knowledge of processes of cognition, emotion, and behaviour;
    3) and augment the methods for studying, assessing, and affecting the brain and its
    functions.
    New research efforts incorporate best practices for interdisciplinary approaches that can
    utilise advances in computer science, robotics, and artificial intelligence to fortify the scope
    and pace of neuroscientific capabilities and products. Such research efforts are strong drivers
    of innovation and development, both by organising larger research goals, and by shaping
    neuroS/T research to meet defined economic, public health, and security agendas.
    Rapid advances in brain science represent an emerging domain that state and non-state actors
    can leverage in warfare. While not all brain sciences engender security concerns, predominant
    authority and influence in global biomedical, bioengineering, wellness/lifestyle, and defense
    markets enable a considerable exercise of power. It is equally important to note that such
power can be exercised in both non-kinetic and kinetic operational domains, and several countries have identified neuroS/T as viable, of value, and of utility in their warfare programs.
While extant treaties (e.g., the BTWC and CWC40) and laws have addressed particular products of the brain sciences (e.g., chemicals, biological agents, and toxins), other forms of neuroS/T (e.g., neurotechnologies and neuroinformatics) remain outside these conventions' focus, scope, and governance. Technology can influence, if not shape, the norms and conduct of warfare, and the future battlefield will depend not only upon achieving "biological dominance", but upon achieving "mental/cognitive dominance" and "intelligence dominance" as well.
    It will be ever more difficult to regulate and restrict military and security applications of neuroS/T without established standards and proper international oversight of research and potential use-in-practice.
* * *
In sum, it is not a question of whether neuroS/T will be utilised in military, intelligence, and political operations, but rather when, how, to what extent, and perhaps most importantly, whether NATO nations will be prepared to address, meet, counter, or prevent the resulting risks and threats.
In this light, and based upon the information presented, it is, and will be, increasingly important to address the complex issues generated by the brain sciences' influence upon global biosecurity and upon the near-term scope and conduct of both non-kinetic and kinetic military and intelligence operations.41
    Towards a new operational domain
The advent of the concept of "cognitive warfare" (CW) brings a third major combat dimension to the modern battlefield: to the physical and informational dimensions is now added a cognitive dimension. It creates a new space of competition beyond the land, maritime, air, cyber and space domains, one which adversaries have already integrated. In a world permeated with technology, warfare in the cognitive domain mobilises a wider range of battle spaces than the physical and informational dimensions can. Its very essence is to seize control of human beings (civilian as well as military), organisations and nations, but also of ideas, psychology (especially behavioural) and thought, as well as of the environment. In addition, rapid advances in brain science, as part of a broadly defined cognitive warfare, have the potential to greatly expand traditional conflicts and produce effects at lower cost.
Through the joint action it exerts on the three dimensions (physical, informational and cognitive), cognitive warfare embodies the idea, dear to Sun Tzu, of combat without fighting ("The supreme art of war is to subdue the enemy without fighting"). It therefore requires the mobilisation of much broader knowledge. Future conflicts will likely occur amongst the people, digitally first and physically thereafter, in proximity to hubs of political and economic power.42
    The study of the cognitive domain, thus centred on the human being, constitutes a new major
    challenge that is indispensable to any strategy relating to the combat power generation of the
    future.
Cognition is our "thinking machine". Its function is to perceive, pay attention, memorise, reason, produce movements, express oneself, and decide. To act on cognition is to act on the human being.
Defining a cognitive domain would therefore be too restrictive; a human domain would be more appropriate.
While actions taken in the five domains are executed in order to have an effect on the human domain,43 cognitive warfare's objective is to make everyone a weapon.
To turn the situation around, NATO must strive to define the human domain in a very broad sense, and must maintain a clear awareness of the aims and advances of the international actors that present NATO with specific strategic security challenges, and broader ones, in the field of cognitive warfare.
    Russian and Chinese Cognitive Warfare Definition
    Russian Reflexive Control
In 2012, Vladimir Karyakin added: "The advent of information and network technologies, coupled with advances in psychology regarding the study of human behaviour and the control of people's motivations, make it possible to exert a specified effect on large social groups but [also] to reshape the consciousness of entire peoples."44
Russian CW falls under the definition of the Reflexive Control doctrine: an integrated operation that compels an adversary decision-maker to act in Russia's favour by altering their perception of the world.45
    This goes beyond “pure deception” because it uses multiple inputs to the decision maker using both true and false information, ultimately aiming to make the target feel that the decision
    to change their behaviour was their own:
• Reflexive Control is ultimately aimed at the target's decision-making.
  • The information transmitted must be directed towards a decision or position.
  • The information must be adapted to the logic, culture, psychology and emotions of the
    target.
Reflexive Control has since been broadened, to take into account the opportunities offered by new information technologies, into a concept called "Perception Management". Despite the name, it is about controlling perception, not merely managing it.
Russian CW is based on an in-depth understanding of human targets, thanks to the study of the target's sociology, history, psychology, etc., and the extensive use of information technology.
As shown in Ukraine, Russia used this in-depth knowledge as a precursor and gained a strategic advantage before the physical conflict. Russia has prioritised cognitive warfare as a precursor to the military phase.

China Cognitive Warfare Domain
China has adopted an even broader definition of CW, one that includes the systematic utilisation of cognitive science and biotechnology to achieve "mind superiority". China has defined the Cognitive Domain of Operations as "the battlefield for conducting ideological penetration (…) aiming at destroying troop morale and cohesion, as well as forming or deconstructing operational capabilities".
It encompasses six technologies, divided across two categories: cognition, which includes technologies that affect someone's ability to think and function; and subliminal cognition, which covers technologies that target a person's underlying emotions, knowledge, willpower and beliefs.
In particular, "Chinese innovation is poised to pursue synergies among brain science, artificial intelligence (AI), and biotechnology that may have far-reaching implications for its future military power and aggregate national competitiveness."46
The goal of cognitive operations is to achieve "mind superiority" by using information to influence an adversary's cognitive functions, spanning from peacetime public opinion to wartime decision-making.47
Chinese strategists predict that the pace and complexity of operations will increase dramatically as the form, or character, of warfare continues to evolve. As a result, People's Liberation Army (PLA) strategists are concerned about the intense cognitive challenges that future commanders will face, especially considering the importance of optimising coordination and human-machine fusion or integration. These trends have necessarily increased the PLA's interest in the military relevance not only of artificial intelligence, but also of brain science and new directions in interdisciplinary biological technologies, ranging from biosensing and biomaterials to human enhancement options. "The shift from computerisation to intelligentisation is seen as requiring the improvement of human cognitive performance to keep pace with the complexity of warfare."48
As part of its Cognitive Domain of Operations, China has defined “Military Brain Science
(MBS) as a cutting-edge innovative science that uses potential military application as the
guidance. It can bring a series of fundamental changes to the concept of combat and combat
methods, creating a whole new “brain war” combat style and redefining the battlefield.”49
The pursuit of advances in the field of MBS is likely to provide China with a cutting-edge
advantage. China’s development of MBS benefits from a multidisciplinary approach spanning
the human sciences, medicine, anthropology, psychology and related fields, and also benefits
from “civil” advances in the field, as civilian research feeds military research by design.
Innovation Hub – Nov 2020 Page 27 of 45
“The sphere of operations will be expanded
from the physical domain and the information domain to the domain of consciousness;
the human brain will become a new combat
space.”
He Fuchu, “The Future Direction of the New Global Revolution in Military Affairs.”
It’s about Humans
A cognitive attack is not a threat that can be countered in the air, on land, at sea, in cyberspace, or in space. Rather, it may well be happening in any or all of these domains, for one simple reason: humans are the contested domain. As previously demonstrated, the human is very often the main vulnerability. This must be acknowledged in order to protect NATO’s human capital, but also to be able to exploit our adversaries’ vulnerabilities.
“Cognition is natively included in the Human Domain, thus a cognitive domain would be too restrictive”, claimed August Cole and Hervé Le Guyader in “NATO’s 6th domain”, adding:
“…the Human Domain is the one defining us as individuals and structuring our societies. It has its
own specific complexity compared to other domains, because of the large number of sciences it’s based
upon (…) and these are those our adversaries are focusing on to identify our centres of gravity, our
vulnerabilities.”50
The practice of war shows that although physical domain warfare can weaken the military
capabilities of the enemy, it cannot achieve all the purposes of war. In the face of new contradictions and problems in ideology, religious belief and national identity, advanced weapons and technologies may be useless and their effects can even create new enemies. It is therefore difficult if not impossible to solve the problem of the cognitive domain by physical domain warfare alone.
The importance of the Human Environment
The Human Domain is not solely focused on the military human capital. It encompasses the human capital of a theatre of operations as a whole (civilian populations, ethnic groups, leaders…), but also concepts closely related to humans such as leadership, organisation, decision-making processes, perceptions and behaviour. Ultimately, the desired effect should be defined within the Human Domain (i.e. the desired behaviour we want to achieve: collaboration/cooperation, competition, conflict).
“To win (the future) war, the military must be culturally knowledgeable enough to thrive in
an alien environment.”51
In the 21st century, strategic advantage will come from knowing how to engage with people, understand them, and access political, economic, cultural and social networks to achieve a position of relative advantage that complements military force alone. These interactions are not reducible to the physical boundaries of land, air, sea, cyber and space, which tend to focus on geography and terrain characteristics. They represent a network of networks that defines power and interests in a connected world. The actor that best understands local contexts and builds a network around relationships that harness local capabilities is more likely to win.
“Victory will be defined more in terms of capturing the psycho-cultural rather than the geographical high
ground. Understanding and empathy will be important weapons of war.”
Maj. Gen. Robert H. Scales
For the historian Alan Beyerchen, social sciences will be the amplifier of the 21st century’s
wars.52
In past wars, the problem was that the human factor could not be a significant amplifier,
simply because its influence was limited and difficult to exploit; humans were considered
more as constants than as variables. Certainly, soldiers could be improved through training,
selection, psychological adaptation and, more recently, education. But in the end, the human
factor was reduced to numbers: the larger the army, the greater the chance of winning the
war, although the action of a great strategist could counterbalance this argument. Tomorrow,
having better soldiers and more effective humans will be key.
Lastly, recent developments in science of all kinds, including science related to the human
domain, have placed potentially devastating power in the hands of anyone, whether individuals
or committed minorities. This has created a situation never seen before in the history of
mankind,53 where individuals or small groups may jeopardise the success of military operations.
The crucible of Data Sciences and Human Sciences
The combination of Social Sciences and System Engineering will be key in helping military analysts improve the production of intelligence for the sake of decision-making.54
The Human Domain of Operations refers to the whole human environment, whether friend or
foe. In a digital age it is equally important to understand NATO’s own human strengths
and vulnerabilities before those of its adversaries.
Since everyone is much more vulnerable than before, everyone needs to acknowledge that any
individual may endanger the security of the whole. Hence, a deep understanding of the adversary’s
human capital (i.e. the human environment of the military operation) will be more crucial
than ever.
“If kinetic power cannot defeat the enemy, (…) psychology and related behavioural and social
sciences stand to fill the void.”55
“Achieving the strategic outcomes of war will necessarily go through expanding the dialogue
around the social sciences of warfare alongside the “physical sciences” of warfare (…) it will
go through understanding, influence or exercise of control within the “human domain”.”56
Leveraging social sciences will be central to the development of the Human Domain Plan of
Operations. It will support combat operations by providing potential courses of action for
the whole surrounding Human Environment, including enemy forces, and by determining
key human elements such as the cognitive centre of gravity and the desired behaviour as the end
state. Understanding the target’s goals, strengths, and vulnerabilities is paramount to an operation aiming at enduring strategic outcomes.
The deeper the understanding of the human environment, the greater will be the freedom of
action and relative advantage.
Psychology and social sciences have always been essential to warfare, and as warfare moves
away from kinetic operations, they may become the new game changer. Psychology, for
instance, can help to understand the personal motives of terrorist groups and the social dynamics that make them so attractive to the (mostly) young men who join their ranks.
As an example, the picture below depicts a methodology (called Weber) applied to the study
of terrorist groups in the Sahel. It combines Social Sciences and System Engineering in order to
help predict the behaviours of terrorist groups. The tool allows decision-makers to assess the evolution of actors through behavioural patterns according to several criteria and social science parameters, and ultimately to anticipate courses of action.57
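The internal workings of the Weber tool are not public, so the sketch below is purely illustrative of the general idea it describes: analysts score an actor against a set of social-science criteria over successive observation periods, and the tool flags which behavioural patterns are rising or falling. All criteria names and score values here are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical criteria; the real Weber parameters are not public.
CRITERIA = ("recruitment", "territorial_ambition", "external_support")

@dataclass
class Actor:
    name: str
    history: list = field(default_factory=list)  # one dict of scores per observation period

    def observe(self, **scores):
        """Record one period of analyst-assigned scores (0.0-1.0 per criterion)."""
        self.history.append({c: scores.get(c, 0.0) for c in CRITERIA})

    def trend(self):
        """Compare the latest period with the previous one, criterion by criterion."""
        if len(self.history) < 2:
            return {c: "insufficient data" for c in CRITERIA}
        prev, last = self.history[-2], self.history[-1]
        return {c: ("rising" if last[c] > prev[c]
                    else "falling" if last[c] < prev[c]
                    else "stable") for c in CRITERIA}

group = Actor("Group A")
group.observe(recruitment=0.4, territorial_ambition=0.2, external_support=0.5)
group.observe(recruitment=0.7, territorial_ambition=0.2, external_support=0.3)
print(group.trend())
# → {'recruitment': 'rising', 'territorial_ambition': 'stable', 'external_support': 'falling'}
```

A real system would of course weight criteria, model interactions between actors, and project courses of action rather than simple per-criterion trends; the point is only that qualitative social-science judgements become structured, comparable data.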
The analysis, turned towards understanding the other in the broad sense (and often the non-Western other), cannot do without anthropology. Social and cultural anthropology is a formidable
tool for the analyst, and the best way to avoid yielding to one of the most common biases of intelligence, ethnocentrism, i.e. the inability to get rid of the mental structures and representations of
one’s own cultural environment.
Cognitive sciences can be leveraged to enhance training at every level, especially in order to
improve the ability to make decisions in complex tactical situations. Cognitive sciences can be
employed in the creation of highly efficient and flexible training programs that can respond to
fast-changing problems.
Legal and ethical aspects
Legal aspects
The development, production and use of cognitive technologies for military purposes raise
questions as to whether, and to what extent, existing legal instruments apply: how are the
relevant provisions to be interpreted and applied in light of the specific technological
characteristics, and to what extent can international law sufficiently respond to the legal challenges involved with the advent of such technologies?
It is essential to ensure that international law and accepted norms will be able to take into account the development of cognitive technologies. Specifically, to ensure that such technologies are capable of being used in accordance with applicable law and accepted international norms. NATO, through its various apparatus, should work at establishing a common understanding of how cognitive weapons might be employed to be compliant with the law and accepted international norms.
Equally, NATO should consider how the Law of Armed Conflict (LoAC) would apply to the
use of cognitive technologies in any armed conflict, in order to ensure that any future development has a framework within which to work. Full compliance with the rules and principles of LoAC is essential.
Given the complexity and contextual nature of the potential legal issues raised by cognitive
technologies and techniques, and the constraints associated with this NATO-sponsored study,
further work will be required to analyse this issue fully. It is therefore recommended that
such work be conducted by an appropriate body and that NATO Nations collaborate in establishing a set of norms and expectations about the use and development of cognitive technologies. The immediate focus should be how they might be used within extant legal frameworks and the Law of Armed Conflict.
Ethics
This area of research – human enhancement and cognitive weapons – is likely to be the subject
of major ethical and legal challenges, but we cannot afford to be on the back foot when international actors are already developing strategies and capabilities to employ them. These challenges need to be considered: not only might human enhancement technologies be deliberately used for malicious purposes, but they may also have implications for the ability of military personnel to respect the law of armed conflict.
It is equally important to recognise the potential side effects (such as speech impairment, memory impairment, increased aggression, depression and suicide) of these technologies. For example, if any cognitive enhancement technology were to undermine the capacity of a subject to comply with the law of armed conflict, it would be a source of very serious concern.
The development and use of cognitive technologies present numerous ethical challenges as
well as ethical benefits, such as recovery from Post-Traumatic Stress Disorder (PTSD). Policy
makers should take these challenges seriously as they develop policy about cognitive technologies, explore the issues in greater depth, and determine whether other ethical issues may arise as this, and other related, technology develops.
Recommendations for NATO
The need for cooperation
Because the objective of Cognitive Warfare is to harm societies and not only the military, this type of warfare resembles “shadow wars” and requires a whole-of-government approach
to warfare. As previously stated, the modern concept of war is not about weapons but about
influence. To shape perceptions and control the narrative during this type of war, the battle will have to be fought in the cognitive domain with a whole-of-government approach at the national level. This will require improved coordination between the use of force and the other levers of power across government. This could mean changes to how defence is resourced, equipped, and organised in order to offer military options below the threshold of armed conflict and improve the military contribution to resilience.
For NATO, the development of actions in the cognitive domain also requires sustained cooperation between Allies in order to ensure overall coherence, build credibility and allow a concerted defence.
Within the military, expertise in anthropology, ethnography, history, psychology and other
areas will be required more than ever, in order, for example, to derive qualitative insights from quantitative data. In other words, if the declaration of a new field of combat consecrates the new importance of humans, it is more about rethinking
the interaction between the hard sciences and the social sciences. The rise of cognitive technologies has endowed humans with superior analysis and accuracy. In order to deliver timely
and robust decisions, it will not be a question of relying solely on human cognitive capacities
but of cross-engineering systems with social sciences (sociology, anthropology, criminology,
political science…) in order to face complex and multifaceted situations. The modelling of
human dynamics as part of what is known as Computational Social Science will allow the use
of knowledge from social sciences relating to the behaviour of social entities, whether enemies or allies. By mapping the human environment, strategists and key military leaders will
be provided with reliable information to decide on the right strategy.
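As one deliberately simplified illustration of what Computational Social Science looks like in code, the sketch below implements the bounded-confidence opinion model of Deffuant et al. (not a technique named in this report): agents only influence one another when their opinions are already close, so an initially uniform population fragments into a few stable clusters. All parameter values are arbitrary choices for the demonstration.

```python
import random

def deffuant(opinions, epsilon=0.2, mu=0.5, steps=10000, seed=0):
    """Bounded-confidence opinion dynamics: two random agents interact and
    move their opinions closer only if they already differ by less than the
    confidence threshold epsilon; otherwise they ignore each other."""
    rng = random.Random(seed)
    ops = list(opinions)
    n = len(ops)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(ops[i] - ops[j]) < epsilon:
            shift = mu * (ops[j] - ops[i])
            ops[i] += shift   # i moves toward j
            ops[j] -= shift   # j moves toward i
    return ops

population = [i / 99 for i in range(100)]         # opinions spread over [0, 1]
result = deffuant(population)
clusters = sorted({round(o, 1) for o in result})  # surviving opinion clusters
print(clusters)
```

Even this toy model reproduces a qualitative phenomenon relevant to the report's argument: when people only listen to those already near their own position, a society polarises into a small number of camps, and lowering epsilon (narrower "confidence") increases the number of camps.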
Definition of the Human Domain
As defined by NATO’s major adversaries, the field of perceptions is an abstract space in which several elements are mixed: understanding of oneself (strengths and weaknesses) and of the other (adversary, enemy, human environment), the psychological dimension, intelligence collection, the search for ascendancy (influence, seizing and keeping the initiative), and the capacity to reduce the will of the adversary.
Within the context of multi-domain operations, the human domain is arguably the most important domain, but it is often the most overlooked. Recent wars have shown an inability not only to
achieve strategic goals (e.g. in Afghanistan) but also to understand foreign and complex
human environments.
Cognitive warfare was forced upon the Western liberal democracies by challenging international actors who have strategised to avoid military confrontation, blurring the
line between peace and war by targeting the weakest element: humans. CW, which includes
the increasing use of NBICs for military purposes, may provide a sure path to military dominance in the near future.
“Military power is of course one essential segment of security. But global security refers to a
broad range of threats, risks, policy responses that span political, economic, societal, health
(including cognitive health!) and environmental dimensions, none of these being covered by
your current domains of operations! Some international actors already use weapons that precisely target these dimensions, while keeping their traditional kinetic arsenal in reserve as
long as they possibly can. NATO, if it wishes to survive, has to embrace this continuum and
claim as its responsibility, together with its allies to, seamlessly, achieve superiority all across
it.”58
Raising awareness among Allies
While advances in technology have always resulted in changes in military organisations and
doctrines, the rapid advancements in technology, in particular in brain science and NBIC,
should force NATO to take action and give greater consideration to the emerging
threat that Cognitive Warfare represents. Not all NATO nations have recognised this
changing character of conflicts. Declaring the Human as the sixth domain of operations is a way
to raise awareness among the NATO Nations. NATO should consider further integrating
human situational awareness into the traditional situational awareness processes of the Alliance.
Anticipating the trends
There is evidence that adversaries have already understood the potential of developing
human-related technologies. Declaring the Human Domain as a sixth domain of operations
has the potential to reveal vulnerabilities which could otherwise amplify rapidly. It
is not too late to face the problem and help maintain dominance in the field of cognition.
The Human Domain of operations could tentatively be defined as “the sphere of interest in
which strategies and operations can be designed and implemented that, by targeting the
cognitive capacities of individuals and/or communities with a set of specific tools and
techniques, in particular digital ones, will influence their perception and tamper with
their reasoning capacities, hence gaining control of their decision making, perception
and behaviour levers in order to achieve desired effects.”
Delays in declaring the Human Domain a domain of operations may lead to fighting the last
war.
Given that declaring a new domain of operations is a lengthy process, and given
the sensitivity of the topic, NATO needs to move fast in focusing on political and military responses
while the capabilities of, and threats posed by, our opponents are still limited.
Finally, ethical issues must be addressed. Since there is no agreed international legal
framework in the field of neurosciences, NATO may play a role in pushing to establish an
international legal framework that meets the NATO Nations’ ethical standards.
Accelerating information sharing
Accelerated information sharing among Alliance members may speed up interoperability
and assure coherence across multi-domain operations. Information sharing
may also assist some nations in catching up in this area. In particular, surveillance of ongoing
international activities in brain science, and their potential dual-use in military and
intelligence operations should be undertaken and shared between Allies along with
identification and quantification of current and near-term risks and threats posed by such
enterprises.
Establishing DOTMLPFI components upstream
The first step is to define the “human domain” in military doctrine and use the definition to conduct a full spectrum of capability development analysis, optimising the military for the most likely 21st-century contingencies. Since the Human Domain complements the five others, each capability development effort should include the specificities of modern threats,
including those related to cognitive warfare and, more generally, the sixth domain of
operations. The Human Domain is not an end in itself but a means to achieve our strategic
objectives and to respond to a type of conflict that the military is not accustomed to dealing with.
Dedication of resources for developing and sustaining NATO Nations capabilities to prevent
escalation of future risk and threat by:
1) continued surveillance;
2) organisational and systemic preparedness;
3) coherence in any/all entities necessary to remain apace with, and/or ahead of, tactical and
strategic competitors’ and adversaries’ capabilities in this space.
Impact on Warfare Development
By essence, defining a new domain of operations and all the capabilities and concepts that go
along with it, is part of ACT’s mission.
ACT should lead a further in-depth study with a focus on:
• Advancements in brain science initiatives that may be developed and used for non-kinetic and kinetic engagements.
• Different ethical systems that govern neuroscientific research and development. This
will mandate a rigorous, more granular, and dialectical approach to negotiate and resolve issues and domains of ethical dissonance in multi- and international biosecurity
discourses.
• Ongoing review and evaluation of national intellectual property laws, both in relation
to international law(s), and in scrutiny of potential commercial veiling of dual-use enterprises.
• Identification and quantification of current and near-term risks and threats posed by
such enterprise(s)
• Better recognition of the use of social and human sciences, in relation with the “hard” sciences, to better understand the human environment (internal and external)
• Inclusion of the cognitive dimension in every NATO exercise by leveraging new tools and
techniques such as immersive technologies
Along with those studies, anticipating the first response (such as the creation of a new NATO
COE, or rethinking and adapting the structure by strengthening branches as required) and defining a
commonly agreed taxonomy (Cognitive Dominance/Superiority, Cognitive Centre of Gravity,
etc.) will be key tasks for ACT to help NATO keep its military edge.
Conclusion
Failing to thwart the cognitive efforts of NATO’s opponents would condemn Western liberal societies to lose the next war without a fight. If NATO fails to build a sustainable and proactive basis for progress
in the cognitive domain, it may have no other option than kinetic conflict. Kinetic capabilities may dictate a tactical or operational outcome, but victory in the long run will remain solely dependent on the ability to influence, affect, change or impact the cognitive domain.
Because the factors that affect the cognitive domain are involved in all aspects of human
society, through the areas of will, concept, psychology and thinking among others, this
particular kind of warfare penetrates all fields of society. It can be foreseen that future
information warfare will start in the cognitive domain, to seize the political and
diplomatic strategic initiative, and it will also end in the cognitive realm.
Preparing for high-intensity warfare remains highly relevant, but the international actors
presenting NATO with specific strategic security challenges have strategised to avoid
confronting NATO in kinetic conflicts and have chosen an indirect form of warfare. Information
plays a key role in this indirect form of warfare, but the advent of cognitive warfare is
different from simple Information Warfare: it is a war through information, the real target
being the human mind and, beyond it, the human per se.
Moreover, progress in NBIC makes it possible to extend propaganda and influencing strategies. The sophistication of NBIC-fuelled hybrid attacks today represents an unprecedented
level of threat inasmuch as they target the most vital infrastructure everyone relies on: the human mind.59
Cognitive warfare may well be the missing element that allows the transition from military
victory on the battlefield to lasting political success. The human domain might well be the decisive domain wherein multi-domain operations achieve the commander’s intended effect. The
first five domains can deliver tactical and operational victories; only the human domain can achieve
the final and full victory. “Recognising the human domain and generating concepts and capabilities to gain advantage therein would be a disruptive innovation.”60
“Today’s progress in nanotechnology, biotechnology, information technology and cognitive
science (NBIC), boosted by the seemingly unstoppable march of a triumphant troika made of
Artificial Intelligence, Big Data and civilisational “digital addiction”, has created a much more
ominous prospect: an embedded fifth column, where everyone, unbeknownst to him or her, is
behaving according to the plans of one of our competitors.” August Cole, Hervé Le Guyader,
NATO’s 6th Domain

Bibliography and Sources

Essays
August Cole, Hervé Le Guyader, NATO 6th Domain of Operations, September 2020
Dr. James Giordano, Emerging Neuroscience and Technology (NeuroS/T): Current and Near-Term
Risks and Threats to NATO Biosecurity, October 2020

Article
Nicolas Israël and Sébastien-Yves Laurent, “Analysis Facing Worldwide Jihadist Violence and Conflicts. What to do?”, September 2020

Online Collaboration with Johns Hopkins University
“Cognitive Biotechnology, Altering the Human Experience”, Sep 2020
“Cognitive Warfare, an attack on truth and thoughts”, Sep 2020
Under the direction of Professor Lawrence Aronhnime. Contributors: Alonso Bernal, Cameron Carter, Melanie Kemp, Ujwal Arunkumar Taranath, Klinzman Vaz, Ishpreet Singh, Kathy Cao, Olivia Madreperla

Experiments
DTEX (Disruptive Technology Experiment) – 7 October 2020
NATO Innovation Hub Disruptive Technology Experiment (DTEX) on disinformation, under the direction of Girish Sreevatsan Nandakumar (Old Dominion University)

Hackathon
“Hacking the Mind”, run by Dr. Kristina Soukupova and the Czech Republic Defense and Security Innovation Hub, October 2020.
https://www.hackthemind.cz
Annex 1
Nation State Case Study 1: The weaponisation of neurosciences in China
As described in the Five-Year Plans (FYPs) and other national strategies, China has identified and acknowledged the technical, economic, medical, military, and political value of the brain sciences, and has initiated efforts to expand its current neuroS/T programs. China utilises broader strategic planning horizons than other nations and attempts to combine efforts from government, academic, and commercial sectors (i.e., the “triple helix”) to accomplish cooperation and centralisation of national agendas. This coordination enables research projects and objectives to be used for a range of applications and outcomes (e.g., medical, social, military).
As noted by Mu-Ming Poo, director of China’s Brain Project, China’s growing aging population is contributing to an increasing incidence and prevalence of dementia and other neurological diseases. In its most recent FYP, China addressed the economic and productivity concerns fostered by this aging population, with a call to develop medical approaches for neurological disorders and to expand research infrastructure in neuroS/T.
This growing academic environment has been leveraged to attract and solicit multi-national
collaboration. In this way, China is affecting international neuroS/T through
1) research tourism;
2) control of intellectual property;
3) medical tourism;
4) and influence in global scientific thought. While these strategies are not exclusive to neuroS/T, they may be more opportunistic in the brain sciences because the field is new and expanding rapidly, and its markets are growing and being defined by both share- and stakeholder interests.
Research tourism involves strategically recruiting renowned, experienced scientists (mostly
from Western countries), as well as junior scientists to contribute to and promote the growth,
innovation, and prestige of Chinese scientific and technological enterprises. This is apparent
in two primary efforts. First, initiatives such as the Thousand Talents Program (launched in
2008) and other programs (e.g., Hundred Person Program, Spring Light Program, Youth
Thousand Talents Program, etc.) aim to attract foreign researchers, nurture and sustain domestic talent, and bring back Chinese scientists who have studied or worked abroad. Further, China’s ethical research guidelines are, in some domains, somewhat more permissive than those in the West (e.g., unrestricted human and/or non-human primate experimentation), and the director of China’s Brain Project, Mu-Ming Poo, has stated that this capability to engage research that may not be (ethically) viable elsewhere may (and should) explicitly attract international scientists to conduct research in China.
Second, China continues to engage with leading international brain research institutions to
foster greater cooperation. These cooperative and collective research efforts enable China to
achieve a more even “playing field” in the brain sciences.
China leverages intellectual property (IP) policy and law to advance (and veil) neuroS/T and other biotechnologies in several ways. First, via exploitation of its patent process by creating a “patent thicket”. The Chinese patent system focuses on the end-utility of a product (e.g., a specific neurological function in a device) rather than emphasising the initial innovative idea, in contrast to the U.S. system. This enables Chinese companies and/or institutions to copy or outright usurp foreign patents and products. Moreover, Chinese patent laws allow international research products and ideas to be used in China “for the benefit of public health,” or for “a major technological advancement.” Second, the aforementioned coordination of brain science institutions and the corporate sector establishes compulsory licensing under Chinese IP and patent laws. This strategy (i.e., “lawfare”) allows Chinese academic and corporate enterprises to have economic and legal support, while reciprocally enabling China to direct national research agendas and directives through these international neuroS/T collaborations. China enforces its patent and IP rights worldwide, which can create market saturation of significant and innovative products, and could create international dependence upon Chinese neuroS/T. Further, Chinese companies have been heavily investing in knowledge industries, including artificial intelligence enterprises and academic book and journal partnerships. For example, Tencent established a partnership with Springer Nature to engage in various educational products. This will allow a significant stake in future narratives and dissemination of scientific and technological discoveries.
Medical tourism is the explicit or implicit attraction and solicitation of international individuals
or groups to seek interventions that are either only available, or more affordable, in a particular locale. Certainly, China has a presence in this market, and at present available procedures range from the relatively sublime, such as using deep brain stimulation to treat drug addiction, to the seemingly “science-fictional”, such as the recently proposed body-to-head transplant to be conducted at Harbin Medical University in collaboration with Italian neurosurgeon Sergio Canavero. China can advance and develop areas of neuroS/T in ways that other
countries cannot or will not, by homogenising a strongly integrated “bench to bedside”
capability and the use of non-Western ethical guidelines.
China may specifically target treatments for diseases that may have a high global impact,
and/or could offer procedures that are not available in other countries (for either socio-political or ethical reasons). Such medical tourism could create an international dependence on Chinese markets as individuals become reliant on products and services available only in China, in addition to those that are “made in China” for ubiquitous use elsewhere. China’s growing biomedical industry, ongoing striving for innovation, and expanding manufacturing capabilities have positioned its pharmaceutical and technology companies to prominence in world markets. Such positioning – and the somewhat permissive ethics that enable particular aspects and types of experimentation – may be seductive enough to draw international scientists to engage in research and/or commercial biomedical production within China’s sovereign borders.
Through these tactics of economic infiltration and saturation, China can create power hierarchies that induce strategically latent “bio-political” effects that influence real and perceived
positional dominance of global markets.
China is not the only country with differing ethical codes for governing research. Of note,
Russia has devoted, and continues to devote, resources to neuroS/T and, while not uniformly allied with China, has developed projects and programs that enable the use of neurodata for non-kinetic and/or kinetic applications. Such projects, programs, and operations can
be conducted independently and/or collaboratively to exercise leverage over competitors
and adversaries so as to achieve greater hegemony and power.
Therefore, NATO and its international allies must:
4) recognise the reality of other countries’ science and technological capabilities;
5) evaluate what current and near-term trends portend for global positions, influence, and
power;
6) and decide how to address differing ethical and policy views on innovation, research, and
product development.
Annex 2
Nation State Case Study 2: The Russian National Technology Initiative61
Russian President Vladimir Putin has explicitly stated his intent to implement an aggressive modernisation plan via the National Technology Initiative (NTI). Designed to grant an overmatch advantage in both commercial and military domains against Russia’s current and near-term future key competitors, the NTI has been viewed as somewhat hampered by the nation’s legacy of government control, unchanging economic complexity, bureaucratic inefficiency and an overall lack of transparency. However, there are apparent disparities between such assessments of the NTI and its capabilities, and Russia’s continued invention and successful deployment of advanced technologies.
Unlike the overt claims and predictions made by China’s scientific and political communities about the development and exercise of neuroS/T to re-balance global power, explication and demonstration of Russian efforts in neuroS/T tend to be subtle, and detailed information about the scope and extent of such enterprise and activity is, for the most part, restricted to the classified domain. In general, Russian endeavours in this space tend to build upon prior work conducted under the Soviet Union and, while not broad in focus, have gained relative sophistication and capability in particular areas with high applicability in non-kinetic disruptive engagements. Russia’s employment of weaponised information and neurotropic agents has remained rather low-key, if not clandestine (and perhaps covert), often entails nation-state or non-state actors as proxies, and is veiled by a successful misinformation campaign that prevents accurate assessment of its existing and developing science and technologies.
Military science and technology efforts of the USSR were advanced and sustained primarily by the extensive military-industrial complex which, from the mid-1970s through the 1980s, is estimated to have employed up to twenty percent of the workforce. This enabled the USSR to become a world leader in science and technology, ranked by the U.S. research community as second in the world for clandestine S&T programs (only because the overall Soviet system of research and development (R&D) was exceptionally inefficient, even within the military sector). The collapse of the USSR ended the Soviet military-industrial complex, resulting in significant decreases in overall spending and state support for R&D programs. Any newly implemented reforms of the post-Soviet state were relatively modest, generating suboptimal R&D results at best. During this time, Russian R&D declined by approximately 60% and, aside from the Ministries’ involvement with the military sector, there was a paucity of direct cooperation between Russian R&D institutions and operational S&T enterprises. This limited interaction was further compounded by a lack of resources, an inability to bring new technology to markets, absent protections for intellectual property, and a “brain drain” exodus of talented researchers to nations with more modern, cutting-edge programs offering better pay and opportunities for advancement.
Recognising the inherent problems with the monoculture of the Russian economic and S&T ecosystems, the Putin government initiated a process of steering Russia toward more lucrative, high-tech enterprises. The NTI is ambitious, with goals to fully realise a series of S&T/R&D advancements by 2035. The central objective of the NTI is to establish “the program for creation of fundamentally new markets and the creation of conditions for global technological leadership of Russia by 2035.” To this end, NTI experts and the Agency for Strategic Initiatives (ASI) identified nine emerging high-tech markets for prime focus and penetration, including neuroscience and technology (i.e., what the ASI termed “NeuroNet”). Substantive investment in this market is aimed at overcoming the post-Soviet “resource curse” by capitalising on changes in global technology markets – and engagement sectors – to expand both economic and military/intelligence priorities and capabilities. According to the ASI, NeuroNet is focused upon “distributed artificial elements of consciousness and mentality”, with Russia’s prioritisation of neuroS/T being a key factor in influence operations directed at global economies and power. Non-kinetic operations represent the most viable intersection and exercise of these commercial, military, and political priorities, capabilities, and foci of global influence and effect(s).
Notes
1. Robert P. Kozloski, https://www.realcleardefense.com/articles/2018/02/01/knowing_yourself_is_key_in_cognitive_warfare_112992.html, February 2018.
2. Stuart A. Green, “Cognitive Warfare”, The Augean Stables, Joint Military Intelligence College, July 2008, http://www.theaugeanstables.com/wp-content/uploads/2014/04/Green-Cognitive-Warfare.pdf
3. Clint Watts (2018), Messing with the Enemy, HarperCollins.
4. As defined by Wikipedia, a sock puppet or sockpuppet is an online identity used for purposes of deception. It usually refers to the Russian online activism during the 2016 US electoral campaign. https://en.wikipedia.org/wiki/Sock_puppet_account
5. https://www.belfercenter.org/sites/default/files/2019-11/CognitiveWarfare.pdf
6. Dr Zac Rogers, Mad Scientist 158 (July 2019), https://madsciblog.tradoc.army.mil/158-in-the-cognitive-war-the-weapon-is-you/
7. August Cole, Hervé Le Guyader, NATO 6th Domain of Operation, 2020.
8. Ibid.
9. Alicia Wanless, Michael Berk (2017), Participatory Propaganda: The Engagement of Audiences in the Spread of Persuasive Communications, https://www.researchgate.net/publication/329281610_Participatory_Propaganda_The_Engagement_of_Audiences_in_the_Spread_of_Persuasive_Communications
10. Jacques Ellul (1962), Propaganda, Edition Armand Colin.
11. Matt Chessen, The MADCOM Future: How AI Will Enhance Computational Propaganda, The Atlantic Council, September 2017.
12. https://en.wikipedia.org/wiki/Behavioral_economics
13. Shoshana Zuboff (2019), The Age of Surveillance Capitalism, PublicAffairs.
14. Peter W. Singer, Emerson T. Brooking (2018), LikeWar: The Weaponisation of Social Media, HMH Edition, p. 95.
15. Victoria Fineberg (August 2014), “Behavioural Economics of Cyberspace Operations”, Journal of Cyber Security and Information Systems, Vol. 2.
16. Shoshana Zuboff (2019), The Age of Surveillance Capitalism, PublicAffairs.
17. Michael J. Mazarr (July 2020), “Virtual Territorial Integrity: The Next International Norm”, Survival: Global Politics and Strategy, IISS.
18. Bernard Claverie and Barbara Kowalczuk, Cyberpsychology, Study for the Innovation Hub, July 2018.
19. Dr Zac Rogers, Mad Scientist 158 (July 2019), https://madsciblog.tradoc.army.mil/158-in-the-cognitive-war-the-weapon-is-you/
20. Haselton MG, Nettle D, Andrews PW (2005), “The evolution of cognitive bias”, in Buss DM (ed.), The Handbook of Evolutionary Psychology.
21. Wikipedia lists more than 180 different cognitive biases: https://en.wikipedia.org/wiki/Cognitive_bias
22. Lora Pitman (2019), “The Trojan Horse in Your Head: Cognitive Threats and How to Counter Them”, ODU Digital Commons.
23. Robert P. Kozloski, https://www.realcleardefense.com/articles/2018/02/01/knowing_yourself_is_key_in_cognitive_warfare_112992.html, February 2018.
24. Peter W. Singer, Emerson T. Brooking (2018), LikeWar: The Weaponisation of Social Media, HMH Edition, p. 165.
25. Dominique Moïsi (2010), The Geopolitics of Emotion, Anchor Edition.
26. Christophe Jacquemart (2012), Fusion Froide Edition.
27. B.J. Fogg (2003), Persuasive Technology: Using Computers to Change What We Think and Do, Morgan Kaufmann Publishers.
28. https://mwi.usma.edu/mwi-video-brain-battlefield-future-dr-james-giordano/
29. Maryanne Wolf (2007), Proust and the Squid: The Story and Science of the Reading Brain, HarperCollins.
30. Bernard Stiegler (2019), https://www.observatoireb2vdesmemoires.fr/publications/video-minute-memoire-vers-une-utilisation-raisonnee-du-big-data
31. https://pphr.princeton.edu/2017/04/30/are-video-games-really-mindless/
32. “Never has a medium been so potent for beauty and so vulnerable to creepiness. Virtual reality will test us. It will amplify our character more than other media ever have.” Jaron Lanier (2018), Dawn of the New Everything: Encounters with Reality and Virtual Reality, Picador Edition.
33. Philosopher Thomas Metzinger: https://www.newscientist.com/article/2079601-virtual-reality-could-be-an-ethical-minefield-are-we-ready/
34. Gayannée Kedia, Lasana Harris, Gert-Jan Lelieveld and Lotte van Dillen (2017), From the Brain to the Field: The Applications of Social Neuroscience to Economics, Health and Law.
35. Pr. Li-Jun Hou, Director of People’s Liberation Army 202nd Hospital (May 2018), Chinese Journal of Traumatology.
36. For more on the definition of “dual use” in neuroS/T, see Dr. James Giordano’s essay, October 2020.
37. National Research Council and National Academy of Engineering (2014), Emerging and Readily Available Technologies and National Security: A Framework for Addressing Ethical, Legal, and Societal Issues.
38. Ibid.
39. Giordano J. (2014), “Intersections of ‘big data’, neuroscience and national security: Technical issues and derivative concerns”, in Cabayan H et al. (eds.), A New Information Paradigm? From Genes to “Big Data”, and Instagrams to Persistent Surveillance: Implications for National Security, pp. 46-48, Department of Defense; Strategic Multilayer Assessment Group, Joint Staff/J-3/Pentagon Strategic Studies Group.
40. Biological and Chemical Weapons Conventions.
41. DeFranco JP, DiEuliis D, Bremseth LR, Snow JJ, Giordano J. (2019), “Emerging technologies for disruptive effects in non-kinetic engagements”, HDIAC Currents 6(2): 49-54.
42. Parag Khanna, Connectography: Mapping the Future of Global Civilisation (New York: Random House, 2016).
43. Megan Bell, An Approachable Look at the Human Domain and Why We Should Care (2019), https://othjournal.com/2019/06/17/an-approachable-look-at-the-human-domain-and-why-we-should-care/
44. Vladimir Vasilyevich Karyakin (2012), “The Era of a New Generation of Warriors – Information and Strategic Warriors – Has Arrived”, Nezavisimaya Gazeta Online (Moscow, Russia), in Russian, April 22, 2011, FBIS SOV.
45. Giles, Sherr and Seaboyer (2018), Russian Reflexive Control, Royal Military College of Canada, Defence Research and Development Canada.
46. Elsa B. Kania, Prism, Vol. 8, No. 3, 2019.
47. Nathan Beauchamp-Mustafaga, China Brief (September 2019), https://jamestown.org/program/cognitive-domain-operations-the-plas-new-holistic-concept-for-influence-operations/
48. Ibid.
49. Hai Jin, Li-Jun Hou, Zheng-Guo Wang (May 2018), “Military Brain Science – How to Influence Future Wars”, Chinese Journal of Traumatology.
50. August Cole, Hervé Le Guyader, NATO’s 6th Domain, September 2020.
51. Maj. Gen. Robert H. Scales (2006), http://armedforcesjournal.com/clausewitz-and-world-war-iv/
52. Alan Beyerchen, “Clausewitz, Nonlinearity and the Unpredictability of War”, International Security, 17:3 (Winter 1992).
53. August Cole, Hervé Le Guyader, NATO’s 6th Domain, September 2020.
54. Nicolas Israël and Sébastien-Yves Laurent, “Analysis Facing Worldwide Jihadist Violence and Conflicts. What to Do?”, Article for the Innovation Hub, September 2020.
55. https://www.psychologytoday.com/us/blog/head-strong/201408/psychology-and-less-lethal-military-strategy
56. Generals Odierno, Amos and McRaven, Strategic Landpower, NPS Publication, 2014.
57. Nicolas Israël and Sébastien-Yves Laurent, “Analysis Facing Worldwide Jihadist Violence and Conflicts. What to Do?”, Article for the Innovation Hub, September 2020.
58. August Cole, Hervé Le Guyader, NATO 6th Domain of Operations, September 2020.
59. Hervé Le Guyader, The Weaponisation of Neurosciences, Innovation Hub Warfighting Study, February 2020.
60. Ibid.
61. Ibid.

Digital DNA through your digital twin in the Sentient World Simulation

Perhaps your real life is so rich you don’t have time for another.

Even so, the US Department of Defense (DOD) may already be creating a copy of you in an alternate reality to see how long you can go without food or water, or how you will respond to televised propaganda.

The DOD is developing a parallel to Planet Earth, with billions of individual “nodes” to reflect every man, woman, and child this side of the dividing line between reality and AR.

Called the Sentient World Simulation (SWS), it will be a “synthetic mirror of the real world with automated continuous calibration with respect to current real-world information”, according to a concept paper for the project.

“SWS provides an environment for testing Psychological Operations (PSYOP),” the paper reads, so that military leaders can “develop and test multiple courses of action to anticipate and shape behaviors of adversaries, neutrals, and partners”.

SWS also replicates financial institutions, utilities, media outlets, and street corner shops. By applying theories of economics and human psychology, its developers believe they can predict how individuals and mobs will respond to various stressors.

SEAS can display regional results for public opinion polls, the distribution of retail outlets in urban areas, and the level of disorganisation of local economies, which may point to potential areas of civil unrest.

Yank a country’s water supply. Stage a military coup. SWS will tell you what happens next.
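The article gives no detail on how SWS actually computes such outcomes. Purely as an illustration of the agent-based style of modelling it describes, the following toy sketch simulates a population reacting to a water-supply stressor; the agents, the stress variable, and the thresholds are all invented for this example and are not drawn from SEAS or SWS:

```python
import random

def simulate_unrest(population, water_supply, days, seed=0):
    """Toy agent-based model: each agent carries a 'stress' level;
    water scarcity raises it, and agents whose stress crosses a
    threshold are counted as unrest-prone on each simulated day."""
    rng = random.Random(seed)
    agents = [{"stress": rng.uniform(0.0, 0.3)} for _ in range(population)]
    unrest_by_day = []
    for _ in range(days):
        scarcity = max(0.0, 1.0 - water_supply)  # 0 = plentiful, 1 = cut off
        for agent in agents:
            # Scarcity nudges every agent's stress upward, capped at 1.0.
            agent["stress"] = min(1.0, agent["stress"] + scarcity * rng.uniform(0.0, 0.1))
        unrest_by_day.append(sum(a["stress"] > 0.8 for a in agents))
    return unrest_by_day

# "Yank a country's water supply" vs. business as usual:
calm = simulate_unrest(population=1000, water_supply=1.0, days=30)
crisis = simulate_unrest(population=1000, water_supply=0.2, days=30)
print("day-30 unrest-prone agents:", calm[-1], "vs", crisis[-1])
```

In a real system each node would carry far richer state (the article mentions economics, psychology, and media exposure); the point here is only the shape of the loop: perturb an input, update every node, and read off an aggregate indicator.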

“The idea is to generate alternative futures with outcomes based on interactions between multiple sides,” said Purdue University professor Alok Chaturvedi, co-author of the SWS concept paper.

Chaturvedi directs Purdue’s laboratories for Synthetic Environment for Analysis and Simulations, or SEAS – the platform underlying SWS. Chaturvedi also makes a commercial version of SEAS available through his company, Simulex, Inc.

SEAS users can visualise the nodes and scenarios in text boxes and graphs, or as icons set against geographical maps.

Corporations can use SEAS to test the market for new products, said Chaturvedi. Simulex lists the pharmaceutical giant Eli Lilly and defense contractor Lockheed Martin among its private sector clients.

The US government appears to be Simulex’s number one customer, however. And Chaturvedi has received millions of dollars in grants from the military and the National Science Foundation to develop SEAS.

Chaturvedi is now pitching SWS to DARPA and discussing it with officials at the US Department of Homeland Security, where he said the idea has been well received, despite the thorny privacy issues for US citizens.


In fact, Homeland Security and the Defense Department are already using SEAS to simulate crises on the US mainland.

The Joint Innovation and Experimentation Directorate of the US Joint Forces Command (JFCOM-J9) in April began working with Homeland Security and multinational forces over “Noble Resolve 07”, a homeland defense experiment.

SEAS (as will SWS) provides figures for specific economic sectors, and helps military, intel and marketing people visualise their global connections. Users can vary export and import figures for manufactured goods, for example, to gauge the potential impacts on other sectors.

In August, the agencies will shift their crisis scenarios from the East Coast to the Pacific theatre.

JFCOM-J9 completed another test of SEAS last year. Called Urban Resolve, the experiment projected warfare scenarios for Baghdad in 2015, eight years from now.

JFCOM-J9 is now capable of running real-time simulations for up to 62 nations, including Iraq, Afghanistan, and China. The simulations gobble up breaking news, census data, economic indicators, and climatic events in the real world, along with proprietary information such as military intelligence.

Military and intel officials can introduce fictitious events into the simulations (a spike in unemployment, for example) to gauge their destabilising effects on a population.

Officials can also “inject an earthquake or a tsunami and observe their impacts (on a society)”, Chaturvedi added.

Jim Blank, modelling and simulation division chief at JFCOM-J9, declined to discuss the specific routines military commanders are running in the Iraq and Afghanistan computer models. He did say SEAS might help officers determine where to position snipers in a city square, or to envision scenarios that might emerge from widespread civil unrest.

SEAS helps commanders consider the multitude of variables and outcomes possible in urban warfare, said Blank.

“Future wars will be asymmetric in nature. They will be more non-kinetic, with the center of gravity being a population.”

The Iraq and Afghanistan computer models are the most highly developed and complex of the 62 available to JFCOM-J9. Each has about five million individual nodes representing things such as hospitals, mosques, pipelines, and people.

The other SEAS models are far less detailed, encompassing only a few thousand nodes altogether, Blank said.

Feeding a whole-Earth simulation will be a colossal challenge.

“(SWS) is a hungry beast,” Blank said. “A lot of data will be required to make this thing even credible.”


Alok Chaturvedi wants SWS to match every person on the planet, one-to-one.

Right now, the 62 simulated nations in SEAS depict humans as composites, at a 100-to-1 ratio.

One organisation has achieved a one-to-one level of granularity for its simulations, according to Chaturvedi: the US Army, which is using SEAS to identify potential recruits.

Chaturvedi insists his goal for SWS is to have a depersonalised likeness for each individual, rather than an immediately identifiable duplicate. If your town census records your birthdate, job title, and whether you own a dog, SWS will generate what Chaturvedi calls a “like someone” with the same stats, but not the same name.
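Chaturvedi does not describe how a “like someone” is actually constructed. As a hedged sketch of the idea, the following keeps the behaviourally relevant statistics from a census record while replacing the name with an opaque, irreversible label; the field names and the salted-hash scheme are this example's assumptions, not anything documented for SWS:

```python
import hashlib

# Census fields retained on the synthetic node (this sketch's choice,
# echoing the birthdate / job title / dog-ownership example above).
QUASI_IDENTIFIERS = ("birth_year", "job_title", "owns_dog")

def like_someone(census_record, salt="sws-demo"):
    """Return a depersonalised node: same stats, not the same name."""
    stats = {k: census_record[k] for k in QUASI_IDENTIFIERS}
    # Opaque label derived from the dropped identity, so repeated imports
    # of the same record map to the same node without storing the name.
    label = hashlib.sha256((salt + census_record["name"]).encode()).hexdigest()[:12]
    return {"node_id": label, **stats}

record = {"name": "Jane Doe", "birth_year": 1980,
          "job_title": "teacher", "owns_dog": True}
node = like_someone(record)
print(node["node_id"], node["job_title"])
```

Note that a salted hash of a name is pseudonymisation, not anonymisation: anyone holding the salt and a list of candidate names can re-link the nodes, which is precisely the kind of privacy concern the article raises.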

Of course, government agencies and corporations can add to SWS whatever personally-identifiable information they choose from their own databases, and for their own purposes.

And with consumers already giving up their personal information regularly to websites such as MySpace and Twitter, it is not a stretch to imagine SWS doing the same thing.

“There may be hooks through which individuals may voluntarily contribute information to SWS,” Chaturvedi said.

SEAS bases its AI “thinking” on the theories of cognitive psychologists and the work of Princeton University professor Daniel Kahneman, one of the fathers of behavioural economics.

Chaturvedi, as do many AR developers, also cites the work of positive psychology guru Martin Seligman (known, too, for his concept of “learned helplessness”) as an influence on SEAS human behaviour models. The Simulex website says, if a bit vaguely, that SEAS similarly incorporates predictive models based upon production, marketing, finance and other fields.

But SWS may never be smart enough to anticipate every possibility, or predict how people will react under stress, said Philip Lieberman, professor of cognitive and linguistic studies at Brown University.

“Experts make ‘correct’ decisions under time pressure and extreme stress that are not necessarily optimum but work,” said Lieberman, who nevertheless said the simulations might be useful for anticipating some scenarios.

JFCOM’s Blank agreed that SWS, which is using computers and code to do cultural anthropology, does not include any “hard science at this point”.

“Ultimately,” said Blank, “the guy to make decision is the commander.”


WILL ARTIFICIAL INTELLIGENCE ENHANCE OR HACK HUMANITY?

“Mind Hacking”: Information in the Cyber Age

 

Watch Yuval Noah Harari speak with Fei-Fei Li, renowned computer scientist and Co-Director of Stanford University’s Human-Centered AI Institute — in a conversation moderated by Nicholas Thompson, WIRED’s Editor-in-Chief. The discussion explores big themes and ideas, including ethics in technology, hacking humans, free will, and how to avoid potential dystopian scenarios. Publication is available under Creative Commons, CC BY-NC-ND 4.0 – https://creativecommons.org/licenses/…. The event was hosted at Stanford in April 2019, and was jointly sponsored by the university’s Humanities Center, McCoy Family Center for Ethics in Society, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

The good old days of Cold War dezinformatsiya are gone. Social media are increasingly relevant in shaping public opinion, but they are just “echo chambers”. Foreign actors with malicious intent can easily exploit this intrinsic feature of social media by manipulating online information in order to influence public opinion. Moreover, cyberspace allows a large degree of anonymity, behind which it is easy to automate propaganda, and cyber attacks may be leveraged to exfiltrate and expose sensitive content or to gain information dominance during military operations, increasing the strategic relevance of the “information space”. Operations in this domain are central to Russia’s strategic security thinking, featuring prominently in its “New Generation War” military doctrine. But the ongoing militarization of cyberspace risks dangerous spillovers into the conventional domain. What can we do to protect our open democracies while preserving a global, free and resilient Internet? The answer is multi-faceted, in as much as CEIW (cyber-enabled information warfare) is an emerging asymmetric threat that forces us to innovate our security approach in many ways.

The Mind and the Machine. On the Conceptual and Moral Implications of Brain-Machine Interaction (Nanoethics, 2009)

Maartje Schermer

Medical Ethics and Philosophy of Medicine, ErasmusMC, Room AE 340, PO Box 2040, 3000 Rotterdam, The Netherlands
Corresponding author: Maartje Schermer, phone +31-107-043062, m.schermer@erasmusmc.nl
Received 5 November 2009; accepted 5 November 2009.

Abstract

Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprosthesis. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our ‘symbolic order’—on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such a (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding distribution and attribution of responsibility.

Keywords: Brain-machine interaction, Brain-computer interfaces, Converging technologies, Cyborg, Deep brain stimulation, Moral responsibility, Neuroethics

Introduction

Within two or three decades our brains will have been entirely unravelled and made technically accessible: nanobots will be able to immerse us totally in virtual reality and connect our brains directly to the Internet. Soon after that we will expand our intellect in a spectacular manner by melting our biological brains with non-biological intelligence. At least that is the prophecy of Ray Kurzweil, futurist, transhumanist and successful inventor of, amongst other things, the electronic keyboard and the voice-recognition system.1 He is not the only one who foresees great possibilities and, what’s more, has the borders between biological and non-biological, real and virtual, and human and machine, disappear with the greatest of ease. Some of these possibilities are actually already here. On 22 June 2004, a bundle of minuscule electrodes was implanted into the brain of the 25-year-old Matthew Nagle (who was completely paralysed due to a high spinal cord lesion) to enable him to operate a computer by means of his thoughts. This successful experiment seems to be an important step on the way to the blending of brains and computers or humans and machines, that Kurzweil and others foresee. With regard to the actual developments in neuroscience and the convergence of neurotechnology with information-, communication- and nanotechnology in particular, it is still unclear how realistic the promises are. The same applies to the moral and social implications of these developments. This article offers a preliminary exploration of this area. The hypothesis is that scientific and technological developments in neuroscience and brain-machine interfacing challenge—and may contribute to shifts in—some of the culturally determined categories and classification schemes (our ‘symbolic order’), such as body, mind, human, machine, free will and responsibility (see the Introduction to this issue: Converging Technologies, Shifting Boundaries).

Firstly I will examine the expectations regarding the development of brain-machine interfaces and the forms of brain-machine interaction that already actually exist. Subsequently, I will briefly point out the moral issues raised by these new technologies, and argue that the debate on these issues will be influenced by the shifts that may take place in our symbolic order—that is, the popular categories that we use in our everyday dealings to make sense of our world—as a result of these developments. It is important to consider the consequences these technologies might have for our understanding of central organizing categories, moral concepts and important values. Section four then focuses on the categories of human and machine: are we all going to become cyborgs? Will the distinction between human and machine blur if more and more artificial components are built into the body and brain? I will argue that the answer depends partly on the context in which this question is asked, and that the concept of the person may be more suitable here than that of the human. Section five is about the distinction between body and mind. I argue that as a result of our growing neuroscientific knowledge and the mounting possibilities for technological manipulation, the mind is increasingly seen as a component of the body, and therefore also more and more in mechanical terms. This puts the concept of moral responsibility under pressure. I will illustrate the consequences of these shifts in concepts and in category-boundaries with some examples of the moral questions confronting us already.

Developments in Brain-Machine Interaction

Various publications and reports on converging technologies and brain-machine interaction speculate heatedly on future possibilities for the direct linkage of the human brain with machines, that is: some form of computer or ICT technology or other. If the neurosciences provide further insight into the precise working of the brain, ICT technology becomes increasingly powerful, the electronics become more refined and the possibilities for uniting silicon with cells more advanced, then great things must lie ahead of us—or so it seems. The popular media, but also serious governmental reports and even scientific literature, present scenarios that are suspiciously like science fiction as realistic prospects: the expansion of memory or intelligence by means of an implanted chip; the direct uploading of encyclopaedias, databases or dictionaries into the brain; a wireless connection from the brain to the Internet; thought reading or lie detection via the analysis of brain activity; direct brain-to-brain communication. A fine example comes from the report on converging technologies issued by the American National Science Foundation:

‘Fast, broadband interfaces directly between the human brain and machines will transform work in factories, control automobiles, ensure military superiority, and enable new sports, art forms and modes of interaction between people. […] New communication paradigms (brain-to-brain, brain-machine-brain, group) could be realized in 10–20 years.’

It is not easy to tell which prospects are realistic, which to a certain extent plausible and which are total nonsense. Some scientists make incredible claims whilst others contradict them again. These claims often have utopian characteristics and seem to go beyond the border between science and science fiction. Incidentally, they are frequently presented in such a way as to create goodwill and attract financial resources. After all, impressive and perhaps, from the scientific point of view, exaggerated future scenarios have a political and ideological function too—they help to secure research funds2 and to create a certain image of these developments, either utopian or dystopian, thus steering public opinion.

Uncertainty about the facts—which expectations are realistic, which exaggerated and which altogether impossible—is great, even amongst serious scientists. Whereas experts in cyberkinetic neurotechnology in the reputable medical journal, The Lancet, are seriously of the opinion that almost naturally-functioning, brain-driven prostheses will be possible, the editorial department of the Dutch doctors’ journal, Medisch Contact, wonders sceptically how many light-years away they are. It is precisely the convergence of knowledge and technology from very different scientific areas that makes predictions so difficult. Although claims regarding future developments sometimes seem incredible, actual functioning forms of brain-machine interaction do in fact exist, and various applications are at an advanced stage of development. Next, I will look at what is currently already possible, or what is actually being researched and developed.

Existing Brain-Machine Interactions

The first category of existing brain-machine interaction is formed by the sensory prostheses. The earliest form of brain-machine interaction is the cochlear implant, also known as the bionic ear, which has been around for about 30 years. This technology enables deaf people to hear again, by converting sound into electrical impulses that are transmitted to an electrode implanted in the inner ear, which stimulates the auditory nerve directly. While there have been fierce discussions about the desirability of the cochlear implant, nowadays they are largely accepted and are included in the normal arsenal of medical technology. In this same category, various research lines are currently ongoing to develop an artificial retina or ‘bionic eye’ to enable blind people to see again.

A second form of brain-machine interaction is Deep Brain Stimulation (DBS). With this technique small electrodes are surgically inserted directly into the brain. These are connected to a subcutaneously implanted neurostimulator, which sends out tiny electrical pulses to stimulate a specific brain area. This technology is used for treatment of neurological diseases such as Parkinson’s disease and Gilles de la Tourette’s syndrome. Many new indications are being studied experimentally, ranging from severe obsessive-compulsive disorders, addictions, and obesity to Alzheimer’s disease and depression. The use of this technique raises a number of ethical issues, like informed consent from vulnerable research subjects, the risks and side-effects, including effects on the patient’s mood and behaviour [].

More spectacular, and at an even earlier stage of development, is the third form of brain-machine interaction, in which the brain controls a computer directly. This technology, called neuroprosthetics, enables people to use thought to control objects in the outside world, such as the cursor of a computer or a robotic arm. It is being developed so that people with a high spinal cord lesion, like Matt Nagel mentioned in the introduction, can act and communicate again. An electrode in the brain receives the electrical impulses that the neurons in the motor cerebral cortex give off when the patient wants to make a specific movement, and sends these impulses to a computer, where they are translated into instructions to move the cursor or a robot that is connected to the computer. This technology offers the prospect that paraplegics or patients with locked-in syndrome could move their own wheelchair with the aid of thought-control, communicate with others through written text or voice synthesis, pick up things with the aid of artificial limbs, et cetera.
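The translation step described above can be sketched as a simple decoding problem. The fragment below is a hypothetical, heavily simplified illustration: it assumes the electrode yields firing rates for a handful of motor-cortex neurons and fits a linear decoder on calibration data (recorded, say, while the patient imagines known movements), which is then used to turn each new time step of neural activity into a two-dimensional cursor velocity. Real systems use far more channels and more sophisticated filtering; this only shows the principle.

```python
import numpy as np

def fit_decoder(firing_rates, intended_velocities):
    """Least-squares fit of a linear decoder: velocity ≈ rates @ W.
    firing_rates: (timesteps, neurons); intended_velocities: (timesteps, 2)."""
    W, *_ = np.linalg.lstsq(firing_rates, intended_velocities, rcond=None)
    return W

def decode_velocity(rates, W):
    """Translate one time step of neural activity into cursor movement."""
    return rates @ W
```

The design choice of a linear map is the simplest defensible one: it is fast enough to run in real time and can be refitted as the recorded neural signals drift over days.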

In future, the direct cortical control described above could also be used in the further development of artificial limbs (robotic arms or legs) for people who have had limbs amputated. It is already possible to receive the signals from other muscles and control a robotic arm with them (a myoelectrical prosthesis); whether the patient’s own remaining nerves can be connected directly to a prosthesis to enable it to move as though it is the patient’s own arm is now being examined. Wireless control by the cortex would be a great step in prosthetics, further enabling patient rehabilitation. Next to motor control of the prosthesis, tactile sensors are being developed and placed in artificial hands to pass on the feeling to the patient’s own remaining nerves, thus creating a sense of touch. It is claimed that this meeting of the (micro)technological and (neuro)biological sciences will in the future lead to a significant reduction in invalidity due to amputation or even its total elimination [, ].

In the fourth form of brain-machine interaction, use is made of neurofeedback. By detecting brain activity with the aid of electroencephalography (EEG) equipment, it can be made visible to the person involved. This principle is used, for instance, in a new method for preventing epileptic attacks with the aid of Vagal Nerve Stimulation (VNS). Changes in brainwaves can be detected and used to predict an oncoming epileptic attack. This ‘warning system’ can then trigger an automatic reaction from the VNS system, which stimulates the vagal nerve to prevent the attack. In time, the detection electrodes could be implanted under the skull, and perhaps direct electrical stimulation of the cerebral cortex could be used instead of the vagal nerve []. Another type of feedback system, being developed by the American army, concerns a helmet with binoculars that can draw a soldier’s attention to a danger that his brain has subconsciously detected, enabling him to react faster and more adequately. The idea is that EEG can spot ‘neural signatures’ of target detection before the conscious mind becomes aware of a potential threat or target [].
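The logic of such a ‘warning system’ can be sketched in a few lines, under the strong simplifying assumption that an oncoming attack shows up as a rise in signal power within a sliding EEG window. The window length and threshold below are made-up values; actual seizure-prediction algorithms use far richer features than raw power.

```python
import numpy as np

WINDOW = 256      # samples per analysis window (illustrative value)
THRESHOLD = 2.0   # power level above which we predict an attack (illustrative)

def window_power(eeg_window):
    """Mean squared amplitude of one window of EEG samples."""
    return float(np.mean(np.square(eeg_window)))

def monitor(eeg_stream):
    """Yield True for each window whose power crosses the threshold,
    i.e. each window in which the stimulator should be triggered."""
    for start in range(0, len(eeg_stream) - WINDOW + 1, WINDOW):
        yield window_power(eeg_stream[start:start + WINDOW]) > THRESHOLD
```

In a closed-loop device, each `True` would be wired to a command that fires the vagal nerve stimulator, closing the detect-predict-stimulate loop the text describes.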

Finally, yet another technology that is currently making rapid advances is the so-called exoskeleton. Although this is not a form of brain-machine interaction in itself, it is a technology that will perhaps be eminently suitable for combination with said interaction in the future. An exoskeleton is an external structure that is worn around the body, or parts of it, to provide strength and power that the body does not intrinsically possess. It is chiefly being developed for applications in the army and in the health care sector. Theoretically, the movements of exoskeletons could also be controlled directly by thought if the technology of the aforementioned ‘neuroprostheses’ was to be developed further. If, in the future, the exoskeleton could also give feedback on feelings (touch, temperature and suchlike), the possibilities could be expanded still further.

Ethical Issues and Shifts in Our Symbolic Order

The developments described above raise various ethical questions, for instance about the safety, possible risks and side effects of new technologies. There are also speculations as to the moral problems or dangers that may arise in connection with further advances in these types of technology. The European Group on Ethics in Science and New Technologies (EGE), an influential European advisory body, warns of the risk that ICT implants will be used to control and locate people, or that they could provide third parties with access to information about the body and mind of those involved, without their permission []. There are also concerns about the position of vulnerable research subjects, patient selection and informed consent, the effects on personal identity, resource allocation and the use of such technologies for human enhancement. Over the past few years, the neuroethical discussion of such topics has been booming (e.g. [, , , , , , ]).

It has been argued that while these ethical issues are real, they do not present anything really new []. The point of departure of this article, however, is that it is not so easy to deal adequately with the moral questions raised by these new technologies because they also challenge some of the central concepts and categories that we use in understanding and answering moral questions. Hansson, for example, states that brain implants may be “reason to reconsider our criteria for personal identity and personality changes” [; p. 523]. Moreover, these new technologies may also change some elements of our common morality itself, just like the birth control pill once helped to change sexual morality []. In brief: new technologies not only influence the ways we can act, but also the symbolic order: our organizing categories and the associated views on norms and values.

The concepts and categories we, as ordinary people, use to classify our world and make it manageable and comprehensible are subject to change. These categories also play an important part in moral judgement, since they often have a normative as well as a descriptive dimension. Categories such as human and machine, body and mind, sick and healthy, nature and culture, real and unreal are difficult to define precisely, and the boundaries of such notions are always vague and movable. Time and again it takes ‘symbolic labour’ to reinterpret these categories, to re-conceptualise them and to make them manageable in new situations. In part, this symbolic labour is done by philosophers who explicitly work with concepts and definitions, refining and adjusting them; in part it is a diffuse socio-cultural process of adaptation and emerging change in the symbolic order. Boundaries are repeatedly negotiated or redrawn, and new concepts arise where old ones no longer fit the bill. An example is the concept of ‘brain death’, which arose a few decades ago as a consequence of the concurrent developments in electroencephalography, artificial respiration and organ transplantation. Here the complex interplay of technology, popular categories of life and death, and scientific and philosophical understandings of these concepts is clearly demonstrated [; p. 16].

Morality, defined as our more or less shared system of norms, values and moral judgements, is also subject to change. It is not a static ‘tool’ that we can apply to all kinds of new, technologically induced moral problems. Technological and social developments influence and change our morality, although this does not apply equally to all its elements. Important values such as justice, well-being or personal autonomy are reasonably stable, but they are also such abstract notions that they are open to various and changing interpretations. The norms we observe in order to protect and promote our values depend on these interpretations and may require adjustment under new circumstances. Some norms are relatively fixed, others more contingent and changing []. The detailed and concrete moral rules of conduct derived from the general norms are the most contingent and changeable. The introduction of the notion of brain death, for example, led to adaptations in ethical norms and regulations. Likewise, the new developments in genomics research are now challenging and changing existing rules of informed consent as well as notions of privacy and rules for privacy protection [].

In the field of brain-machine interaction we can therefore also expect that certain fixed categories that we classify our world with and that structure our thinking, will evolve alongside the new technologies. This will have consequences for the ethical questions these technologies raise and for the way in which we handle both new and familiar moral issues. A first shift that can be expected concerns the distinction between human and machine. This distinction might fade as more parts of the body can be replaced with mechanical or artificial parts that become more and more ‘real’. Secondly, we might expect a blurring of boundaries between our familiar concepts of body and mind when neuroscience and neurotechnologies increasingly present the brain as an ordinary part of our body and the mind as simply one of its ‘products’. The following sections analyse these possible shifts in the symbolic order and the associated moral questions in more detail.

Symbolic Order in Motion: The Human Machine

The blurring of the boundary between human and machine brought about by brain-machine interaction forms the first challenge to the familiar categories with which we think. The more artificial parts are inserted in and added to the human body, the more uncertainty there is about where the human stops and the machine begins. Instead of human or machine, we increasingly seem to be looking at cyborgs: human and machine in a single being.

For a long time it was easy to distinguish between people and the tools, machines or devices that they used. Gradually, however, our lives have become increasingly entangled with machines—or, in the broader sense, with technology—and we have become dependent on them for practically every facet of our daily lives. Increasingly, parts of the human body itself are replaced or supplemented by technology. Of course, the notion that the human body works as a machine has been a leitmotiv in western culture since Descartes; this vision has enabled modern medicine, while the successes achieved substantiate the underlying beliefs about the body at the same time. The emergence of transplantation medicine was a clear step in the development of popular views on the body as a machine. Since the first kidney transplantation in 1954 and the first heart transplantation in 1967, lungs, liver, pancreas and even hands and faces have become transplantable, thus reinforcing the image of the human body as a collection of replaceable parts. Some have criticised transplantation medicine because of the ensuing mechanization and commodification of the human body.

Besides organs, more and more artefacts are now being implanted in the human body: artificial heart valves, pacemakers, knees, arterial stents and subcutaneous medicine pumps. Prostheses that are attached to the body, such as artificial limbs, are becoming increasingly advanced, and are no longer easy to detach—unlike the old-fashioned wooden leg. Experiences of patients who wear prostheses seem to indicate that people rapidly adapt to using them and fuse with them to the extent that they perceive them as natural parts of themselves. Artificial parts are rapidly included in the body scheme and come to be felt as ‘one’s own’.

In a certain sense, then, we are familiar with the perception of the body as a sort of machine, and with the fact that fusing the human body with artificial parts is possible. Do technologies like neuroprostheses, artificial limbs and exoskeletons break through the boundary between human and machine in a fundamentally new, different fashion? Should the conceptual distinction between human and machine perhaps be revised? Many publications, both popular and more academic, suggest that the answer has to be yes. A notion that is often used in this connection is that of the cyborg: the human machine.

Cyborgs

The term ‘cyborg’—derived from cybernetic organism—was coined in 1960 by Manfred Clynes and Nathan Kline, American researchers who wrote about the ways in which the vulnerable human body could be technologically modified to meet the requirements of space travel and exploration. The figure of the cyborg appealed to the imagination and was introduced into popular culture by science fiction writers, filmmakers, cartoonists and game designers; famous cyborgs include The Six Million Dollar Man, Darth Vader and RoboCop. In the popular image, the cyborg thus stands for the merging of the human and the machine.

In recent literature, both popular and scientific, the cyborg has come to stand for all sorts of man-machine combinations and all manner of technological and biotechnological enhancements or modifications of the human body. With the publication of books like I, Cyborg or Cyborg Citizen, the concept now covers a whole area of biopolitical questions. Everything that is controversial around biotechnological interventions, that raises moral questions and controversy, that evokes simultaneous horror and admiration, is now clustered under the designation ‘cyborg’ [, , ].

The concept of the cyborg indicates that something is the matter, that boundaries are transgressed, familiar categorizations challenged, creating unease and uncertainty. For Donna Haraway, well-known for her Cyborg Manifesto [], the concept of the cyborg stands for all the breaches of boundaries and disruptions of order, not merely for the specific breaking through of the distinction between human and machine which concerns me here. The term cyborg can thus be used to describe our inability to categorize some new forms of human life or human bodies. The use of the term compels us to delay categorization—in familiar terms of human or machine—at least for the moment and so creates a space for further exploration.

Monsters

Following Mary Douglas, Martijntje Smits has called these kinds of entities, which defy categorization and challenge the familiar symbolic order, ‘monsters’ []. Smits discusses four strategies for treating these monsters, four ways of cultural adaptation to these new entities and to the disruption they bring about.

The first strategy, embracing the monster, is clearly reflected in the pronouncements of adherents of the transhumanist movement. They welcome all manner of biotechnological enhancements of humans, believe in the exponential development of the possibilities to this end and place the cyborg, almost literally, on a pedestal. The second strategy is the opposite of the first and entails exorcizing the monster. Neo-Luddites or bioconservatives see biotechnology in general and the biotechnological enhancement of people in particular, as a threat to the existing natural order. They frequently refer to human nature, traditional categories and values and norms when attacking and trying to curb the new possibilities. To them, the cyborg is a real monster that has to be stopped and exorcized.

The third strategy is that of adaptation of the monster. Endeavours are made to classify the new phenomenon in terms of existing categories after all. Adaptation seems to be what is happening with regard to existing brain-machine interaction. The conceptual framework here is largely formed by the familiar medical context of prostheses and aids. The designation of the electrodes and chips implanted in the brain as neuroprostheses places them in the ethical area of therapy, medical treatment, the healing of the sick and support of the handicapped. As long as something that was naturally present but is now lost due to sickness or an accident is being replaced, brain-machine interaction can be understood as therapy and therefore accepted within the ethical limits normally assigned to medical treatments. However, for non-medical applications the problem of classification remains. Prostheses to replace functions that have been lost may be accepted relatively easily, but how are we going to regard enhancements or qualitative changes in functions, such as the addition of infrared vision to the human visual faculty? Are we only going to allow the creation of cyborgs for medical purposes, or also for military goals, or for relaxation and entertainment?

Finally, the fourth strategy is assimilation of the monster, whereby existing categories and concepts are adjusted or new ones introduced. In the following I will suggest that the concept of the person—in the sense in which it is used in ethics, rather than in common language—may be useful for this purpose.

Morality of Persons

In the empirical sense, cyborgs, or blends of human bodies with mechanical parts, are gradually becoming less exceptional. It therefore seems exaggerated to view people with prostheses or implants as something very exceptional or to designate them as a separate class. And this raises the question of why we should really worry about the blurring of the distinction between human and machine at all. It is not merely because the mixing of flesh with steel or silicone intuitively bothers us, or because the confusion about categories scares us. More fundamentally, I believe, it is because the distinction between the human and the machine also marks a significant moral distinction. The difference between the two concepts is important because it indicates a moral dividing line between two different normative categories. For most of our practices and everyday dealings the normative distinction between human and machine matters. You just treat people differently to machines—with more respect and care—and you expect something else from people than you expect from machines: responsibility and understanding, for example. Human beings deserve praise and blame for their actions; machines do not. The important question is therefore whether brain-machine interfaces will somehow affect the moral status of the people using them []. Do we still regard a paralysed patient with a brain implant and an exoskeleton as a human being, or do we see him as a machine? Will we consider someone with two bionic legs to be a human being or a machine?

I believe that in part this also depends on the context and the reasons for wanting to make the distinction. In the context of athletic competition, the bionic runner may be disqualified because of his supra-human capacities. In this context, he is ‘too much of a machine’ to guarantee fair competition with fully biological human beings. However, in the context of everyday interaction with others, a person with bionic legs is just as morally responsible for his actions as any other person. In this sense he clearly is human and not a machine. This is because, with regard to moral status, the human being as an acting, responsible moral agent is identified more with the mind than with the body. The mind is what matters in the moral sense. Whether this mind controls a wheelchair with the aid of hands, or with electrical brain-generated pulses, is irrelevant to the question of who controls the wheelchair: the answer in both cases is the person concerned. Whether someone is paralysed or not does not alter the question of whether he or she is a person; it will of course affect the kind of person he or she is, but whether he or she is a person depends on his or her mental capacities. Ethical theories consider the possession of some minimal set of cognitive, conative and affective capacities a condition for personhood. This means that, ethically speaking, under certain conditions, intelligent primates or Martians could be considered persons while human babies or severely demented elderly people would not. Whatever the exact criteria one applies, there is no reason to doubt that someone who is paralysed, someone who controls a robot remotely or someone who has a DBS electrode is a person. Certain moral entitlements, obligations and responsibilities are connected to that state of ‘being a person’. This notion therefore helps to resolve the confusion surrounding the cyborg. Rather than classifying him as either man or machine, we should be looking at personhood.
Personhood is what really matters morally and this is not necessarily affected by brain-machine interfaces. As long as they do not affect personhood, brain-machine interfaces are no more special than other types of prosthesis, implants or instrumental aids that we have already grown used to.

New Views on Physical Integrity?

Nevertheless, brain-machine interfaces may in some cases give rise to new moral issues. A concrete example that illustrates how shifting categories can affect concepts and ethics is that of physical integrity. How should this important ethical and legal principle be interpreted when applied to cyborgs? The principle itself is not under discussion: we want to continue to guard and protect physical integrity. The question is, however, how to define the concept of the ‘body’ now that biological human bodies are becoming increasingly fused with technology, and where to draw the line between those plastic, metal or silicone parts that do belong to that body and those that do not.

In the spring of 2007 the Dutch media paid attention to an asylum seeker who had lost an arm as a result of torture in his native country and had received a new myoelectrical prosthesis in the Netherlands. He had just become used to the arm and been trained in using it naturally when it became apparent that there were problems with the insurance and he would have to return the prosthesis. Evidently, according to the regulations, a prosthesis does not belong to the body of the person in question and does not enjoy the protection of physical integrity. However, the loss of an arm causes a great deal of damage to the person, whether the arm is natural or a well-functioning prosthesis. If prostheses become more intimately connected to and integrated with the body (also through tactile sensors), such that they become incorporated in the body scheme and are deemed a natural part of the body by the person concerned, there must come a point at which such a prosthesis should be seen as belonging to the (body of the) person concerned from the moral and legal point of view. It has even been questioned whether the interception of signals transmitted by a wireless link from the brain to a computer or artificial limb should perhaps also fall under the protection of physical integrity [].

Symbolic Order in Motion: Body-Mind

In the previous section I assumed the distinction between body and mind to be clear-cut. The common view is that the mind controls the body (whether this body is natural or artificial) and that the mind is the seat of our personhood, and of consciousness, freedom and responsibility. In this section I examine how this view might change under the influence of new brain-machine interactions and neuroscientific developments in general and what implications this may have for ethics. I will concentrate on DBS, since this brain-machine technique has at present the clearest impact on the human mind and behaviour. Of course, however, our categories and common views will not change because of one single new technique—rather, it is the whole constellation of neuroscientific research and (emerging) applications that may change the ways in which we understand our minds and important related concepts.

The Mind as Machine

Neuroprostheses and other brain-machine interactions call into question the demarcation between body and mind, at least in the popular perception. Technologies such as neuroprosthetics and DBS make very clear the fact that physical intervention in the brain has a direct effect on the mind of the person in question. By switching the DBS electrode on or off, the behaviour, feelings and thoughts of the patient can be changed instantly. Thoughts of a paralysed person can be translated directly into electrical pulses and physical processes. As a result of neuroscience and its applications the human mind comes to be seen more and more as a collection of neurones, a system of synapses, neurotransmitters and electrical conductors. A very complex system perhaps, but a physical system nonetheless, that can be connected directly to other systems.

For some, this causes moral concern, since it may lead us to see ourselves merely in mechanical terms:

‘The obvious temptation will be to see advances in neuroelectronics as final evidence that man is just a complex machine after all, that the brain is just a computer, that our thoughts and identity are just software. But in reality, our new powers should lead us to a different conclusion: even though we can make the brain compatible with machines to serve specific functions, the thinking being is a being of very different sorts.’ [; p. 40–41]

I believe this change in our popular view of the mind that Keiper fears is actually already taking place. Neuroscientific knowledge and understanding penetrate increasingly into our everyday lives, and it is becoming more normal to understand our behaviour and ourselves in neurobiological terms. This shift is for example noticeable in the rise of biological psychiatry. Many psychiatric syndromes that were still understood in psychoanalytical or psychodynamic terms until well into the second half of the twentieth century are now deemed biological brain diseases. The shift is also noticeable in the discussion on the biological determinants of criminal behaviour (and opportunities to change such behaviour by intervening in the brain), or in the increased attention to the biological and evolutionary roots of morality. In popular magazines and books, too, our behaviour and ourselves are increasingly presented as the direct result of our brains’ anatomy and physiology.

Scientific and technological developments have contributed to this shift. The development of EEG in the first half of the last century revealed the electrical activity of the brain for the first time, thus creating the vision of the brain as the wiring of the mind. The development of psychiatric drugs in the second half of the last century also helped naturalize our vision of the mind, picturing the brain as a neurochemical ‘soup’, a collection of synapses, neurotransmitters and receptors []. More recently the PET scan and the fMRI have made it possible to look, as it were, inside the active brain. The fact that fMRI produces such wonderful pictures of brains ‘in action’ contributes to our mechanical view of the relation between brain and behaviour. Certain areas of the brain light up if we make plans, others if an emotional memory is evoked; damage in one area explains why the psychopath has no empathy, a lesion in another correlates with poor impulse control or hot-headedness. While neurophilosophers have warned against the oversimplified idea that images are just like photographs that show us directly how the brain works, these beautiful, colourful images appeal to scientists and laymen alike [].

According to Nikolas Rose, we have come to understand ourselves increasingly in terms of a biomedical body, and our personalities and behaviour increasingly in terms of the brain. He says that a new way of thinking has taken shape: ‘In this way of thinking, all explanations of mental pathology must ‘pass through’ the brain and its neurochemistry—neurones, synapses, membranes, receptors, ion channels, neurotransmitters, enzymes, etc.’ [; p. 57]

We are experiencing what he calls a ‘neurochemical reshaping of personhood’ [; p. 59]. Likewise, Mooij has argued that the naturalistic determinism of the neurosciences is also catching on in philosophy and has now spread broadly in the current culture ‘that is to a large extent steeped in this biological thinking, in which brain and person more or less correspond’ [; p.77].

The mind is being seen more and more as a physical, bodily object (the ‘mind = brain’ idea), and given that the human body, as described above, has long been understood in mechanical terms, equating mind and brain means that the mind too can be understood in mechanical terms. As the basic distinction between mind and machine seems to fall away, the distinction between human and machine once more raises its head, but now on a more fundamental and, morally speaking, extremely relevant level. If our mind, the seat of our humanity, is in fact also a machine, how should we understand personhood in the morally relevant sense? How can we hold on to notions such as free will and moral responsibility?

Neuroscientific Revisionism

A recent notion amongst many neuroscientists and some neurophilosophers is that our experience of having a self, a free will or agency is based on a misconception. The self as a regulating, controlling authority does not exist, but is only an illusion produced by the brain. From this notion it seems to follow that there is no such thing as free will and that there can therefore be no real moral responsibility. Within philosophy, revisionists, who hold that our retributive intuitions and practices are unwarranted under determinism, argue that this view obliges us to revise our responsibility-attributing practices, including our legal system. Revisionism implies the need to replace some of our ordinary concepts with new ones. It has, for example, been suggested that blame be substituted with ‘dispraise’ [], or that concepts connected to desert, like blame, moral praise, guilt and remorse, be eliminated altogether []. On a revisionist account, praise, blame and punishment are just devices that modify conduct, and that can be more or less effective, but not more or less deserved.

Greene and Cohen assume that because of the visible advances in the neurosciences—and I take brain-machine interfaces to be part of those—the naturalistic deterministic view on human behaviour will by degrees be accepted by more and more people, and revisionism will catch on []. To their way of thinking, our moral intuitions and our folk psychology will slowly adapt to the overwhelming evidence the neurosciences present us with. The technologies enabled on the basis of neuroscientific understanding, such as DBS, neurofeedback, psychiatric drugs, and perhaps also intelligent systems or intelligent robots, can contribute to this. Little by little we will hold people less responsible and liable for their actions, according to Greene and Cohen, but will see them increasingly as determined beings who can be regulated, more or less effectively, by sanctions or rewards. They allege that questions concerning free will and responsibility will lose their power in an age in which the mechanistic nature of the human decision process will be totally understood. This will also have consequences for the legal system. ‘The law will continue to punish misdeeds, as it must for practical reasons, but the idea of distinguishing the truly, deeply guilty from those who are merely victims of neuronal circumstances will, we submit, seem pointless.’ [; p. 1781]

Greene and Cohen, like other revisionists, advocate a shift in the nature of our criminal justice system, from a retributive to a consequentialistic system. This means a shift from a system based on liability and retribution to one based on effects and effectiveness of punishment. A consequentialistic system of this kind is, in their opinion, in keeping with the true scientific vision of hard determinism and the non-existence of free will. Greene and Cohen recognize that many people will intuitively continue to think in terms of free will and responsibility. What is more, they think that this intuitive reflex has arisen through evolution and is deeply rooted in our brains. We can hardly help thinking in these sorts of terms, despite the fact that we know better, scientifically speaking. Nonetheless, Greene and Cohen insist that we should base important, complex matters such as the criminal justice system10 on the scientific truth about ourselves and not allow ourselves to be controlled by persistent, but incorrect, intuitions.

Moral Responsibility Reconsidered

A whole body of literature has accumulated refuting this thesis and arguing that new neuroscientific evidence need not influence our moral and legal notions of responsibility (e.g. []). This literature reflects the dominant position in the determinism debate nowadays, that of compatibilism. According to compatibilism determinism is reconcilable with the existence of a free will, and with responsibility. As long as we can act on the basis of reasons and as long as we are not coerced, we are sufficiently free to carry responsibility and the naturalistic neuroscientific explanatory model of behaviour is therefore not necessarily a threat to our free will and responsibility, according to the compatibilist. The question is, however, whether the compatibilist’s philosophical argumentation also convinces the average layman or neuroscientist, certainly in the light of new experimental findings and technical possibilities. How popular views on this topic will develop remains to be seen.

At the moment even adherents of the revisionist view seem convinced that we will never be able to stop thinking, or even think less, in terms of intentions, reasons, free will and responsibility. It seems almost inconceivable not to hold one another responsible for deeds and behaviour [].

Nevertheless, neuroscientific research does challenge our view of ourselves as rational, autonomous and moral beings []. Research shows us, for example, that many if not most of our actions are automatic and unconsciously initiated, and that only some of our actions are deliberate and consciously based on reasons. Our rationality, moreover, is limited by various biases, such as confirmation bias, hyperbolic discounting and false memories. New findings in neuroscience, such as the fact that immaturity of the frontal lobes impedes the capacities for reasoning, decision making and impulse control in adolescents, or that exercise of self-restraint eventually leads to exhaustion of the capacity for self-control (ego-depletion), do require us to rethink the ways in which, or the degrees to which, we are actually morally responsible in specific situations and circumstances (see for example the series of articles on addiction and responsibility in AJOB Neuroscience 2007).

A more naturalized view of the human mind could thus still have important consequences, even if we do not jettison the notion of moral responsibility altogether. More grounds for ‘absence of criminal responsibility’ could, for example, be acknowledged in criminal law, whereby new technologies could play a role. Functional brain scans might provide more clarity on the degree to which an individual has control over his or her own behaviour.

Prosthetic Responsibility?

Due to their ability to directly influence complex human behaviour by intervening in the brain, brain-machine interfaces may raise interesting issues of responsibility, even when we reject revisionism, as can be illustrated by the following case of a 62-year-old Parkinson's patient treated with DBS.11

After implantation of the electrodes, this patient became euphoric and demonstrated unrestrained behaviour: he bought several houses that he could not really afford; he bought various cars and got involved in traffic accidents; he started a relationship with a married woman and showed unacceptable and deviant sexual behaviour towards nurses; he suffered from megalomania and, furthermore, did not understand his illness at all. He was totally unaware of any problem. Attempts to improve his condition by changing the settings of the DBS failed: the manic characteristics disappeared, but the patient’s severe Parkinson’s symptoms reappeared. The patient was either in a reasonable motor state but in a manic condition lacking any self-reflection and understanding of his illness, or bedridden in a non-deviant mental state. The mania could not be treated by medication [].

Who was responsible for the uninhibited behaviour of the patient in this case? Was that still the patient himself, was it the stimulator or the neurosurgeon who implanted and adjusted the device? In a sense, the patient was ‘not himself’ during the stimulation; he behaved in a way that he never would have done without the stimulator.12 That behaviour was neither the intended nor the predicted result of the stimulation and it therefore looks as though no one can be held morally responsible for it. However, in his non-manic state when, according to his doctors, he was competent to express his will and had a good grasp of the situation, the patient chose to have the stimulator switched on again. After lengthy deliberations the doctors complied with his wishes. To what extent were his doctors also responsible for his manic behaviour? After all, they knew the consequences of switching on the stimulator again. To what extent was the patient himself subsequently to blame for getting into debt and bothering nurses?

For such decisions, the notion of ‘diachronic responsibility’ [] can be of use, indicating that a person can take responsibility for his future behaviour by taking certain actions. Suppose, for example, that DBS proved an effective treatment for addiction, helping people to stay off drugs, alcohol or gambling: could it then rightly be considered a ‘prosthesis for willpower’ [], or even a prosthesis for responsibility? I believe that technologies that enable us to control our own behaviour better, as DBS might do in the case of addiction or in the treatment of Obsessive Compulsive Disorder, can be understood in terms of diachronic responsibility and self-control, and can thus enhance autonomy and responsibility [].

Future applications of brain-machine interaction may raise further questions: suppose a doctor adjusted the settings of DBS without consent from the patient and caused behaviour the patient claimed not to identify with; who would then be responsible? As Clausen has pointed out, neuroprostheses may challenge our traditional concept of responsibility when imperfections in the system lead to involuntary actions of the patient []. Likewise, if the wireless signals of a neuroprosthesis were accidentally or deliberately disrupted, it would be questionable who would be responsible for the ensuing ‘actions’ of the patient.

Clearly, even without major shifts in our views on free will and responsibility, brain-machine interfaces will require us to consider questions of responsibility.

Conclusion

The convergence of neuroscientific knowledge with bio-, nano-, and information technology is already beginning to be fruitful in the field of brain-machine interaction, with applications like DBS, neuroprosthesis and neurofeedback. It is hard to predict the specific applications awaiting us in the future, although there is no shortage of wild speculations. The emergence of new technical possibilities also gives rise to shifts in our popular understanding of basic categories, and to some new moral issues. The boundaries of the human body are blurring and must be laid down anew; our views on what it is to be a person, to have a free will and to have responsibility are once more up for discussion. In this article I have explored how these shifts in categories and concepts might work out.

I have argued that the distinction between human and machine, insofar as it concerns a morally relevant distinction, does not have to be given up immediately because increasingly far-reaching physical combinations are now being made between human and mechanical parts. Depending on the context and the reasons we have for wanting to make a distinction, we will draw the line between human and machine differently. In the context of sports, a bionic limb may disqualify its user for being too much of a ‘machine’ while in another context such a limb may be qualified as an integral part of a human being and be protected under the right to physical integrity. Important general moral questions that lie behind the confusion about categories of human and machine concern moral responsibility and moral status. The concept of a person, as used in ethical theory to designate moral actors, is more precise and more useful in this context than the general category of the ‘human’ or the poly-interpretable notion of the ‘cyborg’.

In the most radical scenario of shifts in our symbolic order, the concept of ‘person’ may also come under pressure. As I have shown based on Greene and Cohen’s vision, the person as a being with a free will and moral responsibility, and as a moral actor, should, according to some, disappear from the stage altogether. Implementing such a neuroreductionistic vision of the mind and free will would have clear consequences for criminal law: it would have to be revised into a consequentialist, neo-behaviouristic system. People would then barely be considered morally responsible beings, but would instead be seen as systems that respond to praise and blame in a mechanical fashion. I believe it is unlikely that such a shift in our popular views will come about, because of the intuitive appeal of the notion of responsibility and because there are many good arguments to resist this shift. Even if we do not jettison responsibility altogether, however, brain-machine interactions raise many interesting questions regarding the distribution and attribution of responsibility.

A general lesson for ethics of emerging technologies is that such technologies necessitate renewed consideration and reinterpretation of important organizing concepts and distinctions that are crucial to moral judgement. The symbolic labour required to answer such conceptual and normative questions is at least as important for the development of converging technologies as the technical-scientific labour involved.

Acknowledgments

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Footnotes

1. See his website www.kurzweilAI.net for these and other future forecasts.

2. A lot of research in the field of brain-machine interaction and other converging technologies is carried out by DARPA, the research agency of the US Department of Defense. In 2003, for example, DARPA subsidized research into brain-machine interfaces to the tune of 24 million dollars [, ].

3. In the former case, these applications might enable soldiers to carry heavy rucksacks more easily; in the latter, they could, for example, help a nurse to lift a heavy patient on his or her own.

4. How these processes interact with one another and how socio-cultural changes influence philosophical thinking and vice versa is an interesting and complicated question that I cannot begin to answer here.

5. Not that this is an entirely new phenomenon, seeing that all sorts of bodily prostheses have existed for centuries; the first artificial leg dates back to 300 BC. Other prostheses that we more or less attach to our bodies are, for instance, spectacles, hairpieces and hearing aids. But the insertion of external parts into the human body is more recent.

6. More attention has recently been paid to the importance of the body for the development and working of our consciousness. In the Embodied Mind model, body and mind are seen far more as interwoven than in the past (e.g. []). This can have even further implications for brain-machine interaction; if, for example, neuroprostheses change our physical, bodily dealings with the world, this may also have consequences for the development of the brain and for our consciousness. Neuroscientists have even claimed that: ‘It may sound like science fiction but if human brain regions involved in bodily self-consciousness were to be monitored and manipulated online via a machine, then not only will the boundary between user and robot become unclear, but human identity may change, as such bodily signals are crucial for the self and the ‘I’ of conscious experience’ [].

7. The distinction between adaptation and assimilation is not very clear; it depends on what one would wish to call an ‘adaptation’ or ‘adjustment’ of a concept.

8. By contrast, in the case of the neuroprosthesis discussed in the previous section, it is mainly the mind that influences the body, through the interface.

9. ‘Obviously we have thoughts. Ad nauseam, one might say. What is deceptive is the idea that these thoughts control our behaviour. In my opinion, that idea is no more than a side effect of our social behaviour. […] The idea that we control our deeds with our thoughts, that is an illusion’, says cognitive neuroscientist Victor Lamme, echoing his colleague Wegner [; p. 22, ].

10. Likewise, moral views on responsibility may change. An instrumental, neo-behaviouristic vision of morality and of the moral practice of holding one another responsible might arise. Holding one another responsible may still prove a very effective way of regulating behaviour, even if it is not based on the actual existence of responsibility and free will. As long as people change their behaviour under the influence of moral praise and blame, there is no reason to throw the concept of responsibility overboard. From this point of view, there would no longer be any relevant difference between a human being and any other system sensitive to praise and blame, such as an intelligent robot or computer system. If such a system were sensitive to moral judgements and responded to them with the desired behaviour, then on this view it would qualify as a moral actor just as much as a human being would.

11. This case has also been discussed by [, , ].

12. Of course, this problem is not exclusive to DBS; some medications can have similar effects. However, with DBS the changes are more rapid and more specific, and can literally be controlled by remote control (theoretically, the patient’s behaviour can thus be influenced without the patient’s approval once the electrode is in his brain). These characteristics do make DBS different from more traditional means of influencing behaviour, though I agree with an anonymous reviewer that this is more a matter of degree than an absolute qualitative difference.

References

1. Anonymus Kunstarm met gevoel. Med Contact. 2007;62:246.
2. Bell EW, Mathieu G, Racine E. Preparing the ethical future of deep brain stimulation. Surg Neurol. 2009 [PubMed]
3. Berghmans R, Wert G. Wilsbekwaamheid in de context van elektrostimulatie van de hersenen. Ned Tijdschr Geneeskd. 2004;148:1373–75. [PubMed]
4. Blanke O, Aspell JE (2009) Brain technologies raise unprecedented ethical challenges. Nature, l458, 703 [PubMed]
5. Blume S. Histories of cochlear implantation. Soc Sci Med. 1999;49:1257–68. doi: 10.1016/S0277-9536(99)00164-1. [PubMed] [CrossRef]
6. Burg W. Dynamic Ethics. J Value Inq. 2003;37:13–34. doi: 10.1023/A:1024009125065. [CrossRef]
7. Clausen J. Moving minds: ethical aspects of neural motor prostheses. Biotechnol J. 2008;3:1493–1501. doi: 10.1002/biot.200800244. [PubMed] [CrossRef]
8. Clausen J. Man, machine and in between. Nature. 2009;457:1080–1081. doi: 10.1038/4571080a. [PubMed] [CrossRef]
9. EGE—European Group on Ethics in Science and New Technologies (2005) Ethical aspects of ICT implants in the human body. opinion no. 20. Retrieved from http://ec.europa.eu/european_group_ethics/docs/avis20_en.pdf
10. Ford PJ. Neurosurgical implants: clinical protocol considerations. Camb Q Healthc Ethics. 2007;16:308–311. doi: 10.1017/S096318010707034X. [PubMed] [CrossRef]
11. Ford P, Kubu C. Ameliorating or exacerbating: surgical ‘prosthesis’ in addiction. Am J Bioeth. 2007;7:29–32. [PubMed]
12. Foster KR. Engineering the brain. In: Illes J, editor. Neuroethics. Defining issues in theory, practice and policy. Oxford: Oxford University; 2006. pp. 185–200.
13. Gillet G. Cyborgs and moral identity. J Med Ethics. 2006;32:79–83. doi: 10.1136/jme.2005.012583. [PMC free article] [PubMed] [CrossRef]
14. Glannon W. Bioethics and the brain. Oxford: Oxford University; 2007.
15. Glannon W. Our brains are not us. Bioethics. 2009;23:321–329. doi: 10.1111/j.1467-8519.2009.01727.x. [PubMed] [CrossRef]
16. Glannon W. Stimulating brains, altering minds. J Med Ethics. 2009;35:289–292. doi: 10.1136/jme.2008.027789. [PubMed] [CrossRef]
17. Graham-Rowe D (2006) Catching seizures before they occur. Retrieved September 15, 2009, from http://www.technologyreview.com/biotech/17124/
18. Gray CH. Cyborg citizen: politics in the posthuman age. New York: Routledge; 2001.
19. Greene J, Cohen J. For the law, neuroscience changes nothing and everything. Phil Trans Roy Soc Lond B. 2004;359:1775–1785. doi: 10.1098/rstb.2004.1546. [PMC free article] [PubMed] [CrossRef]
20. Hansson SO. Implant ethics. J Med Ethics. 2005;31:519–525. doi: 10.1136/jme.2004.009803. [PMC free article] [PubMed] [CrossRef]
21. Haraway D. A cyborg manifesto: science, technology, and socialist-feminism in the late twentieth century. In: Haraway D, editor. Simians, cyborgs and women: the reinvention of nature. New York: Routledge; 1991. pp. 149–181.
22. Healy D. The creation of psychopharmacology. Cambridge: Harvard University; 2000.
23. Hochberg L, Taylor D. Intuitive prosthetic limb control. Lancet. 2007;369:345–346. doi: 10.1016/S0140-6736(07)60164-0. [PubMed] [CrossRef]
24. Hochberg L, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442:164–171. doi: 10.1038/nature04970. [PubMed] [CrossRef]
25. Hughes J. Citizen cyborg: why democratic societies must respond to the redesigned human of the future. Cambridge: Westview; 2004.
26. Keiper A. The age of neuroelectronics. The New Atlantis Winter. 2006;2006:4–41. [PubMed]
27. Keulartz JM, Korthals MS, Swierstra T, editors. Pragmatist ethics for a technological culture. Dordrecht: Kluwer Academic; 2002.
28. Keulartz J, Schermer M, Korthals M, Swierstra T. Ethics in technological culture: a programmatic proposal for a pragmatist approach. Sci Technol human values. 2004;29(1):3–29. doi: 10.1177/0162243903259188. [PubMed] [CrossRef]
29. Koops B, van Schooten H, Prinsen B (2004) Recht naar binnen kijken: een toekomstverkenning van huisrecht, lichamelijke integriteit en nieuwe opsporingstechnieken, eJure, ITER series 70. Retrieved from http://www.ejure.nl/mode=display/downloads/dossier_id=296/id=301/Deel_70_Koops.pdf
30. Lamme V. De geest uit de fles. Dies rede. Amsterdam: University of Amsterdam; 2006.
31. Levy N. Neuroethics. Cambridge: Cambridge University; 2007.
32. Leentjes AFG, et al. Manipuleerbare wilsbekwaamheid: een ethisch probleem bij elektrostimulatie van de nucleus subthalamicus voor een ernstige ziekte van Parkinson. Ned Tijdschr Geneeskd. 2004;148:1394–1397. [PubMed]
33. Lunshof JE, Chadwick R, Vorhaus DB, Church GB. From genetic privacy to open consent. Nat Rev Genet. 2008;9:406–411. doi: 10.1038/nrg2360. [PubMed] [CrossRef]
34. Mooij A. Toerekeningsvatbaarheid. Over handelingsvrijheid. Amsterdam: Boom; 2004.
35. Moreno JD. DARPA on your mind. Cerebrum. 2004;6:91–99. [PubMed]
36. Morse S. Voluntary control of behavior and responsibility. Am J Bioeth. 2007;7:12–13. [PubMed]
37. Morse SJ. Determinism and the death of folk psychology: two challenges to responsibility from neuroscience. Minnesota Journal of Law, Science and Technology. 2008;9:1–35.
38. Rabins P, et al. Scientific and ethical issues related to deep brain stimulation for disorders of mood, behavior and thought. Arch Gen Psychiatry. 2009;66:931–37. doi: 10.1001/archgenpsychiatry.2009.113. [PMC free article] [PubMed] [CrossRef]
39. Roco MC, Bainbridge WS. Converging technologies for improving human performance. Arlington: National Science Foundation; 2002.
40. Rose N. Neurochemical selves. Society. 2003;41:46–59. doi: 10.1007/BF02688204. [CrossRef]
41. Roskies A. Neuroimaging and inferential distance. Neuroethics. 2008;1:19–30. doi: 10.1007/s12152-007-9003-3. [CrossRef]
42. Schermer M. Gedraag Je! Ethische aspecten van gedragsbeïnvloeding door nieuwe technologie in de gezondheidszorg. Rotterdam: NVBE; 2007.
43. Smart JJC. Free will, praise and blame. In: Watson G, editor. Free will. 2. New York: Oxford University; 2003. pp. 58–71.
44. Smits M. Taming monsters: the cultural domestication of new technology. Tech Soc. 2006;28:489–504. doi: 10.1016/j.techsoc.2006.09.008. [CrossRef]
45. Strawson P. Freedom and resentment. In: Watson G, editor. Free will. 2. New York: Oxford University; 2003. pp. 72–93.
46. Synofzik M, Schlaepfer TE. Stimulating personality: ethical criteria for deep brain stimulation in psychiatric patients for enhancement purposes. Biotechnol J. 2008;3:1511–1520. doi: 10.1002/biot.200800187. [PubMed] [CrossRef]
47. Vargas M. The revisionist’s guide to responsibility. Philos Stud. 2005;125:399–429. doi: 10.1007/s11098-005-7783-z. [CrossRef]
48. Warwick K. I, cyborg. London: Century; 2004.
49. Wegner DM. The illusion of conscious will. Cambridge: MIT; 2002.
50. Weinberger S (2007) Pentagon to merge the next-gen binoculars with soldiers’ brain. Retrieved September 15 2009 from http://www.wired.com/gadgets/miscellaneous/news/2007/05/binoculars

Remote Control of the Brain and Human Nervous System

Since the beginning of the millennium, the USA and the European Union have invested billions of dollars and euros in brain research. As a result of this research, detailed maps of the brain have been developed, covering both the areas that control the activity of different body organs and the parts where higher brain activities, such as speech and thought, take place. The patterns of brain activity corresponding to different actions in those areas have also been deciphered.

Thanks to knowledge of the specific locations of different centers in the brain and of the frequencies of the neuronal activity in them, teams of physicians are now capable of helping many people who were in the past, for different reasons, unable to take part in normal life. There are prostheses that are controlled directly from the brain centers that normally control the movement of the limbs (see this), enabling people who have lost a limb to use the prosthesis in a way similar to the way other people use their natural limbs. Higher brain activities have been harnessed as well. In 2006, scientists placed into the brain of a completely paralyzed man an implant that transferred the activity of his brain to different devices, enabling him to open his e-mail, control his TV set and control a robotic arm. Other paralyzed people have been able to search the Internet, play computer games and drive their electric wheelchairs (see this).

Thanks to extensive brain research, computers have been taught to understand neuronal activity so well that they are now capable of using the activity of our brains to reproduce our perceptions. Canadian scientists demonstrated an experiment in which a computer interpreted electroencephalographic recordings from the brain to produce a painting of the face that the subject of the experiment was perceiving (see this).

The process also works in the opposite direction: data processed by a computer into a form intelligible to the nervous system can be transmitted into the brain and produce a new reality there. For people whose retinal photoreceptors have stopped working, sight can be at least partially restored by an implant placed in the brain and connected to a camera mounted on spectacles. In this case the camera on the spectacles transmits light information to the implant, and the implant re-transmits it at frequencies that the neurons processing visual perception “understand” (see this).
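The transduction step described above can be illustrated in outline. The sketch below is a toy model only: it assumes a simple linear brightness-to-pulse-rate code, with hypothetical rate limits I have chosen for illustration. The encoding used by real visual prostheses is far more elaborate and is not described in the source.

```python
import numpy as np

# Assumed, illustrative stimulation range; real devices use calibrated,
# per-electrode parameters.
MIN_RATE_HZ, MAX_RATE_HZ = 10.0, 200.0

def intensity_to_pulse_rate(pixels):
    """Map 8-bit camera brightness (0-255) linearly onto electrode
    stimulation pulse rates, one electrode per pixel."""
    norm = np.asarray(pixels, dtype=float) / 255.0
    return MIN_RATE_HZ + norm * (MAX_RATE_HZ - MIN_RATE_HZ)

# A toy 2x2 camera frame: black, mid-grey, bright, white pixels.
frame = np.array([[0, 128], [200, 255]])
rates = intensity_to_pulse_rate(frame)
print(rates)  # darkest pixel -> 10.0 Hz, brightest -> 200.0 Hz
```

The only point of the sketch is the direction of the data flow: light intensity in, neural stimulation parameters out.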

In California, scientists have developed a device that can register brain waves and, through analysis, find among them the consonants and vowels, in this way transforming our thoughts into words. A paralyzed man was able to use this device to write without a keyboard. At present the accuracy of the device reaches 90%. The scientists believe that within five years they will manage to develop a smartphone to which their device could be connected (see this).
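The classification step in such a device can be sketched in miniature. The following toy example is not the Californian team's algorithm (which the source does not describe); it merely shows the generic pattern behind many EEG decoders: extract band-power features from the signal, then classify with a nearest-centroid rule. The two classes, sampling rate and synthetic signals are all assumptions for the demonstration.

```python
import numpy as np

FS = 256  # assumed sampling rate (Hz)

def band_power(signal, low, high, fs=FS):
    """Mean spectral power of `signal` inside the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

def features(signal):
    """Feature vector: power in classic EEG bands (theta, alpha, beta)."""
    return np.array([band_power(signal, 4, 8),
                     band_power(signal, 8, 13),
                     band_power(signal, 13, 30)])

def make_trial(dominant_hz, rng, n=FS):
    """One second of synthetic 'EEG': a dominant rhythm plus noise."""
    t = np.arange(n) / FS
    return np.sin(2 * np.pi * dominant_hz * t) + 0.5 * rng.standard_normal(n)

rng = np.random.default_rng(0)
# Two hypothetical classes: alpha-dominant (10 Hz) vs beta-dominant (20 Hz).
train = {label: np.mean([features(make_trial(hz, rng)) for _ in range(20)], axis=0)
         for label, hz in [("alpha", 10), ("beta", 20)]}

def classify(signal):
    """Nearest-centroid decision in band-power feature space."""
    f = features(signal)
    return min(train, key=lambda lbl: np.linalg.norm(f - train[lbl]))

print(classify(make_trial(10, rng)))  # 'alpha'
print(classify(make_trial(20, rng)))  # 'beta'
```

Real speech decoding uses invasive recordings and far richer statistical models; the sketch only conveys the feature-extraction-then-classification structure that figures like the reported 90% accuracy are measured against.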

Just as in the case of visual perception, once the algorithms by which the brain processes words are known, it is possible to generate the patterns of different words in a computer and transmit them into the brain at ultrasound frequencies, in this way producing particular “thoughts” in the human brain.

Everybody will easily fall for the proposal that, instead of typing or searching with a mouse, his computer or cell phone could react directly to his brain’s activity, take down his thoughts directly into documents, or carry out operations that have just occurred to him.

As a matter of fact, Apple and Samsung have already developed prototypes of the necessary electroencephalographic equipment, which can be placed on top of the head and transmit the electromagnetic waves produced by the brain to prototypes of new smartphones. The smartphones are meant to analyze those waves, work out the intentions of their owners and carry them out. Apple and Samsung expect that this direct connection with the brain will gradually replace computer keyboards, touch screens, mice and voice commands (see this). Once the system is complete, it will be feasible for hackers, government agencies and foreign governments’ agencies to implant thoughts and emotions in people’s minds and “hearts” whenever they are connected to the Internet or cell phone systems.

In 2013, scientists in the USA were able to infer people’s political views from their brain activity and distinguish Democrats from Republicans, and in 2016 scientists used transcranial magnetic stimulation to make experimental subjects more receptive to criticism of their country than participants whose brains were unaffected (see this).

Last year the historian Yuval Noah Harari was invited to deliver a speech at the World Economic Forum in Davos. When introducing him, the editor of the British daily the Financial Times stressed that it is not usual to invite a historian to speak to the world’s most important economists and politicians. In his speech, Yuval Noah Harari warned against the rise of a new totalitarianism based on access to the human brain. He said:

“Once we have algorithms that can understand you better than you understand yourself, they could predict my desires, manipulate my feelings and even make decisions on my behalf. And if we are not careful the outcome can be the rise of digital dictatorships. In the 21st century we may be enslaved under digital dictatorships.”

In a similar way, Poppy Crum, a researcher in neurology at Stanford University and Dolby Labs’ chief scientist, warned at a conference in Las Vegas:

“Your devices will know more about you than you will. I believe we need to think about how [this data] could be used.”

In April 2017, Marcello Ienca, a neuroethicist at the University of Basel, and Roberto Andorno, a human rights lawyer at the University of Zurich, published the article “Towards new human rights in the age of neuroscience and neurotechnology” in the journal Life Sciences, Society and Policy, in which they called for legislation to protect the human right to freedom, and other human rights, from the abuse of technologies opening access to the human brain. In the article they wrote that “the mind is a kind of last refuge of personal freedom and self-determination” and that “at present, no specific legal or technical safeguard protects brain data from being subject to the same data-mining and privacy intruding measures as other types of information”. Among the world media, only the British newspaper The Guardian wrote about their proposal (see this). This suggests that in today’s democratic world there is no political will to forbid remote control of human thoughts and feelings, even though such a prospect violates elementary principles of democracy.

In 2016 and 2017, ten European organizations tried to convince the European Parliament and the European Commission to enact legislation that would ban the remote control of the activity of the human nervous system, since pulsed microwaves can already be used to manipulate the human nervous system at a distance (see this). Then, in 2017, nineteen world organizations addressed the G20 meeting with the same proposal. They received no positive response to their efforts.

To achieve a ban on the use of remote mind control technologies, it will be necessary to work out an international agreement. In the past century the USA and Russia built systems (HAARP and Sura) capable of producing, by manipulation of the ionosphere, extra-long electromagnetic waves at frequencies corresponding to the frequencies of the activity of the human nervous system, and in this way of controlling the brain activity of populations over vast areas of this planet (see this, “Psychoelectronic Threat to Democracy”). At the beginning of this year, China announced the building of a similar, more advanced system. The Chinese daily the South China Morning Post admitted in an article that the system could be used to control the activity of the human nervous system.

Politicians should, instead of classifying such weapons of mass destruction, make an effort to create a more democratic system of international politics to replace the current system of struggle for military power. Only in this way can conditions be provided for a ban on the use of such technologies. If this does not happen, in a few years there will be no chance to preserve democracy.

By Mojmir Babacek

Mojmir Babacek is the founder of the International Movement for the Ban of the Manipulation of the Human Nervous System by Technical Means. He is the author of numerous articles on the issue of mind manipulation.

Assassination is the murder of a person, often (but not always) a prominent figure or ruler, usually for political reasons or payment; the victim may also be a common man.

An assassination may be prompted by political or military motives; it is an act that may be done for financial gain, to avenge a grievance, from a desire to acquire fame or notoriety, or because of a military, security or insurgent group’s command to carry out the homicide.

The World Coalition against Covert Harassment is committed to raising awareness among legal systems, as well as the medical and scientific communities, of the crime of illegal biomedical and weaponry research committed on citizens in the European Union and beyond. As a European network, EUCACH acts as a lobbying and advocacy platform towards the EU. Using our international network of scientific and technology experts, partner civil and human rights organisations and important stakeholders in civil society, we provide consultancy services to the EU Institutions based on our expertise. EUCACH’s organisational goal is to influence EU legislation and the decision-making process in calling for a worldwide ban on weapons that might enable any form of manipulation of human beings.

Online-connected brains and neural networks.

“ICT” = Information and Communication Technologies

“BMI” = Brain Machine Interface, brain-computer interconnection

“FET” = Future and Emerging Technologies

“S.T.” = Synthetic Telepathy

“A.I.” = Artificial Intelligence.

On the road to mind control: DARPA's new program will use a chip to connect brains. For additional background to the latest press release from DARPA, posted in full below, I encourage you to read the following selection of linked articles, where I discuss the scope and chronology of what is being studied. Therein, you will find that the U.S. BRAIN Initiative and its European counterpart, the Human Brain Project, are not spending many billions of dollars on neuroscience research simply to help people with post-traumatic stress disorder and organic brain dysfunction. It is, perhaps first and foremost, a military endeavor that has wide ramifications if even one-tenth of what is being studied comes to fruition. In short, it is more about mind control than about brain restoration and improvement. Please keep this in mind when you read DARPA's emphasis on "new therapies."

•Obama Doubles Down on BRAIN Project and Military Mind Control

•Mind Control Scientists Find New Memory Manipulation Technology
•Secret DARPA Mind Control Project Revealed: Leaked Document
•The 9 Goals of Mind Control: Interim Report
•New Mind Reading Research Aims to Synchronize Humans
•Nanoparticles Enable Remote Control Brains Via Magnetic Field: New Study

DARPA Press Release:

A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.
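The scale of that resolution gap can be sketched with a toy model; the averaging behavior assumed here is a deliberate simplification for illustration, not a description of any actual device.

```python
# Back-of-the-envelope sketch of the resolution gap described above.
# Hypothetical model: a legacy electrode channel simply averages the
# activity of every neuron beneath it, so individual spikes are lost.

def channel_reading(spikes):
    """One channel's output: the mean activity of all covered neurons."""
    return sum(spikes) / len(spikes)

n_neurons = 10_000  # "tens of thousands of neurons" per channel

# Two entirely different firing patterns with the same overall rate:
pattern_a = [1 if i % 2 == 0 else 0 for i in range(n_neurons)]  # even neurons fire
pattern_b = [1 if i % 2 == 1 else 0 for i in range(n_neurons)]  # odd neurons fire

# The aggregated channel cannot tell them apart:
assert channel_reading(pattern_a) == channel_reading(pattern_b) == 0.5

# NESD's stated goal, by contrast, is individual addressing:
nesd_neurons = 1_000_000   # up to one million neurons
legacy_channels = 100      # today's approved interfaces
print(nesd_neurons // legacy_channels)  # 10000, i.e. a 10,000-fold jump
```

The point of the sketch is that an averaged signal discards exactly the per-neuron information the program aims to recover.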

Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.
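The "transcode, then compress with minimal loss of fidelity" requirement can be illustrated with the simplest possible transcoder, a uniform quantizer; the 8-bit width and the ±1 signal range below are arbitrary assumptions for illustration, not NESD specifications.

```python
# Toy transcoder: a uniform 8-bit quantizer mapping a continuous
# signal in [-1.0, 1.0] to integers and back. The bit width and range
# are arbitrary assumptions for illustration, not NESD specifications.

BITS = 8
LO, HI = -1.0, 1.0
LEVELS = 2 ** BITS - 1  # 255 code points

def quantize(samples):
    """Continuous samples -> integer codes (the 'ones and zeros' side)."""
    return [round((s - LO) / (HI - LO) * LEVELS) for s in samples]

def dequantize(codes):
    """Integer codes -> reconstructed samples."""
    return [LO + c / LEVELS * (HI - LO) for c in codes]

signal = [0.0, 0.25, -0.6, 0.999, -1.0]
codes = quantize(signal)
restored = dequantize(codes)

# "Minimal loss of fidelity": the error is bounded by half a step.
step = (HI - LO) / LEVELS
assert all(abs(a - b) <= step / 2 + 1e-12 for a, b in zip(signal, restored))
```

Real neural transcoding would be vastly more complex, but the trade-off is the same: more bits per sample means higher fidelity at the cost of bandwidth.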

To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.

To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD are available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought is available at: http://go.usa.gov/cP474.

DARPA anticipates investing up to $60 million in the NESD program over four years.

NESD is part of a broader portfolio of programs within DARPA that support President Obama's brain initiative. For more information about DARPA's work in that domain, please visit: http://www.darpa.mil/…/our-r…/darpa-and-the-brain-initiative.

FET and ICT research and development (of the new "computer-brain language") allow computers to read and learn human thought patterns by using injectable brain-machine interfaces.

The same type of brain-machine interface that has been used to treat Parkinson's disease, Alzheimer's and depression can also be used to access the brain's memory.

Professor Göran Hermerén, Opinion of the European Group on Ethics in Science and New Technologies to the European Commission, "Ethical Aspects of ICT Implants in the Human Body", March 2005.

Link to document

The new software for the brain-machine interface in nanoelectronics, combined with "Europe's new information technology", provides researchers with the possibility of reading and taking "software images" of the brain's neuron network.

A method that provides high-resolution copies of the brain's cognitive behavior and human perception.

Tomorrow's high-speed computers and related research have evolved into a sophisticated "computer game" with "mind reading" performed on real people…

These research methods are unknown to our society’s legal system!

So how can research on the brain by “serious” (criminal) organizations in ICT-FET be stopped?

SYNTHETIC TELEPATHY, whether for medicine or mind reading, is direct communication via nanoelectronics between a computer and the human brain.

Synthetic telepathy is a communication system based on thought, not speech.

It can be used, for example, to control a prosthesis and to treat diseases such as Parkinson's disease, but it can also decode the patterns used to study cognitive behavior such as memory, learning and emotion. With nanotechnology, the digital information can be recorded using the new quantum-inspired computers.

Simulated behavior can be identified and provide diagnostic data for identifying the precursors of diseases such as dementia, stroke and myocardial infarction.

This letter is intended to demonstrate a paradox in the Swedish so-called "protection of human rights" and thus in the entire Swedish justice system. Synthetic telepathy, as practiced in Sweden, could lie behind an unknown number of violent crimes and suicides, due to research deliberately kept hidden from regulators.

Doctors and psychiatrists diagnose people with "voices in their heads" following their manuals: the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders), published by the American Psychiatric Association, and the ICD-10 (International Statistical Classification of Diseases and Related Health Problems).

This is an old-fashioned, black-and-white way to diagnose at a time when the EU's priority FET and ICT research is developing new information technologies adapted to nanoelectronics.

Ulf Gorman writes in the book that with nanotechnology we open the doors to an unknown area where we do not know how to apply ethics. What should and should not be allowed when you implant a chip that can both read and influence the brain? He takes the example of studying learning and memory.

Micro-implants can provide unprecedented opportunities to understand how we learn and remember things, and hence why we forget and find it difficult to learn. And it can be understood as a form of abuse to look into our most private mental world in that way.

Lund University writes about the development of nano-electrodes that can both listen to and communicate with neuronal synapses and their cell membranes.

The EU's priority ICT and FET research speaks of "a whole new communication technology in Europe": "It will help us understand and exploit the ways in which social and biological systems organize and evolve, and will pave the way for the development of new opportunities for next-generation software and network technologies."
"Understanding how the human brain works not only leads to innovations in medicine but also provides new models for energy-efficient, fault-tolerant and adaptive computing technologies."
An initiative on Virtual Human Physiology aims at individually tailored virtual simulations of the human body, from which enormous progress in disease prevention and health care can be expected.
Pioneering work is also being carried out on new ideas such as artificial living cells, synthetic biology, chemical communication, collective intelligence and two-way interfaces between brain and machine.

Other sources, e.g. UCI (University of California, Irvine, Department of Cognitive Sciences), describe the development of Synthetic Telepathy:

Collaboration between cognitive science, neuroscience, and specialists in speech recognition and brain imaging will develop a brain-machine interface. This device could help the paralyzed, and soldiers would be able to send messages directly from the brain to a computer.

Researcher Michael D'Zmura, chair of UCI's Department of Cognitive Sciences, describes how the system begins with "the little voice in your head".

How can a medical diagnosis unequivocally attribute people's perception of voices in their heads to mental disease, as if no alternative could exist?

Why? Because being able to "hear" voices is a prerequisite for the use of the new information technology?

This is why these researchers must be forced to go public and announce this new scientific communication technology.

This paradox must be investigated immediately, without excluding the possibility that a Swedish and European military force is being developed with superior two-way "radio" communication with the brain.

A number of past court cases were more or less directly caused by S.T. This "voice to skull" technology must immediately be considered as an alternative trigger for a number of previously committed violent crimes and suicides.

Alongside the development of BMI, software and network technologies now also enable the computerized, long-distance imaging of people's cognitive behavior and perception. The material is recorded in the computer that runs the real-time simulation and the creation of artificial intelligence (for initializing A.I. and computerized decision-making).

The image of the brain's "machine code" is probably the most comprehensive and advanced ever made. Cognitive behavior is depicted and simulated; the language and the meaning of words for the subjects are identified. In human perception and in mapping how the brain handles information, the image of mathematics has reached its perfection.

A new research scandal, with fatal consequences for the wider community.

Around the clock, the computerized study goes on, using collective artificial intelligence and self-learning systems. The victims' testimonies sadly lead to the unequivocal conclusion that the studies will not be completed until the victim has, in one way or another, been broken down and/or otherwise inactivated.

This provides the power to simulate decomposition in the digital copy, which means that the criminal research never ends and is never disclosed.

According to vulnerable people who take psychotropic drugs because of their experiences, the experiments and testing go on: various medicines can affect the "test objects" (or guinea pigs), and everything is recorded and compared with previous values (from the multi-year copy of their real-time neural network, and thus the registration of their behavior).

Obviously, this is a disgusting and illegal way to pursue the advanced development of tomorrow's medicine.

The "studies" have resulted in enormous damage to many subjects. The number of unrecorded victims/subjects is probably very high. Families are fragmented, and the children of these families suffer tremendously. One of the subjects was recently hospitalized with a cracked skull, caused by disconnecting the balance system remotely. The accident occurred in a public setting.

Influencing the balance system is another typical example of how the technology is used against victims to incapacitate them for society. Direct assassination attempts on several previous occasions, orchestrated by the brain control, deliberately knock out the human balance system; this likely happens so that the victims will be treated as mentally ill ("possessed") or as having a neurological disease.

Brain copying with BMI and broadband access is definitely no longer marginal research. Employers have lost millions of dollars over the years, and cut-backs are to be expected because of excluded employees. Broken families, and children who years later cannot study or work, are now forced to seek psychiatric care.

Siblings, grandparents and their closest friends suffer tremendously. Advanced medical and hospital care, shaped by the study design, is a mockery of health care and of doctors who are not familiar with this research. Insurance payouts in the millions have been triggered by direct damage caused by brain control. Property worth millions of dollars has been lost.

Injury and privacy intrusion against thousands of people are carried out through "volunteers" that serve as multimedia machines, i.e. nodes for the recording of all the contacts they have, such as politicians, scientists, lawyers, friends, acquaintances, relatives, international business relations, etc.

With the new technology, security codes, access codes, etc. are no longer private. How can we know that people are not already equipped with the new brain-machine interface, which makes a person a multimedia application functioning like a live missile, deployed as nodes in the political and financial world for the illegal recording of their conversations with the world?

A sinister joy seems to embrace the researchers and perpetrators over this superior and powerful tool for mind control and the copying of brainwave patterns. The tool is, without a doubt, a weapon of offense and stealth.

Implant technology in these forms should immediately be classified as a lethal weapon! It communicates directly with the brain's neurons and can bring the entire nervous system to a halt. (Disabling of the balance system can be inflicted on a victim instantly.)

The technology is now used to break down a person's psyche, with serious accusations, threats, mock executions, incitement to suicide and physical violence: e.g. decreasing and increasing the heart rate, pain in and around the heart, severe chest pains, sudden and painful headaches, difficulty breathing, and tampering with the rectum, prostate and muscles, to name a few.

The macabre aspect of the use of this technology is that subjects are exposed to these atrocities while society is legally unable to influence the situation. In this matter Sweden is a lawless country where the researchers grossly exploit the situation.

This ultimate humiliation has reached a whole new level. With enforced communication and slow decomposition, the researchers try to enslave people, with exhortations to observe the individual results of the brain control. Given the nature and the multi-year perspective of the abuse, one can assume there is an unrecorded number of people who have been driven to madness and death with this technology.

There is nowhere the victims can hide or escape the signals accessing their thoughts. Scientists run computer simulations 24/7, 365 days a year, to break down the subjects and stop them from trying to understand what is happening to them…

A series of grotesque roles is played in order to manipulate the brain; threats and statements are mixed with modern technology. It is also part of the researchers' strategy to blur the picture for the subjects if they attempt to form an overall view of who the perpetrator is and of the real goals of the research: a military strategy conducted by veterans and experts in these matters.

The disrespectful research is performed and visualized in a 3D virtual game world on the researchers' computers, with no ethical boundaries or human rights, but with real, living human beings as "avatars". It is quite similar to the popular interactive game "The Sims", but this game delivers reality-based human measurement data for research.

The cover-up of brain monitoring technologies means that the crime is watertight: human rights laws are circumvented and manipulated by a hidden, militant regime of researchers with expertise in information technology. In addition, the researchers claim in the dialogue to be the police authority, which is in itself an extremely serious offense.

The victims are former high-performing, hard-working people with families, children, orderly lives and social contacts; people who have been productive taxpayers all their lives. A work-related mental fatigue, and the time spent in medical therapy sessions, gave the scientists the opportunity to exploit these persons' situation for investigation and for the ongoing development of the new BMI and brain monitoring technologies.

As these technologies and opportunities are not announced, but have in several instances been written about, it must be possible to use the knowledge and skills that are available. Sweden is a small country, and the people engaged in this activity should not be so difficult to identify and stop.

The researchers in these studies have assumed the right, through the permanent reproduction of human neurotransmitters, to destroy their subjects and their life's work in the long term. The question that must be asked is: how is an affected society to know how much of the violent crime and suicide in the Stockholm area and in Europe in recent years, clearly diagnosed as mental illness because of the voices in people's heads, is in fact an expression of pure brain research?

By: Magnus Olsson

HOW WE DO IT…

Awareness-raising campaigns to disseminate information and mobilise public opinion on the issue of covert technologies and techniques that enable the manipulation of human beings

Providing expert consultancy to key decision-makers on the creation of appropriate EU legislation to protect citizens from these kinds of covert crimes.

Organising networking events (workshops, seminars, conferences) involving all actors concerned to exchange experiences and best practices for the establishment of clear ethical boundaries to strictly regulate the use of systems enabling the manipulation and control of human beings.

It is our philosophy that all men are equal before the law. Everybody's right to life shall be protected. Nobody shall be subjected to torture or held in slavery. Any technologies and techniques capable of endangering human physical and/or psychological health, of modifying individuals' autonomy or of affecting their dignity should be strictly prohibited.

Read more about ICT – implants at the European Group on Ethics and New Technologies:

http://ec.europa.eu/…/european-group-eth…/docs/avis20_en.pdf

Terminology used in the document:

Cybernetics

By: Magnus Olsson  (Sweden)

Humans Will Have Cloud-Connected Hybrid Brains by 2030, Ray Kurzweil Says

So, you think you've seen it all? You haven't seen anything yet. By the year 2030, advancements in human intelligence will exceed anything we've seen before. In fact, predictions offer glimpses of something truly amazing: the development of a human hybrid, a mind that thinks in artificial intelligence.

Ray Kurzweil, director of engineering at Google, spoke openly about this idea at the Exponential Finance Conference in New York. He predicts that humans will have hybrid brains able to connect to the cloud, just as computers do. In this cloud, thousands of computers will update human intelligence; the larger the cloud, the more complicated the thinking. It will all be connected by nanobots made from DNA strands. Sounds like a sci-fi movie, doesn't it?

Kurzweil says:

“Our hybrid thinking will be a combination of biological and non-biological thought processes.”

By the end of 2030, our thinking should be almost entirely non-biological and able to function much like an external hard drive, with the ability to back up information just as technology does. It seems we keep pushing the ability of the human mind ever further.

Kurzweil believes one of the true characteristics of being human is the ability to continually surpass its own limitations.

"We will always transcend our limitations – it's human nature," says Kurzweil.

Kurzweil hasn't been 100% accurate in his predictions, but he has been close. In 1990, he predicted several things for the year 2009, including portable computers and eyeglasses with a built-in computer screen. He didn't, however, hit the nail on the head with self-driving cars: it was much later, this year to be exact, that the idea touched the edge of mainstream technology. He has been 86% accurate in his predictions, which is astonishing in itself.

Technological Takeover

No worries: there will probably not be a massive takeover by artificial intelligence. We accounted for this long ago in other theories. For instance, fire provides a way to cook, yet we have somehow managed to keep from burning everything down. The same rules apply here. We have taken the necessary precautions to safeguard ourselves from these horrors.

However, we must still play it safe. Kurzweil reminds us:

“Technology is a double-edged sword. It has its promise and its peril.”

Smart Dust: Real-time Tracking Of Everything, Everywhere…

TN Note: DARPA is a driver of Technocracy in the 21st Century. Its creation of computerized microscopic sensors no larger than a speck of dust will surpass the Internet of Things (IoT) by orders of magnitude. Known as "Smart Dust", an area can be blanketed with them to achieve 100% real-time monitoring of everything in every nook and cranny. Smart Dust can also be incorporated into fabric, building materials, paint or any other substance used in construction, decoration or wearables.

The year is 2035, and Sgt. Bill Traverse and his team of commandos are performing a "sweep and clean" operation through a portion of war-torn Mexico City. Their job is to find any hidden pockets of resistance and flush them out and back through the neutral zone, or eliminate them. The drones that provide surveillance overhead cannot offer much support in the twisting alleys and passageways of the sprawling metropolis, and the helmet-based HUD systems that soldiers are equipped with are useless in a city where all technical infrastructure was destroyed years ago.

Sgt. Traverse isn't navigating blind, though. He and his team use Dust, portable packets of sensors that float in the air throughout the entire city and track movement, biometric indicators, temperature change and the chemical composition of everything in the city. The Dust sensors send information back to their HUD displays through a communications receiver carried by a member of the team. Traverse can tell, from the readings that Dust gives him, whether there are people around the next corner and whether they are holding weapons. His team can then proceed accordingly …

This scene of Sgt. Traverse and his merry men is a fiction. The concept of Dust is not.

The idea of the Internet of Things is so passé. The general concept of the Internet of Things is that we can put a sensor on anything and have it send data back to a database through the Internet. In this way we can monitor everything, everywhere and build smarter systems that are more interactive than ever before.

Putting sensors on stuff? Boring. What if the sensors were in the air, everywhere? They could monitor everything—temperature, humidity, chemical signatures, movement, brainwaves—everything.

The technology is called Smart Dust and it’s not quite as crazy (or as new) as you might think.

Smart Dust as a concept originated out of a research project by the United States Defense Advanced Research Projects Agency (DARPA) and the Research And Development Corporation (RAND) in the early 1990s. We use the military anecdote above because it was these military research groups that first conceptualized Smart Dust but the practical application of the technology can be applied to almost any industry. Dust in the fields monitoring the crops. Dust in the factories monitoring the output of machines. Dust in your body monitoring your entire state of well being. Dust in the forests tracking animal migration patterns, wind and humidity.
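As a rough sketch of the sensing-and-aggregation idea described above, a dust mote's payload and a base station's summary might look like the following; all field names and values are hypothetical illustrations, not any real Smart Dust protocol.

```python
# Hypothetical sketch of a dust-mote payload and a base station's
# aggregation. Field names and values are illustrative only; they do
# not reflect any real Smart Dust protocol.
from dataclasses import dataclass
from statistics import mean

@dataclass
class MoteReading:
    mote_id: int
    temperature_c: float
    humidity_pct: float
    movement: bool  # motion detected near this mote?

def summarize(readings):
    """What a receiver might compute from a swarm of motes."""
    return {
        "motes": len(readings),
        "avg_temp_c": mean(r.temperature_c for r in readings),
        "movement_detected": any(r.movement for r in readings),
    }

swarm = [
    MoteReading(1, 21.5, 40.0, False),
    MoteReading(2, 22.1, 38.5, True),   # something moved near mote 2
    MoteReading(3, 21.9, 39.2, False),
]
report = summarize(swarm)
print(report["movement_detected"])  # True
```

The design point is that each mote sends a tiny record, and all of the interpretive work happens at the receiver.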

The entire world could be quantified with this type of ubiquitous sensor technology. But how does it really work?

READ MORE:  http://technocracy.news/index.php/2015/10/22/smart-dust-real-time-tracking-of-everything-everywhere/

They Really Do Want To Implant Microchips Into Your Brain

Michael Snyder
American Dream
Aug 2, 2012

Are you ready to have a microchip implanted into your brain? That might not sound very appealing to you at this point, but this is exactly what the big pharmaceutical companies and the big technology companies have planned for our future.

 

They are pumping millions of dollars into researching "cutting edge" technologies that will enable implantable microchips to greatly "enhance" our health and our lives. Of course, nobody is going to force you to have a microchip implanted into your brain when they are first introduced. Initially, brain implants will be marketed as "revolutionary breakthroughs" that can cure chronic diseases and enable the disabled to live normal lives. Once the "benefits" of such technology are demonstrated to the general public, most people will soon want to become "super-abled".

Just imagine the hype that will surround these implants when people discover that you can get rid of your extra weight in a matter of days or that you can download an entire college course into your memory in just a matter of hours. The possibilities for this kind of technology are endless, and it is just a matter of time before having microchips implanted into your brain is considered to be quite common. What was once science fiction is rapidly becoming reality, and it is going to change the world forever.

But aren’t there some very serious potential downsides to having microchips implanted into our brains?

Of course there are.

Unfortunately, this technology is not as far off as you might think, and most people are not even talking about what the negative consequences might be.

According to a recent article in the Financial Times, the pharmaceutical company of the future will include a “bioelectronics” business that “treats disease through electrical signalling in the brain and elsewhere.”

Diseases such as diabetes and epilepsy, and conditions such as obesity and depression, will be treated "through electronic implants into the brain rather than pills or injections."

These implants will send electrical signals to cells and organs that are “malfunctioning”. People will be totally “cured” without ever having to pop a pill or go under the knife.

It sounds too good to be true, right?

Well, the Financial Times says that British pharmaceutical giant GlaxoSmithKline is working very hard to develop these kinds of technologies. Moncef Slaoui, the head of research and development at GlaxoSmithKline, says that the “challenge is to integrate the work – in brain-computer interfaces, materials science, nanotechnology, micro-power generation – to provide therapeutic benefit.”

If a brain implant could cure a disease that you have been suffering from your whole life would you take it?

A lot of people are going to be faced with that kind of a decision in future years.

And this kind of technology is advancing very rapidly. In fact, some researchers have already had success treating certain diseases by implanting microchips into the brains of rats. The following is from a recent Mashable article….

Stroke and Parkinson’s Disease patients may benefit from a controversial experiment that implanted microchips into lab rats. Scientists say the tests produced effective results in brain damage research.

Rats showed motor function in formerly damaged gray matter after a neural microchip was implanted under the rat’s skull and electrodes were transferred to the rat’s brain. Without the microchip, rats with damaged brain tissue did not have motor function. Both strokes and Parkinson’s can cause permanent neurological damage to brain tissue, so this scientific research brings hope.

In addition, the U.S. government has been working on implantable microchips that would monitor the health of our soldiers and enhance their abilities in the field.

So this technology is definitely coming.

But it must be very complicated to get a microchip implanted into your brain, right?

Actually it is fairly simple.

According to an article in the Wall Street Journal, the typical procedure is very quick and often requires just an overnight stay in the hospital….

Neural implants, also called brain implants, are medical devices designed to be placed under the skull, on the surface of the brain. Often as small as an aspirin, implants use thin metal electrodes to “listen” to brain activity and in some cases to stimulate activity in the brain. Attuned to the activity between neurons, a neural implant can essentially “listen” to your brain activity and then “talk” directly to your brain.

If that prospect makes you queasy, you may be surprised to learn that the installation of a neural implant is relatively simple and fast. Under anesthesia, an incision is made in the scalp, a hole is drilled in the skull, and the device is placed on the surface of the brain. Diagnostic communication with the device can take place wirelessly. When it is not an outpatient procedure, patients typically require only an overnight stay at the hospital.

But is it really safe to have a device implanted into your head that can “talk” directly to your brain?

Many large corporations are banking on the fact that, in a world always hungry for new technology, most people will not be bothered by such things.

For example, Intel is working on sensors that will be implanted in the brain that will be able to directly control computers and cell phones. The following is an excerpt from a Computer World UK article….

By the year 2020, you won’t need a keyboard and mouse to control your computer, say Intel researchers. Instead, users will open documents and surf the web using nothing more than their brain waves.

Scientists at Intel’s research lab in Pittsburgh are working to find ways to read and harness human brain waves so they can be used to operate computers, television sets and cell phones. The brain waves would be harnessed with Intel-developed sensors implanted in people’s brains.

The scientists say the plan is not a scene from a sci-fi movie; Big Brother won’t be planting chips in your brain against your will. Researchers expect that consumers will want the freedom they will gain by using the implant.

Once again, this is not something that will be forced on you against your will.

These big corporations are banking on the fact that a lot of people will want to get these brain implants.

Even now, some video game makers are developing headsets that allow users to play games using their brain waves rather than a joystick or a control pad.

Other companies want to make it possible to directly connect your brain to the Internet.

As I have written about previously, IBM is aggressively working to develop this kind of technology. The following is from a recent IBM press release….

IBM scientists are among those researching how to link your brain to your devices, such as a computer or a smartphone. If you just need to think about calling someone, it happens. Or you can control the cursor on a computer screen just by thinking about where you want to move it.

Scientists in the field of bioinformatics have designed headsets with advanced sensors to read electrical brain activity that can recognize facial expressions, excitement and concentration levels, and thoughts of a person without them physically taking any actions.

The potential “benefits” of such technology are almost beyond imagination. An article on the website of the Science Channel put it this way….

If you could pump data directly into your gray matter at, say, 50 Mbps — the top speed offered by one major U.S. internet service provider — you’d be able to read a 500-page book in just under two-tenths of a second.
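The claim checks out on a back-of-envelope basis. A quick sketch of the arithmetic, assuming roughly 2,000 characters (one byte each) per printed page — a figure not stated in the article:

```python
# Sanity-check the "500-page book in under two-tenths of a second" claim.
# Assumption (not from the article): ~2,000 one-byte characters per page.
BYTES_PER_PAGE = 2_000
PAGES = 500
LINK_MBPS = 50  # 50 megabits per second

book_bits = PAGES * BYTES_PER_PAGE * 8          # total book size in bits
seconds = book_bits / (LINK_MBPS * 1_000_000)   # transfer time at 50 Mbps

print(f"{seconds:.2f} s")  # prints "0.16 s" -- just under two-tenths of a second
```

A 1 MB book is 8 million bits, so at 50 million bits per second the transfer takes 0.16 seconds, consistent with the article's figure.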

How would the world change if you could download a lifetime of learning directly into your brain in a matter of weeks?

The possibilities are endless.

But so is the potential for abuse.

Implantable microchips that can “talk” directly to the brain would give a tyrannical government the ultimate form of control.

If you could download thoughts and feelings directly into the brains of your citizens, you could achieve total control and never have to worry that they would turn on you.

In fact, you could potentially program these chips to make your citizens feel good all the time. You could have these chips produce a “natural high” that never ends. That would make your citizens incredibly dependent on the chips and they would never want to give them up.

This kind of technology has the potential to be one of the greatest threats to liberty and freedom in the history of mankind.

At first these implantable microchips will be sold to us as one of the greatest “breakthroughs” ever, but in the end they could end up totally enslaving us.

So I will never be taking any kind of a brain implant, and I hope that you will not either.
