Thursday, August 13, 2015

'Killer robots' with AI must be banned, urge Stephen Hawking, Noam Chomsky and thousands of others in open letter

Is it a coincidence that the mainstream media is talking about banning artificial intelligence just as a growing buzz about it builds in the alternative media?

According to physicist Harald Kautz-Vella, who spent several years researching the self-replicating nanotechnologies found in geoengineering programs (chemtrails) and in Morgellons sufferers, we have been saturated in AI technology for decades. This technology uses heavy elements as well as artificial RNA to implant itself within an individual's biology, for the purpose of tracking, and of transmitting biophotonic light energy to and from the host.

Related Analysis of A.I. Influences on Earth and ET Civilizations | GoodETxSG NAZI "Alien Reproduction Vehicle"/ARV - Nazi Die Glocke - "Gold from Mercury Problem"

Exposure rates are 100% within the first world and heavily sprayed areas. While all this sounds ominous, to say the least, the good news is that these systems have been in place for at least 20 years, with only subtle suggestion as their strongest tool for manipulating consciousness. And there are techniques for unraveling intrusion into one's body and life. Like any subliminal suggestion, it can work against us so long as we remain ignorant of it, but once we develop an awareness of these influences, we can override them.

Related How to Protect The Body Against AI Assimilation (Morgellons) | 6 Bodily Tissues That Can Be Regenerated Through Nutrition

The reason AI, even in well-intentioned applications, can be problematic is its absolute adherence to its programming. Unlike humans, who can change their programming (values, beliefs and worldviews), an AI is hardwired for whatever the original programmer wrote. Any computer program is almost assured to fall short of real-world challenges and integration at some point, hence programs need to be rewritten and upgraded as unforeseen events transpire. In computer programming circles it is well known that a program does not make mistakes; it follows its programming to the letter, revealing the programmer's errors. It is our inability to properly anticipate events that makes using AI very risky.
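The point that a program "makes no mistakes" but faithfully exposes its programmer's errors can be shown with a minimal, hypothetical sketch (the function and values below are invented for illustration):

```python
# A program intended to compute the arithmetic mean of some values.
# The programmer's error: floor division (//) was written instead of
# true division (/). The program executes exactly what was written,
# not what was intended.

def average(values):
    return sum(values) // len(values)  # bug: floors the result

result = average([1, 2])
print(result)  # prints 1, not the intended mean of 1.5
```

The machine carried out its instructions to the letter; the flaw it reveals belongs entirely to the human who wrote them.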

Enter the AI drone or soldier debate. The military-industrial complex has long desired a highly obedient yet adaptable soldier, able to execute commands without question and to anticipate needs so as to change battle tactics in real time. Human beings are programmable, but human nature allows us to change, providing an unending source of risk and error for military activities. For example, in the Civil War, 50% of soldiers reportedly froze when aiming at a living target, and in some cases instinctively missed, despite standard training and practice shooting. What was the problem? Empathy.
"Lt. Col. Dave Grossman, a psychologist and professor of military science, looked at this evidence and concluded “that there is within most men an intense resistance to killing their fellow man. A resistance so strong that, in many circumstances, soldiers on the battlefield will die before they can overcome it.” In some ways this isn’t all that surprising. Very few people would seek out an opportunity to kill others. At the same time, you may find it hard to believe that it is sometimes impossible for soldiers to kill others even when their own lives are at risk." - The Psychology of Killing and the Origins of War
Empathy is 'hardwired' into living organisms, with increasing levels of actualization in higher-order life. Modern science has acknowledged this in discovering so-called mirror neurons, which fire both when an organism takes an action and when it observes that action in another. This is why, when we watch another getting hurt, we react as if we had been hurt ourselves. This subconscious and unavoidable process is the foundation of empathy, which in turn is the basis for morality.

A computer program can emulate human behavior, the outward manifestation of experience, but it cannot replicate the inner experience of intuition, imagination and empathy. The ability to see another as oneself, to contemplate how one would feel if one were so treated, provides the precursory data and awareness needed to develop an ethical point of view. But since AI has no such process, since it navigates the world by differentiating itself from other things, it has no basis on which to make a moral choice.

Related Self Mastery and Discernment Are Essential To Avoid A.I. Enslavement | Bio-Technology Hybrids Open the Door to Extraterrestrials AI Robots Replacing Humanity

When one considers the notion of an AI drone or soldier, able to execute orders, fire weapons and destroy targets without empathy, the dilemma becomes clear. The following article reveals that we are much closer to this dream of morally bereft soldiers than we may think. And as several of the signatories mentioned below have said, we would do well to ensure no such automated killing machine ever comes into existence.

Undoubtedly, proponents of such technology will speak of limiting human casualties, of sparing "our boys over there" from risking their lives. But when we consider the reasons for war and conflict, those gains pale in comparison to the cost of making death-dealing machines. We need only look out into the world to see how acceptance of an unvalidated idea leads to chaos and disorder. Any limited gains we may realize as a result of this technology will come at a heavy price. Perhaps we should ponder the allegories of science fiction, such as The Matrix and the Terminator series, to explore what releasing this technology could mean.

Related Sophia Stewart Mother of The Matrix | Kerry Cassidy Interviews the Real Author of The Matrix & The Terminator
- Justin

Source - Independent

The letter claims that totally autonomous killing machines could become a reality within 'years, not decades'

More than 1,000 robotics experts and artificial intelligence (AI) researchers - including physicist Stephen Hawking, technologist Elon Musk, and philosopher Noam Chomsky - have signed an open letter calling for a ban on "offensive autonomous weapons", or as they are better known, 'killer robots'.

Other signatories include Apple co-founder Steve Wozniak, and hundreds of AI and robotics researchers from top-flight universities and laboratories worldwide.

The letter, put together by the Future of Life Institute, a group that works to mitigate "existential risks facing humanity", warns of the danger of starting a "military AI arms race".

These robotic weapons may include armed drones that can search for and kill certain people based on their programming, the next step from the current generation of drones, which are flown by humans who are often thousands of miles away from the warzone.

The letter says: "AI technology has reached a point where the deployment of such systems is - practically if not legally - feasible within years, not decades."

It adds that autonomous weapons "have been described as the third revolution in warfare, after gunpowder and nuclear arms".

It says that the Institute sees the "great potential [of AI] to benefit humanity in many ways", but believes that the development of robotic weapons, which it said would prove useful to terrorists, brutal dictators, and those wishing to perpetrate ethnic cleansing, is not among them.

Such weapons do not yet truly exist, but the technology that would allow them is not far away. Opponents, like the letter's signatories, believe that by eliminating the risk of human deaths, robotic weapons (the technology for which will become cheap and ubiquitous in coming years) would lower the threshold for going to war, potentially making wars more common.

Sentry robots like these are currently in use by South Korea along the North Korean border - but cannot fire their weapons without human input
Last year, South Korea unveiled similar weapons: armed sentry robots that are currently installed along the border with North Korea. Their cameras and heat sensors allow them to detect and track humans automatically, but the machines need a human operator to fire their weapons.

The letter also warns of the possible public-image impact on the peaceful uses of AI, which could bring significant benefit to humanity. It warns that building robotic weapons could provoke a public backlash, curtailing the genuine benefits of AI.

It sounds very futuristic, but this field of technology is advancing at a rapid rate, and opposition to the violent use of AI is already growing.

Physicist Stephen Hawking is one of the more famous signatories to the letter
The Campaign to Stop Killer Robots, a group formed in 2012 by a list of NGOs including Human Rights Watch, works to preemptively ban robotic weapons.

They are currently working to get the issue of robotic weapons onto the agenda of the Convention on Certain Conventional Weapons in Geneva, a UN-linked body that restricts certain conventional weapons such as landmines and blinding laser weapons; the latter were preemptively banned in 1995, as the Campaign hopes autonomous weapons will be.

The Campaign is trying to get the Convention to set up a group of governmental experts which would look into the issue, with the aim of having such weapons banned.

Earlier this year, the UK opposed a ban on killer robots at a UN conference, with a Foreign Office official telling The Guardian that they "do not see the need for a prohibition" of autonomous weapons, adding that the UK is not developing any such weapons.

