Friday, December 1, 2017

Facebook Announces It Will Use A.I. To Scan Your Thoughts “To Enhance User Safety”


(Zero Hedge) A mere few years ago, the idea that artificial intelligence (AI) might be used to analyze aberrant human behavior on social media and other online platforms and report it to law enforcement was merely the far-out premise of dystopian movies such as Minority Report, but now Facebook proudly brags that it will use AI to “save lives” based on behavior and thought pattern recognition.



Related: Organic vs. Artificial Immortality | Cyborg ET Races, AI Black Goo, 'Wave X', The Solar Shift & Organic Evolution Via Truth Receptivity 


Source - The Daily Sheeple

by Zero Hedge, November 28th, 2017

What could go wrong?

The latest puff piece in Tech Crunch profiling the innocuous-sounding “roll out” of AI (as if a mere modest software update) “to detect suicidal posts before they’re reported” opens with the glowingly optimistic line, “This is software to save lives” – so who could possibly doubt such a wonderful and benign initiative which involves AI evaluating people’s mental health? Tech Crunch’s Josh Constine begins:
This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

CEO Mark Zuckerberg has long hinted that his team has been wrestling with ways to prevent what appears to be a disturbing rise in live-streamed suicides, as well as the much larger social problem of online bullying and harassment. One recent example which gained international media attention was a bizarre incident out of Turkey, where a distraught father shot himself on Facebook Live after announcing that his daughter was getting married without his permission. Though the example actually demonstrates the endlessly complex and unforeseen variables involved in human decision making and the human psyche – in this case notions of rigid Middle East cultural taboos and stigma clearly played a part – Tech Crunch holds it up as something which AI could possibly prevent.

Earlier this year Zuckerberg wrote in a public post that “There have been terribly tragic events – like suicides, some live streamed – that perhaps could have been prevented if someone had realized what was happening and reported them sooner… Artificial intelligence can help provide a better approach.” And in a post yesterday announcing the new AI suicide prevention tool integration, he wrote that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”

Naturally, we must ask: what does Mark mean by the eerily ambiguous reference to being “able to identify different issues beyond suicide as well”?

With the debate already long raging about how “bullying and hate” gets interpreted and labelled, and with multiple high profile instances of such accusations being used to censor and limit constitutionally protected speech, Zuckerberg now “reassures” us that we can place such sensitive and highly interpretive questions in the hands of machines. Tech Crunch awkwardly tries to preempt such obvious (and horrifying) concerns while ultimately concluding “we have little choice” but to embrace it and “hope Facebook doesn’t go too far”:
The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects about the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.
Unembarrassed by such an assertion, author Josh Constine further includes the following update: “Facebook’s chief security officer Alex Stamos responded to these concerns with a heartening tweet signaling that Facebook does take seriously responsible use of AI.” And Constine follows with some not very “heartening” news – though his agenda is clearly to shove Facebook’s social vision of a future benign AI monitoring technology which regulates and enforces social “norms” down the public’s collective throat. In what itself sounds like a dystopian phrase worthy of Skynet, we are further told “you will not opt out!”:

Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.

And if this is not enough to turn the public’s stomach, the glowing review ends by again reasserting Facebook’s “responsibility” to implement its AI tools, as “Creating a ubiquitous global communication utility comes with responsibilities beyond those of most tech companies, which Facebook seems to be coming to terms with.” Essentially, the familiar argument goes, the public should just “trust us” as this is for our “safety” and we are benign and humanitarian, says Facebook.




Ironically, as Facebook continues to tout its claims of protecting democracy by taking steps to “ensure the integrity of elections,” as Zuckerberg has frequently stated, it will now actively and openly pursue an AI-regulated future which, as Elon Musk has personally warned Zuckerberg, will likely be the very source of tyranny and the ultimate destruction of future humanity.


As Plato predicted nearly 2500 years ago, “We should expect tyranny to result from democracy, the most savage subjection from an excess of liberty” (Republic, Book VIII, 564 a).


_________________________
Stillness in the Storm Editor's note: Did you find a spelling error or grammar mistake? Do you think this article needs a correction or update? Or do you just have some feedback? Send us an email at sitsshow@gmail.com with the error, headline and url. Thank you for reading.
________________________________________________________________
Question -- What is the goal of this website? Why do we share different sources of information that sometimes conflict or might even be considered disinformation? 
Answer -- The primary goal of Stillness in the Storm is to help all people become better truth-seekers in a real-time, boots-on-the-ground fashion. This is for the purpose of learning to think critically, discovering the truth from within—not just believing things blindly because they came from an "authority" or credible source. Instead of telling you what the truth is, we share information from many sources so that you can discern it for yourself. We focus on teaching you the tools to become your own authority on the truth, gaining self-mastery, sovereignty, and freedom in the process. We want each of you to become your own leader and master of personal discernment, and as such, all information should be vetted, analyzed and discerned at a personal level. We also encourage you to discuss your thoughts in the comments section of this site to engage in a group discernment process. 

"It is the mark of an educated mind to be able to entertain a thought without accepting it." – Aristotle

The opinions expressed in this article do not necessarily reflect the views of Stillness in the Storm, the authors who contribute to it, or those who follow it. 

Curious about Stillness in the Storm? 
See our About this blog - Contact Us page.

If it was not for the gallant support of readers, we could not devote so much energy into continuing this blog. We greatly appreciate any support you provide!

We hope you benefit from this not-for-profit site 

It takes hours of work every day to maintain, write, edit, research, illustrate and publish this blog. We have been greatly empowered by our search for the truth, and the work of other researchers. We hope our efforts to give back with this website help others in gaining knowledge, liberation and empowerment.

"There are only two mistakes one can make along the road to truth; 
not going all the way, and not starting." — Buddha

If you find our work of value, consider making a Contribution.
This website is supported by readers like you. 


Support Stillness in the Storm