
Jewish-Owned Facebook Develops Lying AI Bots

Facebook’s Mark Zuckerberg

Facebook already uses human-like chatbots to persuade users to buy certain services.

FACEBOOK’S 100,000-strong bot empire is booming — but it has a problem. Each bot is designed to offer a different service through the Messenger app: it could book you a car, or order a delivery, for instance. The point is to improve customer experiences, but also to massively expand Messenger’s commercial selling power.

“We think you should message a business just the way you would message a friend,” Mark Zuckerberg said on stage at the social network’s F8 conference in 2016. Fast forward one year, however, and Messenger VP David Marcus seemed to be correcting the public’s apparent misconception that Facebook’s bots resembled real AI. “We never called them chatbots. We called them bots. People took it too literally in the first three months that the future is going to be conversational.” The bots are instead a combination of machine learning and natural language learning that can sometimes trick a user just enough to make them think they are having a basic dialogue. Not often enough, though, in Messenger’s case. So in April, menu options were reinstated in the conversations.

Now, Facebook thinks it has made progress in addressing this issue. But it might just have created another problem for itself.

The Facebook Artificial Intelligence Research (FAIR) group, in collaboration with the Georgia Institute of Technology, has released code that it says will allow bots to negotiate. The problem? A paper published this week describing the research reveals that the negotiating bots learned to lie. Facebook’s chatbots are in danger of becoming a little too much like real-world sales agents.

“For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states,” the researchers explain. The research shows that the bots can plan ahead by simulating possible future conversations.
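To make the “planning ahead” idea concrete, here is a minimal, self-contained sketch of a dialogue rollout: before replying, the agent simulates how the rest of the negotiation might unfold after each candidate utterance and picks the one with the best expected outcome. The ToyNegotiator class, its method names and its candidate replies are invented for illustration and are not the released FAIR code.

```python
import random

class ToyNegotiator:
    """Stands in for a trained negotiation model (hypothetical)."""
    CANDIDATES = [
        "I take the books, you get the balls",
        "Give me everything",
        "You can have the hat if I get the books",
    ]

    def candidate_utterances(self, history):
        # A real model would generate these from the dialogue so far.
        return self.CANDIDATES

    def rollout_value(self, history, utterance):
        # A real model would simulate the rest of the conversation after
        # `utterance` and score the final agreed split; a random stand-in
        # value keeps this sketch runnable on its own.
        return random.uniform(0, 10)


def plan_utterance(model, history, n_rollouts=10):
    """Pick the reply whose simulated futures score best on average."""
    def expected_value(utterance):
        samples = [model.rollout_value(history, utterance) for _ in range(n_rollouts)]
        return sum(samples) / n_rollouts

    return max(model.candidate_utterances(history), key=expected_value)


print(plan_utterance(ToyNegotiator(), history=[]))
```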

The team trained the bots on a dataset of 5,808 natural-language negotiations between two people, in which each pair had to decide how to divide a shared set of items that the two sides valued differently. The bots were first trained to respond based on the “likelihood” of the direction a human conversation would take. However, they can also be trained to “maximise reward” instead.
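The difference between the two training regimes can be sketched roughly as follows, in assumed PyTorch-style code rather than the paper’s actual implementation: the likelihood objective imitates human turns token by token, while the reward objective reinforces whole dialogues in proportion to how good the final deal turned out to be.

```python
import torch
import torch.nn.functional as F

def likelihood_loss(logits, human_tokens):
    """Imitation: cross-entropy against the words human negotiators actually wrote."""
    return F.cross_entropy(logits.view(-1, logits.size(-1)), human_tokens.view(-1))

def reward_loss(sampled_token_log_probs, final_reward, baseline=0.0):
    """REINFORCE-style objective: make dialogues that ended in good deals more likely."""
    advantage = final_reward - baseline
    return -advantage * sampled_token_log_probs.sum()

# Tiny worked example with made-up tensors (a vocabulary of 5 words, 3 tokens).
logits = torch.randn(3, 5)
targets = torch.tensor([1, 4, 2])
print(likelihood_loss(logits, targets))
print(reward_loss(torch.log(torch.rand(3)), final_reward=7.0, baseline=5.0))
```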

When the bots were trained purely to maximise the likelihood of human conversation, the chat flowed but the bots were “overly willing to compromise”. The research team decided this was unacceptable, due to lower deal rates. So it used several different methods to make the bots more competitive and essentially self-serving: setting the value of the items to zero if the bots walked away from a deal or failed to strike one quickly enough, ‘reinforcement learning’ and ‘dialogue rollouts’. The techniques used to teach the bots to maximise reward improved their negotiating skills a little too well.
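The reward shaping mentioned above, where the items become worthless if no deal is struck in time, can be illustrated with a toy scoring function. The item names, point values and turn limit below are invented for the example, not taken from the paper.

```python
MAX_TURNS = 10  # assumed turn limit for the example

def negotiation_reward(agreed, items_won, my_values, turns_taken):
    """Points this agent earns: the value of the items it secured, or zero on failure."""
    if not agreed or turns_taken > MAX_TURNS:
        return 0.0
    return float(sum(my_values[item] * count for item, count in items_won.items()))

values = {"book": 3, "hat": 1, "ball": 0}
print(negotiation_reward(True,  {"book": 2, "hat": 1}, values, turns_taken=6))   # 7.0
print(negotiation_reward(False, {"book": 2, "hat": 1}, values, turns_taken=6))   # 0.0
```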

“We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesising the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”

So, its AI is a natural liar.

But their language did improve, and the bots were able to produce novel sentences, which is really the whole point of the exercise. We hope. Rather than the bots learning to be hard negotiators in order to sell the heck out of whatever wares or services a company wants to tout on Facebook. “Most” human subjects interacting with the bots were in fact not aware they were conversing with a bot, and the best bots achieved better deals as often as worse deals.

The research team wants to follow up by experimenting with more reasoning strategies and expanding the bots’ repertoire of novel language.

Facebook, as ever, needs to tread carefully here, though. At its F8 conference this year, the social network also announced a highly ambitious project to help people type with only their thoughts.

“Over the next two years, we will be building systems that demonstrate the capability to type at 100 [words per minute] by decoding neural activity devoted to speech,” said Regina Dugan, who previously headed up Darpa. She said the aim is to turn thoughts into words on a screen. That is a noble and worthy venture when aimed at “people with communication disorders”, as Dugan suggested it might be. But if the technology were to become standard and be integrated into Facebook’s architecture, the social network’s savvy bots of two years from now might be able to preempt your language even faster and formulate the ideal bargaining language. Start practising your poker face/mind/sentence structure now.

* * *

Source: Wired

