How Will We Outsmart A.I. Liars?

In the summer before the 2016 presidential election, John Seymour and Philip Tully, two researchers with ZeroFOX, a security firm in Baltimore, unveiled a new kind of Twitter bot. By analyzing patterns of activity on the social network, the bot learned to fool users into clicking on links in tweets that led to potentially hazardous sites.

The bot, called SNAP_R, was an automated “phishing” system, capable of homing in on the whims of particular individuals and coaxing them toward the moment when they would inadvertently download spyware onto their machines. “Archaeologists believe they’ve found the tomb of Alexander the Great is in the U.S. for the first time:,” the bot tweeted at one unsuspecting user.

Even with the odd grammatical misstep, SNAP_R succeeded in eliciting a click as often as 66 percent of the time, on par with human hackers who craft phishing messages by hand.


The bot was unarmed, merely a proof of concept. But in the wake of the election and the wave of concern over political hacking, fake news and the dark side of social networking, it illustrated why the landscape of fakery will only darken further.

“It would be very surprising if things don’t go this way,” said Shahar Avin, a researcher at the Center for the Study of Existential Risk at the University of Cambridge. “All the trends point in that direction.”

With the same machine-learning techniques, computers are also learning to read and write. For years, experts questioned whether neural networks could crack the code of natural language. But the tide has shifted in recent months.

Organizations such as Google and OpenAI, an independent lab in San Francisco, have built systems that learn the vagaries of language at the broadest scales — analyzing everything from Wikipedia articles to self-published romance novels — before applying the knowledge to specific tasks. The systems can read a paragraph and answer questions about it. They can judge whether a movie review is positive or negative.

This technology could improve phishing bots such as SNAP_R. Today, most Twitter bots seem like bots, especially when you start replying to them. In the future, they will respond in kind.

The technology also could lead to the creation of voice bots that can carry on a decent conversation — and, no doubt one day, will call and persuade you to divulge your credit-card information.

These new language systems are driven by a new wave of computing power. Google engineers have designed computer chips specifically for training neural networks. Other companies are building similar chips, and as these arrive, they will accelerate A.I. research even further.

Jack Clark, head of policy at OpenAI, can see a not-too-distant future in which governments create machine-learning systems that attempt to radicalize populations in other countries, or force views onto their own people.

“This is a new kind of societal control or propaganda,” he said. “Governments can start to create campaigns that target individuals, but at the same time operate across many people in parallel, with a larger objective.”
