
Hacking and AI: Moral panic vs real problems


On August 4th, seven different artificial intelligence systems competed against each other to see which was best at hacking. The Cyber Grand Challenge was sponsored by DARPA (the US Defense Advanced Research Projects Agency) and held at Def Con, in a vast ballroom remade to resemble an e-sports broadcast. Of course, because we’re talking about AIs made to seek out security vulnerabilities (and either patch or exploit them), some were inclined to scream “Skynet!” and run for the hills.

Okay, they didn’t literally run for any hills. But the EFF wrote a very panicked blog post warning of the dangers to come if an AI trained to hack wasn’t parented properly. The histrionic post made a few headlines, but missed the point of the competition entirely. If the AIs playing Def Con’s all-machine Capture The Flag had feelings, they would’ve been very hurt indeed.

The seven AI agents were the work of teams from around the world, who came together to compete for a $2 million purse. Partnering with Def Con, DARPA pitted the rival development teams against each other in a CTF, where the programs had to beat each other at reverse engineering unknown programs, probing the security of opponent software, applying patches and shoring up defenses.

This is all the kind of work usually done by human hackers. Each AI program had to hack into unfamiliar software and decide how to fix the bugs it found, which means a winning AI needed to be as good at attacking as it was at defending.

That’s what got the EFF’s panties in a collective bunch. And as far as AI goes these days, their pants-based discomfort is shared by more than a few people currently worried about Skynet and computers calling them Dave and telling them to calm down. The EFF cautioned:

“We are going to start seeing tools that don’t just identify vulnerabilities, but automatically write and launch exploits for them. Using these same sorts of autonomous tools, we can imagine an attacker creating (perhaps even accidentally) a 21st century version of the Morris worm that can discover new zero days to help itself propagate.”
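It’s worth grounding that fear in what such tools actually do today. At the heart of automated bug-finding is a brutally simple loop: generate an input, run the target, and watch for a crash. Below is a toy C sketch of that loop; the vulnerable `parse_record` function is invented for the example and has nothing to do with the CGC entrants, which layer far more sophisticated machinery (symbolic execution, automatic patching) on top of this basic idea.

```c
/* Toy mutation fuzzer: throws random bytes at a hypothetical buggy
 * parser and flags any input that makes it crash. A sketch of the
 * idea behind automated bug-finding, not any real CGC system. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical target with a planted bug: overflows a 16-byte buffer. */
static void parse_record(const unsigned char *buf, size_t len) {
    char field[16];
    if (len > 1 && buf[0] == 'R')
        memcpy(field, buf + 1, len - 1);   /* no bounds check: the bug */
}

int main(void) {
    srand(42);
    for (int trial = 0; trial < 10000; trial++) {
        unsigned char input[64];
        size_t len = (size_t)(rand() % (int)sizeof(input)) + 1;
        for (size_t i = 0; i < len; i++)
            input[i] = (unsigned char)rand();
        input[0] = 'R';                     /* steer input toward the parser */

        pid_t pid = fork();
        if (pid == 0) {                     /* child process runs the target */
            parse_record(input, len);
            _exit(0);                       /* clean exit if nothing crashed */
        }
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) {          /* a crash marks a candidate bug */
            printf("trial %d: crash (signal %d) on a %zu-byte input\n",
                   trial, WTERMSIG(status), len);
            return 1;
        }
    }
    puts("no crashes found");
    return 0;
}
```

Real systems swap the random byte generator for coverage-guided mutation and constraint solving, but the crash-equals-candidate-bug feedback loop is the same; the leap from there to a self-propagating, zero-day-hunting worm is considerably bigger than the EFF’s warning suggests.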

The EFF’s post selectively crafts an argument that a set of standards or policies must be created as soon as possible by AI researchers, before things spin horribly out of control. As if people with bad intentions follow rules. This kind of mindset — where the most nightmarish visions of opening Pandora’s AI Box demand we wrap ourselves in some sort of moral life jacket before dipping a toe in the proverbial water — is not unique to the EFF. For instance, this exact kind of moral panic about pseudo-sentient machines has been unspooling in the press over the past year about AI and sex dolls, or sexbots.

Just as the EFF post wants “moral and ethical” policies for those who create AI hackers, The Campaign Against Sex Robots calls for eerily similar standards in sexbots for fear of AI companions unleashing “violence and victimization” on humans.

Kathleen Richardson, a robot ethics researcher at De Montfort University, launched the campaign in 2015 along with Erik Billing of the University of Skövde in Sweden. Its mission: to raise awareness about how AI-equipped sex dolls “are potentially harmful and will contribute to inequalities in society.” For CASR, sex dolls with AI are “getting us ready for a new, different kind of sexual relationship where anything goes.”

Unfortunately for both sectors of the reigning AI morality police, sentient hacking machines and highly inventive sexbots are a long way off. Our (hopefully) smart and kinky droids are predicted to become a reality around 2050, and the same goes for any program you could legitimately call an AI hacker.

Still, if all you had to go on were the warnings of dangerous AI seen in the headlines, you’d think DARPA’s Cyber Grand Challenge was a wild free-for-all where smart programs got smarter under the incautious and malfeasant hand of the US government. Far from it. The CGC had a careful framework in which the software essentially acted like an antivirus program on steroids and competed in its own version of American Idol. Apologies to everyone hoping for drama, but it was just a glorified bug-finding competition.

The winning team, ForAllSecure, a spin-off from Carnegie Mellon, explained the goals and limitations of their AI best in their video on DARPA’s Cyber Grand Challenge website. In it, they talk about their platform in direct relation to strengthening consumer — and yes, government — security. The hope is that their AI will do a whole lot of securing and patching automatically, of things old and forgotten as well as things poorly made. Their platform could also turn its findings into security scorecards.

In their Reddit AMA after winning the CGC with their program “Mayhem,” ForAllSecure explained, “Our hope is to use Mayhem to check the world’s software for bugs. It’s really hard to get to the level of a security expert to be able to analyze your own software or form your own opinions on security of software. But if we had a system that could automatically do those tasks, everyone would be safer online.”

ForAllSecure’s responses were a stark contrast to the EFF’s dire warnings of someone using something like Mayhem to find vulns and attack connected Barbies and tea kettles, or an AI running amok all on its own. The team said, “Imagine if you could take your smart fridge (or whatever), which was written by some sketchy company who put in the minimum amount of security research possible, and apply fairly good binary-hardening techniques.”

They added, “The same could apply to all the government or whatever people who are still using super old code. If you could go in and add stack canaries, CFI, etc. without breaking it, that could be awesome.”
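For anyone wondering what “stack canaries” actually buy you, here’s a minimal C illustration; the overflowing `copy_name` function is made up for the example. Compiled with GCC or Clang’s `-fstack-protector-strong` flag, the compiler plants a guard value between the local buffer and the saved return address and aborts the program the moment an overflow tramples it, rather than letting a corrupted return address be used:

```c
/* Build:  gcc -fstack-protector-strong canary_demo.c -o canary_demo
 * Run:    ./canary_demo AAAAAAAAAAAAAAAAAAAAAAAA
 * With the flag, glibc prints "*** stack smashing detected ***" and
 * aborts, instead of silently returning to a corrupted address. */
#include <stdio.h>
#include <string.h>

static void copy_name(const char *src) {
    char name[8];
    strcpy(name, src);              /* classic unchecked copy: the bug */
    printf("hello, %s\n", name);
}

int main(int argc, char **argv) {
    if (argc > 1)
        copy_name(argv[1]);
    return 0;
}
```

CFI (control-flow integrity) works one layer up, checking at runtime that indirect calls and jumps only land on legitimate targets; Clang ships an implementation behind `-fsanitize=cfi`. Retrofitting protections like these onto a finished binary without source code, as ForAllSecure describes, is the genuinely hard part.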

Competing teams line up before the DARPA Cyber Grand Challenge at Def Con 24.

Winning the CGC means that Mayhem is now considered the best platform of its kind in the world. When asked how close to Skynet we are, ForAllSecure replied, “Our system is still very much an instance of artificial ‘special’ intelligence, rather than artificial ‘general’ intelligence. We’ve taught it how to find bugs, exploit them, and patch. It doesn’t have the ability to teach itself more things than that.”

Still, like all things to do with government research and development, the potential here for warfare is real. We’re already in a digital arms race; in fact, we now know we passed that point long ago, so any discussion of heading off vulnerability stockpiles and attacks at the pass is moot.

And ForAllSecure had some pretty wise words on the importance of offense. “If you focus only on defense, you’ll find you are always playing catch-up, and can never be ahead of attackers. If you make sure to focus on pushing the state of the art on both sides, you’ll do a lot better improving the state of cyber security.”

Perhaps what the ethics cops at the EFF should’ve got their ticket books out for was researcher Davi Ottenheimer’s keynote the day before the Cyber Grand Challenge, “Great Disasters of Machine Learning.” In it, he took a very close look at the Tesla Autopilot incident that killed a man in May, when his Model S sedan slammed into a tractor-trailer in Florida. Ottenheimer concluded that what had actually happened was that the car’s AI made a critical decision at the last moment before the crash, essentially deciding to save the car — valuing itself over the life of the driver. It seems to me that this intersection of marketing hype, billionaire hubris, “intelligent” software, and promises of consumer safety is a much clearer and more pressing problem than “let’s make rules because there are bad guys.”

Perhaps the Cyber Grand Challenge deserves a little blame itself for igniting this moral panic; maybe it was a bit of a victim of its own hype. The press sure loved making hay out of AI warfare, and whoever put together the Matrix-meets-Fox Sports stage sets and “back to you, Jim” livestream certainly had a flair for the dramatic. The gigantic, gilded Paris Las Vegas ballroom was packed with rows of filled seats. There were even press boxes for talking heads, from which confused announcers stumbled over hacker names live on camera, awkwardly trying to make a lively event out of computers sitting there like obedient little dust collectors.

After the bot-on-bot sort-of violence of DARPA’s automated CTF, Mayhem went on to challenge actual humans in a similar competition at Def Con. And don’t worry: it lost.

Image: Ann Hermes/The Christian Science Monitor via Getty Images (Cyber Grand Challenge teams).
