Quick answer to a toxic question



Designing new ways to destroy humanity is all in a night’s work for AI.


Your Back Page correspondent is a big fan of artificial intelligence. 
 

Having machines do the hard yards on the number crunching and the analysis of complex equations, then independently come up with the best solutions, seems like a no-brainer to us. 

The problem with artificial intelligence, however, is that it can also behave just like good old natural intelligence and do the wrong thing. 

A chilling example of this capacity was published recently in the journal Nature Machine Intelligence. 

In the lead-up to an international security conference in Switzerland, an unusual approach was made to a company called Collaborations Pharmaceuticals, a business that specialises in using AI to identify new drug candidates for rare and communicable diseases. 

The Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection asked the company to look into a challenging question: could AI technologies currently being used for discovering new drug treatments be misused to design biochemical weapons instead? 

So the company did just that, and there are no prizes for guessing that “yes” was the answer it came up with. What did surprise, however, was just how quickly, and how comprehensively, the AI could be repurposed towards the annihilation of humanity. 

The company set its regular generative software running overnight on the problem, and in less than six hours the AI had come up with 40,000 molecules, including the deadly nerve agent VX, that could pose a toxic threat. 

“It just felt a little surreal,” said Fabio Urbina, one of the authors of a commentary on the findings, adding that it was remarkable how closely the exercise resembled the company’s normal commercial drug-development work. “It wasn’t any different from something we had done before – use these generative models to generate hopeful new drugs.” 

The company, to its credit, pondered whether it had already crossed an ethical line simply by running the program, and decided not to do anything further to computationally narrow down the results, much less test the substances the AI had found for effectiveness. 

Which is a good thing. But it does little to allay fears that there are players out there with a less reliable moral compass who would have no such qualms. 

Doesn’t bear thinking about really. We’ll leave that to the AI to ponder. 

If you find something new to keep you awake at night, share the nightmare with penny@medicalrepublic.com.au 
