
Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures

Abstract: 
While the article does not focus directly on cybersecurity, it draws many parallels between the problems faced in AI and in cyberspace. The main takeaway is that no system can be 100% secure; as a result, the newer field of "AI Safety" faces an open problem in attempting to secure future AI systems. AI Safety lacks understanding and recognition in the legal system, much as there are international disagreements about what cybersecurity is, and human safety in the presence of the Internet and intelligent machines has yet to be standardized as a topic. The article advances an interesting idea: in both AI and cybersecurity, a human attacker is more likely to be the cause of a system failure than a bug in the system itself. In the case of intelligent machines, for instance, human fatalities are unlikely to result from an uprising of artificial intelligence, but rather from an attacker purposely designing a malicious AI system. The question then becomes: how do researchers continue to advance AI, knowing that adversaries have access to the same technology? Cybersecurity faces the same question. We can, for example, enhance password security with better hardware and stronger cryptographic algorithms, but an attacker has access to the same (or better) hardware and knowledge of those algorithms, which eases the search for security vulnerabilities.

Key words (as listed in the article): AI Safety, Cybersecurity, Failures, Superintelligence.
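To make that arms-race point concrete, here is a minimal sketch (not from the article; an illustration in Python using the standard library's hashlib.scrypt) of a password-hashing function with a tunable work factor. The function name and parameter values are illustrative assumptions, not a prescription:

```python
import hashlib
import os

# Illustrative sketch (not from the article): a memory-hard password hash
# whose cost can be raised as attacker hardware improves.
def hash_password(password: str, cost_log2: int = 14) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # random per-password salt
    digest = hashlib.scrypt(
        password.encode(),
        salt=salt,
        n=2 ** cost_log2,  # CPU/memory cost; raise over time (e.g., 15, 16, ...)
        r=8,               # block size
        p=1,               # parallelization factor
        dklen=32,          # derived key length in bytes
    )
    return salt, digest

salt, digest = hash_password("correct horse battery staple")
```

Both sides tune the same knob: the defender raises cost_log2 so each password guess costs more, while the attacker's improved hardware lowers the real cost of each guess. That symmetry is the dynamic the article highlights.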
Author: Roman V. Yampolskiy
Institution: University of Louisville
Year: 2018