LEARNING TO PROTECT COMMUNICATIONS WITH ADVERSARIAL NEURAL CRYPTOGRAPHY

Abstract: 
This academic paper describes an experiment in which two neural networks, Alice and Bob, learn to communicate without a third network, Eve, understanding the communication. The task is posed without any prior knowledge of cryptographic algorithms. Even the researchers could not discern the meaning of Alice and Bob's communications. The experiment essentially replicates a cybersecurity ecosystem, one in which the machines outperform the humans observing them (the researchers themselves), demonstrating the likely importance of AI to the future of cybersecurity. "We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary. Thus, a system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob. We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals."
Key words: 
Cryptography, Artificial Intelligence, Cybersecurity, Adversary, Multiagent System
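The adversarial setup quoted above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's method: the single tanh layer standing in for each network, the batch and message sizes, and the exact loss constants are all simplifying assumptions. The paper trains these networks end-to-end by gradient descent, whereas this sketch only evaluates the objectives once on random inputs to show their structure: Alice and Bob share a secret key, Eve sees only the ciphertext, and Alice/Bob's loss rewards Bob's reconstruction while pushing Eve toward random guessing.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # bits per plaintext and key (a chosen toy size, not prescribed here)

def make_net(in_dim, out_dim):
    # A single random linear layer with tanh stands in for each
    # party's neural network (a deliberate simplification).
    W = rng.normal(0, 0.1, (in_dim, out_dim))
    return lambda x: np.tanh(x @ W)

alice = make_net(2 * N, N)  # sees plaintext P and shared key K
bob   = make_net(2 * N, N)  # sees ciphertext C and shared key K
eve   = make_net(N, N)      # eavesdropper: sees only ciphertext C

P = rng.choice([-1.0, 1.0], size=(4, N))  # plaintext bits encoded as +/-1
K = rng.choice([-1.0, 1.0], size=(4, N))  # shared secret key

C     = alice(np.concatenate([P, K], axis=1))  # Alice's ciphertext
P_bob = bob(np.concatenate([C, K], axis=1))    # Bob's reconstruction
P_eve = eve(C)                                  # Eve's guess, no key

# Per-bit L1 reconstruction errors for Bob and Eve.
L_bob = np.abs(P - P_bob).mean()
L_eve = np.abs(P - P_eve).mean()

# Eve minimizes L_eve; Alice and Bob jointly minimize a loss that
# combines Bob's error with a term that is smallest when Eve does
# no better than random guessing (about N/2 bits wrong).
L_ab = L_bob + (N / 2 - L_eve * N) ** 2 / (N / 2) ** 2
```

In the paper, these two objectives are optimized in alternation, with Eve's network retrained against each new Alice/Bob pair; the sketch above only shows what each side is measuring.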
Author: 
Martin Abadi and David G. Andersen
Year: 
2018
Domains-Issue Area: 
Dimensions-Problem/Solution: 
Region(s): 
Country: 
United States
Datatype(s): 
Theory/Definition