Abstract
Recent advances in deep learning (DL) allow for solving complex AI problems that were previously considered very hard. While this progress has advanced many fields, it is considered bad news for Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs), whose security rests on the hardness of certain learning problems. In this paper, we introduce DeepCAPTCHA, a new and secure CAPTCHA scheme based on adversarial examples, an inherent limitation of current DL networks. Adversarial examples are constructed inputs, either synthesized from scratch or computed by adding a small, specific perturbation called adversarial noise to correctly classified items, that cause the targeted DL network to misclassify them. We show that plain adversarial noise is insufficient to achieve a secure CAPTCHA scheme, which leads us to introduce immutable adversarial noise: adversarial noise that is resistant to removal attempts. We implement a proof-of-concept system, and our analysis shows that the scheme offers high security and good usability compared with the best previously existing CAPTCHAs.
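The adversarial-noise idea described in the abstract, a small input perturbation that flips a classifier's decision, can be illustrated with a minimal fast-gradient-sign-style sketch. This is a toy example on a hypothetical linear classifier, not the paper's DeepCAPTCHA construction; the weights, input, and noise budget below are all invented for illustration.

```python
import numpy as np

# Hypothetical linear "classifier": sign(w . x) decides the class (assumed model).
w = np.array([1.0, -2.0, 0.5])      # fixed model weights (illustrative)
x = np.array([0.3, -0.1, 0.2])      # an input the model classifies correctly
eps = 0.25                          # L-infinity budget for the adversarial noise

def predict(v):
    return int(w @ v > 0)           # class 1 if the score is positive, else 0

# For the logistic loss with true label 1, the input gradient is
# -(1 - sigmoid(w . x)) * w, so its elementwise sign is -sign(w).
grad_sign = -np.sign(w)
x_adv = x + eps * grad_sign         # fast-gradient-sign style perturbation

clean_pred = predict(x)             # correctly classified as class 1
adv_pred = predict(x_adv)           # the small perturbation flips it to class 0
max_pert = np.max(np.abs(x_adv - x))  # perturbation stays within the eps budget
```

The perturbation changes each input coordinate by at most 0.25, yet the prediction flips; this fragility of DL classifiers is what the scheme exploits, while the paper's contribution is making such noise immutable, i.e., resistant to removal by an attacker.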
Original language | English |
---|---|
Article number | 7954632 |
Pages (from-to) | 2640-2653 |
Number of pages | 14 |
Journal | IEEE Transactions on Information Forensics and Security |
Volume | 12 |
Issue number | 11 |
DOIs | |
State | Published - Nov 2017 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.
Keywords
- CAPTCHA
- CNN
- HIP
- adversarial examples
- deep learning
ASJC Scopus subject areas
- Safety, Risk, Reliability and Quality
- Computer Networks and Communications