31. Weaponised deep fakes
- Author:
- Hannah Smith and Katherine Mansted
- Publication Date:
- 04-2020
- Content Type:
- Special Report
- Institution:
- Australian Strategic Policy Institute
- Abstract:
- “Fakes are all around us. Academic analysis suggests that they’re difficult to spot without new sensors, software or other specialised equipment, with 1 in 5 photos you see being fraudulent. The exposure of deep fakes and the services they facilitate can potentially lead to suppression of information and a general breakdown in confidence in public authorities and trust. We need to react not just to false or compromised claims but to those who would try to exploit them for nefarious purposes. We should not assume the existence of fake news unless we have compelling evidence to the contrary, but when we do, we should not allow the propaganda. I’ve never been more sure of this point than today.” —GPT-2 deep learning algorithm
- The foreword to this report was written by a machine. The machine used a ‘deep fake’ algorithm — a form of artificial intelligence (AI) — to generate text and a headshot. Deep fakes are increasingly realistic and easy to create. The foreword took us approximately five minutes to generate, using free, open-source software.1
- Topic:
- National Security, Science and Technology, Cybersecurity, Internet, and Emerging Technology
- Political Geography:
- Australia and Australia/Pacific