University of Richmond
In this paper I present a study of the impact of deepfake technologies on digital modes of communication. I illustrate that deepfakes make novel use of photographic documents as tools for the violation of individual privacy and, in doing so, pose a fundamentally different challenge to concepts of visual authority. Digital surveillance and "big data" systems monitor and record the observed actions and behaviours of people. Algorithmic methods extend surveillance to predictions of behaviours that people may engage in in the future. I situate deepfakes as a further violation of privacy, one applied to actions and behaviours that have not occurred and never will. By establishing deep-learning-based forgeries as an extension of existing tools of digital surveillance, I aim to provide a framework for the critical analysis of a new digital technology that is well positioned to become ubiquitous in the very near future.
Hosted at Carleton University, Université d'Ottawa (University of Ottawa)
Ottawa, Ontario, Canada
July 20, 2020 - July 25, 2020
475 works by 1078 authors indexed
Conference cancelled due to coronavirus. Online conference held at https://hcommons.org/groups/dh2020/. Data for this conference were initially prepared and cleaned by May Ning.
Conference website: https://dh2020.adho.org/
Series: ADHO (15)