Discrimination in AI and DeepFakes
Introduction
Artificial intelligence (AI) algorithms are woven into our day-to-day tasks, from search engines and email filtering to content recommendation. Media coverage of AI tends to push us toward one of two scenarios: a positive one in which AI automates every task and solves every problem, and a doomsday one in which AI takes over humanity [1]. This coverage rarely engages in a constructive conversation about the realistic dangers of AI and how it might affect us in the context of society, politics, economics, gender, race, sexual orientation, social class, and so on [1]. One aspect of AI’s impact on our societies is the consolidation of existing power dynamics. Research has shown that AI-based systems are prone to reproducing harmful social biases [34], which in turn consolidates dysfunctional social structures that favour historically advantaged groups, such as preferring men over women for STEM-related jobs [2]. Deepfake videos, and the AI systems that generate them, are another manifestation of this consolidation of power, with the risks they pose falling especially on women.
Deepfake applications use off-the-shelf AI algorithms to generate fake content. Algorithms such as generative adversarial networks (GANs), variational autoencoders (VAEs), and long short-term memory (LSTM) networks are used to train deepfake applications to swap the faces of people appearing in two different videos, or to transfer the facial expressions of a person in one video onto a person in another. The open-source deepfake applications FaceSwap and DeepFaceLab use VAE-based algorithms [5]. Like other AI-based systems, these applications do not actually “learn” anything about the task they are supposed to perform; rather, they learn spurious correlations among the variables present in their training datasets [6]. Companies building AI systems claim that their systems make the right decisions, but offer no guarantee that they do so for the right reasons. Hence the term “black box” is used to describe AI-based systems.
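To make the face-swapping idea more concrete, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder scheme that underlies applications such as FaceSwap and DeepFaceLab. It is a minimal PyTorch illustration, not the code of either project: the class names, layer sizes, and helper functions are assumptions, the variational component is omitted for brevity, and a real application adds face detection, alignment, and blending steps around this core.

```python
# Minimal conceptual sketch (NOT the actual FaceSwap/DeepFaceLab code) of the
# shared-encoder / per-identity-decoder autoencoder idea behind face swapping.
# Names, layer sizes, and functions here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a compact latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face crop from the latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, plus one decoder per identity.
encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A
decoder_b = Decoder()  # trained only on faces of person B

recon_loss = nn.L1Loss()
optimizer = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def training_step(faces_a, faces_b):
    """faces_a / faces_b: batches of face crops of person A / person B."""
    loss = (recon_loss(decoder_a(encoder(faces_a)), faces_a)
            + recon_loss(decoder_b(encoder(faces_b)), faces_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def swap_a_to_b(faces_a):
    """The 'swap': encode frames of person A, but decode with person B's decoder,
    so B's identity is rendered with A's pose and expression."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

The design choice that makes the swap work is that both decoders share the same encoder: the latent vector ends up carrying pose and expression information that either decoder can interpret, while each decoder fills in the facial identity it was trained on. This is also why such a system learns correlations in its training frames rather than any notion of who the person is.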