Discrimination in AI and DeepFakes

Introduction

Artificial intelligence (AI) algorithms are used in our day-to-day tasks, from search engines and email filtering to content recommendation. Media coverage of AI tends to push us towards one of two scenarios: a positive scenario in which AI automates all tasks and solves all problems, and a doomsday scenario in which AI takes over humanity [1]. However, this coverage rarely engages in a constructive conversation about the realistic dangers of AI and how it might impact us in the context of society, politics, economics, gender, race, sexual orientation, social class, and so on [1]. One aspect of AI's impact on our societies is the consolidation of existing power dynamics. Research has shown that AI-based systems are prone to reproducing harmful social biases [34], which in turn consolidates dysfunctional social structures that favour historically advantaged people, for example by favouring men over women for STEM-related jobs [2]. Deepfake videos and the AI systems that make them are another manifestation of this consolidation of power, and the risks they pose fall especially on women.

Deepfake applications use off-the-shelf AI algorithms to generate fake content. Algorithms such as generative adversarial networks (GANs), variational autoencoders (VAEs), and long short-term memory (LSTM) networks are used to train deepfake applications that swap the faces of people in two different videos, or copy the facial expressions of a person in one video onto a person in another video. The open-source deepfake applications FaceSwap and DeepFaceLab use VAE-based algorithms [5]. These deepfake applications, like other AI-based systems, don't actually "learn" anything about the task they are supposed to perform; rather, they learn spurious correlations between groups of variables present in their training datasets [6]. AI companies claim that their systems make the right decisions, but offer no guarantees that they do so for the right reasons. Hence, the term "black box" is used to describe AI-based systems.
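
To make this concrete, the following is a minimal sketch, in Python/PyTorch, of the shared-encoder and per-identity-decoder idea behind autoencoder-based face swapping. It is an illustrative toy model with assumed layer sizes, not the actual FaceSwap or DeepFaceLab implementation, and it omits the training loop, face alignment, and blending steps that real applications need.

# Toy sketch of autoencoder-style face swapping (assumed architecture):
# one shared encoder learns a common latent representation of faces,
# and each identity gets its own decoder. Swapping decodes person A's
# latent code with person B's decoder.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Shared encoder: compresses a 3x64x64 face crop into a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # One decoder per identity; both decode from the same latent space.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    def _make_decoder(self, latent_dim: int) -> nn.Module:
        return nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def swap(self, face_of_a: torch.Tensor) -> torch.Tensor:
        # Encode A's face, decode with B's decoder: the output keeps A's
        # pose and expression but renders B's identity.
        return self.decoder_b(self.encoder(face_of_a))

model = FaceSwapAutoencoder()
swapped = model.swap(torch.rand(1, 3, 64, 64))  # dummy 64x64 face crop
print(swapped.shape)  # torch.Size([1, 3, 64, 64])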

In this article, we first discuss the risks of bias and discrimination in AI algorithms in general. By biased AI systems we mean "computer systems that systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others" [49]. We then take a closer look at discrimination in deepfake videos, deepfake applications, and deepfake detection models.

Discrimination in AI

There is a large body of literature on bias and discrimination in AI algorithms and AI applications [34, 35]. This has led to the emergence of Responsible AI as a research area that discusses the problems with current AI algorithms and possible ways to build more reliable and responsible ones. Among the most discussed problems is unfairness in AI, which leads AI-based systems to make discriminatory decisions about people based on attributes like race, gender, religion, etc. One of the best-known examples of discriminatory decisions made by an AI-based system is the COMPAS algorithm. COMPAS is a risk assessment tool that tries to measure the likelihood that a criminal becomes a recidivist, a term used in legal systems to describe a criminal who reoffends. In 2016, ProPublica found that Black defendants were more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than Black defendants to be incorrectly flagged as low risk [21]. Similar examples of AI discrimination can be found against the LGBTQ community [22], older people [23], Muslims [24], Jews [26], and people with disabilities [25].

There are many reasons behind the bias in AI algorithms. One of them is that the researchers and engineers who build AI algorithms lack the social and historical context of the data these algorithms are trained on [27]. For example, AI developers and researchers may collect data from social media platforms to train AI algorithms without considering the social context, in which hate speech on those platforms specifically targets marginalized groups. The result is AI algorithms that generate offensive content towards these marginalized groups [28, 29].

Another reason behind the bias in AI algorithms is that AI researchers and developers build AI systems on top of existing discriminatory systems [27]. When AI recommendation systems infer a person's ethnicity from their name, posts, or social network in order to personalize recommendations, they use ethnicity as a proxy for individuality. In doing so, AI algorithms perpetuate the racist view that people who belong to a specific group must have similar preferences.

Accountability, or rather the lack of it, is another reason behind the bias in AI algorithms. This lack of accountability [27], especially when it comes to big tech companies that build commercial AI algorithms, allows them to get away with creating oppressive AI systems and dismissing the harms as mere "glitches". In fact, big tech companies sell their AI algorithms as black boxes without explaining how their models make decisions. Back in 2018, the Algorithmic Justice League, a group of AI ethicists and activists, launched the Safe Face Pledge as an opportunity for organizations to make public commitments towards mitigating the abuse of facial analysis technology and ensuring that computer vision models do not discriminate between people based on their skin colour [31]. However, no major tech company was willing to sign it [30].

Another reason behind the discriminatory behaviour of AI algorithms is that the developers of these systems are mostly white, heterosexual, able-bodied men. As argued in [27], this is one reason why AI algorithms don't work for everyone in the intended way; for instance, some facial recognition systems only work well for people with light skin [32]. There is also a lack of diversity when it comes to the targeted customers of AI algorithms. Since most of these technologies are expensive, their developers focus on the customers who can presumably afford them, who are mostly white, able-bodied, heterosexual men [30].

Furthermore, the lack of public awareness of biases in AI algorithms is partly a result of the mathematical and statistical terminology and jargon used in the discourse, which most non-specialists can't understand. This lack of understanding of how AI systems work and of their limitations leads people to over-trust AI systems and feeds technochauvinism, the belief that computers and computational methods provide better solutions to any problem [33].

The lack of public awareness and technochauvinism are important reasons why institutions like schools and justice systems rely on AI-based systems to make important decisions even though they poorly understand the technology and how it works [30, 33]. We have already seen the risks that come with using COMPAS in justice systems such as New York's. There are similar risks in implementing educational AI technologies in schools, among them biases in AI grading systems. In 2020, the UK used a direct centre-level performance (DCP) algorithm, a statistical model developed by experts rather than trained with AI, for grading A-Level students; it assigned poorer grades to students in state-funded schools and better grades to students in private schools [48]. There is also a risk of biased grading when grading systems are trained on data in a specific language or dialect that is not the first language or dialect of some students. It has already been demonstrated in the literature that AI systems discriminate between people based on their dialects [37]. This risk is aggravated by the fact that AI systems are not transparent and can't provide explainable reasons behind their grades [45, 46]. This is not to say that there are no biased teachers or that their grading can't also be discriminatory [48]. However, with technochauvinism and the assumption that AI systems are objective and neutral, we could run into the aforementioned problems. There are also risks that educational AI technologies could be used to reinforce dominant ideologies of behavioural appropriateness which deem the behaviour of ethnic minorities, LGBTQ, disabled, and other marginalized students inappropriate [46].

These types of discrimination in AI systems exist in all applications of AI systems, including deepfake applications. In the next section, we discuss different aspects of discrimination in deepfake videos, deepfake applications, and deepfake detection models.

Discrimination and Deepfakes

Deepfake videos are widespread online. A study from 2023 found that the deepfake phenomenon is growing rapidly, with the number of deepfake videos online reaching 95,820, a 550% increase since 2019 [36]. One of the reasons for the spread of deepfake videos is the accessibility of deepfake applications. For example, the source code of FaceSwap, a popular deepfake application, is not only available online but also widely used [7]. For non-technical users, there are ready-made applications and services [7]. The use of these services is increasing: Google searches for "free voice cloning software" rose 120% between 2023 and 2024, according to Google Trends [8].

On mainstream social media platforms, the main social function of deepfake videos is to mock authorities and people in power, according to a study of 1,413 manipulated videos on YouTube and TikTok [9], a use that is relevant to freedom of speech. However, deepfake videos also pose risks to society, as they are used for disinformation, blackmail, bullying, harassment, fake news, and financial fraud and scams [8]. These risks increase significantly for women, especially through pornographic deepfake videos, which make up 96% of all deepfake videos online [7]. One study found that 100% of pornographic deepfake videos featured female celebrities, primarily from the USA and Korea [9]. Even non-pornographic deepfake videos mainly target women. Another study investigated the demography of deepfake videos from the top five deepfake pornography websites and the top 14 non-pornographic deepfake YouTube channels [7]. It found that 100% of the pornographic deepfake videos targeted women, as did 39% of the YouTube deepfake videos.

Since AI algorithms are used to build deepfake applications, AI discrimination is also reflected in deepfake applications. This manifests in deepfake applications performing better on videos featuring female subjects than on videos featuring male subjects. For example, the deepfake application DeepNude enables users to "strip" photos of clothed women, creating versions of the original photos with naked body parts. The application does not perform as well on photos of men as on photos of women because the AI algorithm it is built on was trained mainly on images of women [7]. This underlines how AI is used to consolidate gender power dynamics and how sexism is deeply rooted in our societies and gets reproduced by AI algorithms.

Most social media platforms have banned deepfake profiles, where a profile picture and text are generated using AI. The platforms moderate user profiles and remove deepfake ones using a mix of automatic and manual detection: automatic detection relies on AI systems to spot a deepfake profile, while manual detection relies on humans. The misclassification of a real profile as fake might result in banning or removing a user's profile, which can cause economic harm if the profile is used to promote its owner's professional services, as well as indirect harm resulting from the loss of social capital [10].
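
As a rough illustration of how such a mixed pipeline can be organized, the sketch below routes profiles based on the score of a hypothetical deepfake classifier: confident scores trigger an automatic decision, while ambiguous cases are escalated to human reviewers rather than being removed outright. The thresholds and names are assumptions for illustration, not any platform's actual policy.

# Simplified sketch of mixed automatic/manual profile moderation
# (hypothetical thresholds and classifier score, not a real platform's code).
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # "keep", "remove", or "human_review"
    score: float  # model's estimated probability that the profile is fake

def moderate_profile(fake_probability: float,
                     remove_threshold: float = 0.95,
                     keep_threshold: float = 0.20) -> ModerationDecision:
    """Route a profile based on an (assumed) deepfake-classifier score."""
    if fake_probability >= remove_threshold:
        return ModerationDecision("remove", fake_probability)
    if fake_probability <= keep_threshold:
        return ModerationDecision("keep", fake_probability)
    # Ambiguous cases go to manual review to reduce wrongful bans.
    return ModerationDecision("human_review", fake_probability)

print(moderate_profile(0.97))  # ModerationDecision(action='remove', score=0.97)
print(moderate_profile(0.55))  # ModerationDecision(action='human_review', score=0.55)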

Similarly, there are tools that use AI to automatically detect deepfake videos [11, 12]. Researchers from Meta released the Deepfake Detection Challenge (DFDC) [13] to encourage researchers to develop better tools to detect deepfake videos and images. In 2020, Microsoft announced the release of Microsoft Video Authenticator [14], a tool that analyzes still photos or videos and provides a confidence score indicating whether the media has been artificially manipulated. For videos, it can provide the confidence score in real time for each frame as the video plays.
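
The sketch below illustrates the general idea of per-frame confidence scoring; it is not Microsoft's code or API, and the placeholder classifier stands in for whatever image-level deepfake detector a real tool would use.

# Illustrative per-frame scoring loop (assumed design, not an actual product).
from typing import Callable, Iterable, List

def score_video(frames: Iterable, frame_classifier: Callable[[object], float],
                flag_threshold: float = 0.8) -> List[dict]:
    """Return a per-frame manipulation-confidence score and flag suspicious frames."""
    report = []
    for index, frame in enumerate(frames):
        score = frame_classifier(frame)  # assumed: P(frame is manipulated)
        report.append({"frame": index, "score": score, "flagged": score >= flag_threshold})
    return report

# Usage with dummy frames and a dummy classifier; a real pipeline would decode
# actual video frames and call a trained model here.
dummy_classifier = lambda frame: 0.9 if frame == 1 else 0.1
print(score_video(range(3), dummy_classifier))
# [{'frame': 0, 'score': 0.1, 'flagged': False}, {'frame': 1, 'score': 0.9, 'flagged': True}, ...]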

Since there are bias and discrimination issues across AI systems, including computer vision, the AI research discipline behind video and image technologies, the same issues can be found in automatic systems that detect deepfake videos or images. For example, a study found that almost one third of the videos in FaceForensics++, a dataset commonly used for training deepfake detection models, feature female Caucasian subjects [15]. After balancing the training dataset to test the models' generalizability, the study found that the deepfake detection model performed worst on videos featuring subjects with darker skin, with the error rate increasing roughly 22-fold between videos featuring Caucasian male subjects (0.3%) and videos featuring African male subjects (6.7%) [15]. The study also found that videos featuring African or Asian women are more likely to be mistakenly labelled as fake than videos featuring Caucasian men, and that videos featuring women in general are more likely to be mistakenly labelled as fake [15]. Other studies found that both demographic attributes (e.g., age, gender, and ethnicity) and non-demographic attributes (e.g., hair, skin, etc.) impact the performance of deepfake detection models [16, 17]. This means that a deepfake detection model that can detect fake videos featuring young blond Caucasian women might not be able to detect fake videos featuring older women of colour.
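
The kind of disaggregated evaluation these audits perform can be sketched as follows: instead of reporting one overall accuracy, error rates are computed per demographic group. The group names and toy predictions below are purely illustrative and are not taken from FaceForensics++.

# Minimal sketch of per-group error-rate evaluation for a deepfake detector.
from collections import defaultdict
from typing import Dict, List, Tuple

def error_rates_by_group(
    records: List[Tuple[str, int, int]]  # (group, true_label, predicted_label)
) -> Dict[str, float]:
    errors: Dict[str, int] = defaultdict(int)
    totals: Dict[str, int] = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        errors[group] += int(y_true != y_pred)
    return {group: errors[group] / totals[group] for group in totals}

# Toy example: label 1 = fake, 0 = real; predictions are made up for illustration.
data = [
    ("caucasian_male", 0, 0), ("caucasian_male", 1, 1), ("caucasian_male", 0, 0),
    ("african_female", 0, 1), ("african_female", 1, 1), ("african_female", 0, 0),
]
print(error_rates_by_group(data))
# {'caucasian_male': 0.0, 'african_female': 0.333...}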

What makes it particularly difficult to detect fake videos is that AI algorithms and deepfake applications are getting better at generating videos without visible distortions, which means that current detection methods, such as spotting distortions in fake videos, no longer work reliably. This makes detection difficult not only for automated AI deepfake detection models but also for humans. For example, in a study of 210 human subjects' ability to detect fake videos, the researchers found that people can't reliably detect deepfakes and that neither raising their awareness nor offering financial incentives improved their detection accuracy. They also found that subjects tend to mistake deepfake videos for authentic videos rather than the other way around [18]. Spotting a fake video is thus a very challenging task, not only for deepfake detection models but also for humans. Another study found that people's accuracy in detecting deepfake videos varies by demographics, and that their accuracy improves when they classify videos that match their own demographics [19]. This introduces human bias, which is likely to be transferred to AI deepfake detection models through biased human-provided labels in the training dataset.

These findings suggest that it is difficult to stop deepfakes from spreading on social media through detection alone, and that the efforts of social media platforms to detect and remove deepfakes perform poorly for marginalized groups like women and people of colour, the same groups targeted by deepfake videos in the first place. Humans are not necessarily better or less biased at detecting deepfakes either.

This means that to address the challenge of deepfakes in particular, and mis- and disinformation in general, we need to think beyond technosolutionism and look for solutions other than developing AI tools to solve a problem that was created by AI tools. On the other hand, we can't rely solely on human content moderators to spot deepfakes or mis- and disinformation: they are not only unreliable, but content moderation is a daunting task that can cause psychological harm to the moderators [39, 40].

Since humans are not reliable at detecting deepfakes and current AI deepfake detection models are discriminatory, we have to think of other possibilities. A first step forward could be to improve existing AI detection models by including the social and historical context in the datasets used to train them. For instance, it should be taken into consideration that, historically, women of colour have been more sexually objectified than white women [43, 44]. This means that it is crucial to train AI deepfake detection models on inclusive datasets with a balanced representation of people from different demographics.
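
One simple way to approximate such balance, sketched below under the assumption that reliable demographic labels are available, is to group the training videos by a demographic attribute and downsample every group to the size of the smallest one. The attribute and file names are hypothetical, and a real pipeline would need to consider several attributes jointly.

# Hedged sketch of demographic balancing by downsampling (illustrative only).
import random
from collections import defaultdict
from typing import Dict, List

def balance_by_attribute(samples: List[dict], attribute: str,
                         seed: int = 0) -> List[dict]:
    groups: Dict[str, List[dict]] = defaultdict(list)
    for sample in samples:
        groups[sample[attribute]].append(sample)
    smallest = min(len(members) for members in groups.values())
    rng = random.Random(seed)
    balanced: List[dict] = []
    for members in groups.values():
        balanced.extend(rng.sample(members, smallest))  # downsample each group
    return balanced

# Hypothetical unbalanced video list: 70 female-subject vs 30 male-subject clips.
videos = (
    [{"path": f"a{i}.mp4", "gender": "female"} for i in range(70)]
    + [{"path": f"b{i}.mp4", "gender": "male"} for i in range(30)]
)
print(len(balance_by_attribute(videos, "gender")))  # 60: 30 female + 30 male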

For a more long-term solution, we need stricter policies and regulations that can effectively stop the spread of deepfake applications, or at least make them harder to use. For example, GitHub disabled the DeepFaceLab repository [20]. However, this came rather late, after the tool had reportedly been used to create 95% of deepfake videos, as claimed by the Deepfake Forums and Creator Community [42], and similar steps still need to be taken by other companies, such as Apple, whose App Store hosts the DeepFaceLab - Face Swap Editor app [41]. Holding people accountable for the videos they generate, and building safe internet spaces rather than spaces of hate and mis- and disinformation designed to push engagement and generate profit, is another important step. Similarly, it is essential to implement policies that protect our online data from being mass-scraped and used to train AI algorithms.

Conclusion

In this article, we discussed bias and discrimination in AI algorithms and AI applications in general and in deepfake applications in particular. We also showed that humans are not reliable at detecting fake videos and that AI deepfake detection models are discriminatory. We then discussed possible ways to mitigate the problem with deepfakes, such as using more inclusive and representative datasets to train both deepfake applications and deepfake detection models. This goes hand in hand with implementing stricter regulations on different fronts, such as protecting our data and copyrights and fighting mis- and disinformation.

References:

  1. https://www.elgaronline.com/edcollchap/book/9781803928562/book-part-9781803928562-5.xml
  2. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
  3. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3705658
  4. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
  5. https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/
  6. https://efatmae.github.io/files/publications/2021/sigir_2021.pdf
  7. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
  8. https://www.security.org/resources/deepfake-statistics/
  9. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759677
  10. https://dl.acm.org/doi/pdf/10.1145/3613904.3641999
  11. A survey of face manipulation and fake detection. arXiv preprint arXiv:2001.00179, 2020.
  12. The creation and detection of deepfakes: A survey. ACM Computing Surveys (CSUR), 54(1):1–41, 2021.
  13. https://ar5iv.labs.arxiv.org/html/2006.07397
  14. https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/
  15. https://www.ijcai.org/proceedings/2021/0079.pdf
  16. Analyzing Fairness in Deepfake Detection With Massively Annotated Databases https://ieeexplore.ieee.org/abstract/document/10438899
  17. Deepfake: Classifiers, Fairness, and Demographically Robust Algorithm https://ieeexplore.ieee.org/abstract/document/10581915
  18. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8602050/
  19. https://www.nature.com/articles/s44260-024-00006-y
  20. https://github.com/iperov/DeepFaceLab/blob/master/README.md
  21. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
  22. Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities https://dl.acm.org/doi/10.1145/3461702.3462540
  23. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies https://link.springer.com/article/10.1007/s00146-022-01553-5
  24. https://www.vox.com/future-perfect/22672414/ai-artificial-intelligence-gpt-3-bias-muslim
  25. How Could Equality and Data Protection Law Shape AI Fairness for People with Disabilities? https://dl.acm.org/doi/10.1145/3473673
  26. https://www.unesco.org/en/articles/new-unesco-report-warns-generative-ai-threatens-holocaust-memory
  27. On the Origins of Bias in NLP through the Lens of the Jim Code https://arxiv.org/abs/2305.09281
  28. SOS: Systematic Offensive Stereotyping Bias in Word Embeddings https://aclanthology.org/2022.coling-1.108/
  29. Systematic Offensive Stereotyping (SOS) Bias in Language Models https://arxiv.org/abs/2308.10684
  30. Race After Technology https://www.ruhabenjamin.com/race-after-technology
  31. https://www.wired.com/beyond-the-beyond/2019/01/safe-face-pledge/
  32. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
  33. More Than a Glitch https://mitpress.mit.edu/9780262548328/more-than-a-glitch/
  34. Biases in Large Language Models: Origins, Inventory, and Discussion https://dl.acm.org/doi/10.1145/3597307
  35. Ethics and discrimination in artificial intelligence-enabled recruitment practices https://www.nature.com/articles/s41599-023-02079-x
  36. https://www.securityhero.io/state-of-deepfakes/#:~:text=The%20total%20number%20of%20deepfake,of%20all%20deepfake%20videos%20online.
  37. Twitter Universal Dependency Parsing for African-American and Mainstream American English https://aclanthology.org/P18-1131/
  38. https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
  39. Behind the Screen: Content Moderation in the Shadows of Social Media https://yalebooks.yale.edu/book/9780300261479/behind-the-screen/
  40. Content Moderation: The harrowing, traumatizing job that left many African data workers with mental health issues and drug dependency https://data-workers.org/fasica/
  41. https://apps.apple.com/us/app/deepfacelab-face-swap-editor/id1568914185
  42. https://www.deepfakevfx.com/downloads/deepfacelab/
  43. Revisiting the Jezebel Stereotype: The Impact of Target Race on Sexual Objectification https://journals.sagepub.com/doi/10.1177/0361684318791543
  44. Opinion: Society needs to stop sexualizing Latina women https://www.statepress.com/article/2021/02/spopinion-society-needs-to-stop-sexualizing-latina-women
  45. https://urfjournals.org/open-access/beyond-traditional-assessment-exploring-the-impact-of-large-language-models-on-grading-practices.pdf
  46. https://d1wqtxts1xzle7.cloudfront.net/115274703/2105-libre.pdf?1716614482=&response-content-disposition=inline%3B+filename%3DConfronting_Structural_Inequities_in_AI.pdf&Expires=1728486104&Signature=VfIoc~qNQ4f~6h~Ohs7G8BtY8vHMgoP0aNb8YOaIGpJl4tfarsGjlCnYGRDQZzEicat5mzfID9ZN1W756Z8pVpwzWUGvGwPPexF-VTJQYtOB352nIFgDHaDtOSD7u3ReE3dXI96Eg3o3pyynrzj2FRze5eCvC0nu3Tvg85kpoaZf4WiB15DpFnMoPmr5b1V5M-LXZLKY1gilSzrTh4sikeZ3JAl~sgqAHxqkWn3wK6lZ2CcfbV4CXuozUxBbb-6VJFuuPytF7djk7M0rQEdKdIgwH547nbionEXkwjf1QZgROAUfc2q7iYV4XbXudxd5BF8VHLJ26Zkpb5G7S-~t0A__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA
  47. https://link.springer.com/article/10.1007/s40593-021-00285-9#ref-CR112
  48. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2018.00481/full
  49. https://aclanthology.org/Q18-1041.pdf