Ask a Tech

Exploring the shadows of AI: Navigating the ethical minefield of digital exploitation and deepfakes

Digital duality: The stark reality of authenticity versus deception in the age of deepfakes.

In a previous column, I highlighted the issue of scammers using AI-generated voice imitations to defraud people by impersonating their family, friends and other loved ones. Today, I want to delve into the equally alarming topic of deepfakes.

Here is a personal anecdote to illustrate. Around a decade ago, my family visited New Zealand, but unfortunately, my sister couldn’t join us. We took a cherished family photo during the trip, which felt incomplete to us without her. To remedy this, I digitally added her image to the photo. Despite my efforts, the result was noticeably awkward, with mismatched lighting and other inconsistencies, but it was still meaningful to my mother.

The advancement of AI technology has dramatically improved the ability to seamlessly integrate individuals into photos or videos, making it incredibly difficult to discern alterations. This capability is astonishing yet unsettling, particularly when misused. For example, the renowned artist Taylor Swift was the victim of non-consensual, sexually explicit deepfakes, leading to significant public outrage and proposed legislative action by the United States government to curb the distribution of such content.

The potential harm of this technology extends beyond celebrities to ordinary individuals, including children and loved ones, who could be targeted and exploited online. Given the visual nature of how we consume content online, especially among younger audiences, digital manipulation poses a significant threat to privacy and mental wellbeing, as people often take what they see online at face value. This situation underscores the urgent need for awareness and regulatory measures to protect individuals from such invasive and harmful uses of AI.

The threat of blackmail in the digital age is a pressing concern. Imagine receiving a message from an anonymous individual threatening to release compromising content across various online platforms unless a ransom is paid. Such a scenario is deeply distressing, as societal norms often lead to snap judgments without considering the authenticity of the content. A significant challenge lies in the sluggish response of social media platforms to remove such content; once images are circulated online, the damage is often irreversible. Reporting these incidents to law enforcement presents its own set of challenges, as the anonymity afforded by the internet makes it difficult to trace the source of the attack.

Bullying is a significant concern for children in schools, and the misuse of technology can exacerbate this issue. Young individuals are increasingly judged by their online presence, and even content that isn’t explicitly harmful — such as a video where a young person speaks negatively about peers — can be damaging when spread online. The situation becomes even more serious when explicit content involving minors is involved. In such cases, reporting the matter to authorities can lead to more effective outcomes and potentially result in criminal charges.

AI technology holds great potential for entertainment, as demonstrated by Disney's Star Wars-related series, which used digital effects to create a younger version of Luke Skywalker, aligning his appearance with the narrative timeline. However, this innovative use of technology has sparked debates within the entertainment industry. Many actors are raising concerns about the use of their digital likenesses in movies and TV shows without their consent, especially when it pertains to characters they’ve portrayed. These concerns are rooted in issues of creative rights and ownership, highlighting a complex intersection between technology and intellectual property.

I deliberated considerably before composing this article, mindful that it might inadvertently provide malicious individuals with ideas for exploiting others. However, I believe it’s crucial to shed light on the darker aspects of AI technology and the potential harm it can inflict. Understanding the negative implications of AI is important for everyone, as it can lead to a more empathetic and less judgmental response to victims of such targeting. By discussing these issues openly, we aim to foster a more informed and compassionate community.

As always, I hope you enjoyed the column, and if you have any suggestions or want to reach out, you can contact me at askatech@mmg.com.au