The Dark World of Artificial Intelligence

AI is a dangerous tool that, if misused, can lead to widespread mistrust in society and distort the true narrative

February 17, 2023

It’s loved and hated. We embrace it, but we also curse its invention. It makes our lives easier, yet we fear it. “It” is artificial intelligence (AI), a technology predicted to dominate financial markets and change the world as we know it. So what is AI, exactly? As the Oxford Dictionary defines it, AI is the simulation of human intelligence processes by machines, especially computer systems. The field is relatively young, born only about seventy years ago, and we have barely scratched the surface of what it can do. Already, though, AI can perform operations that demand immense detail and precision. It works faster than the average human and delivers consistent results. Applied in healthcare and transportation, it is projected to save tens of thousands of lives every year.

Underneath all of this, however, lies something dangerous. All good things can be exploited, and AI is no exception. Deep fakes, videos in which AI is used to digitally alter a person’s face or body, are one of the ways people misuse AI. The issue is becoming more and more prevalent as more people get their hands on this high-end technology. Because it is all so new, the majority of people gaining access to this AI are those with disposable income, and they can use it to manipulate and change the public’s views toward minority groups.

In September 2022, CCTV footage recording the death of a young woman at the hands of Iran’s morality police led to massive protests in Iran and around the world. Similarly, bystanders’ videos of George Floyd’s tragic death sparked massive protests against police brutality. These powerful recordings spread through social media, bringing people together for a cause they believe in and ultimately leading to widespread protest. Such spontaneous movements are an integral part of any society and drive corrective measures in democracy and law. Deep fake videos, however, are used to rewrite the real story. Regimes or individuals opposed to a movement can circulate deep fakes that sow doubt in people’s minds about whether the original video is true. Video recordings are also widely used as evidence in court, and fabricated footage could lead juries to verdicts based on lies; the advent of deep fakes makes it harder for jury members to trust bystander videos at all. Deep fakes can even be used in war: in March 2022, a dangerous deep fake video of Ukraine’s President was released, falsely ordering his troops to surrender and spreading confusion through the Ukrainian ranks.

Currently, governments are trying to fight the spread of these fake videos. In June 2022, the European Union published the “Code of Practice on Disinformation,” which calls for large platforms such as Meta, Google, Microsoft, Twitter, and TikTok to curb the spread of malicious deep fakes and disinformation. Platforms that fail to carry out risk mitigation measures could incur massive fines of up to 6% of their global turnover.

In the US, the Defense Advanced Research Projects Agency (DARPA) runs programs devoted to the detection of deep fakes. These programs aim to develop tools that automatically detect, attribute, and characterize deep fakes. The work is still in progress, and DARPA requested $28.9 million for the program for fiscal year 2023.

Meta, formerly Facebook, created the Deepfake Detection Challenge (DFDC) to spur researchers around the world to build innovative new technologies that can help detect deep fakes. In 2020, Microsoft released a tool called Video Authenticator, which provides a score indicating the likelihood that a video has been digitally manipulated by AI. Similar research is underway at other large platforms like Google. So far, all of these tools are still in their infancy, and no platform-agnostic solution exists.
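To make the idea of a manipulation score concrete, here is a minimal sketch of how a frame-level detector could be wired together. Video Authenticator itself is proprietary, so the `score_frame` function below is a hypothetical stand-in for a trained classifier; only the video-reading plumbing (OpenCV) reflects real library calls.

```python
# Sketch of a frame-level "manipulation likelihood" scorer.
# Assumption: score_frame is a placeholder; a real tool would call a
# trained neural network here, not the toy blur statistic used below.
import cv2  # pip install opencv-python
import numpy as np


def score_frame(frame: np.ndarray) -> float:
    """Hypothetical per-frame manipulation score in [0, 1].

    This stand-in uses a crude sharpness statistic purely to keep the
    sketch runnable; it is NOT a real deep fake detector.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return float(1.0 / (1.0 + sharpness / 100.0))


def score_video(path: str, sample_every: int = 30) -> float:
    """Average per-frame scores over frames sampled from the video."""
    cap = cv2.VideoCapture(path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % sample_every == 0:
            scores.append(score_frame(frame))
        i += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # "clip.mp4" is a placeholder file name.
    print(f"manipulation likelihood: {score_video('clip.mp4'):.2f}")
```

A production detector would replace the toy statistic with a neural network trained on labeled real and fake footage, such as the dataset released for the DFDC.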

People must be aware that these AI systems exist, and they should have tools to verify and certify that a video is genuine. This includes checking the video’s metadata and paying close attention to faces, glare, glasses, and other identifying features. Users who post sensitive videos should be required to submit information proving the content is authentic, and social media platforms should enforce that requirement. Malicious deep fakes are no doubt harmful to society, and it is on all of us to work together to mitigate the risks.
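As one concrete example of the metadata check described above, the sketch below uses the real ffprobe command-line tool (part of the free FFmpeg toolkit) to pull a video file’s container tags; `clip.mp4` is a placeholder file name. Missing or inconsistent fields are weak hints of re-encoding or tampering, not proof of either.

```python
# First-pass metadata inspection with ffprobe (requires FFmpeg installed).
import json
import subprocess


def probe_metadata(path: str) -> dict:
    """Return container and stream metadata for a video file as a dict."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)


if __name__ == "__main__":
    meta = probe_metadata("clip.mp4")  # placeholder file name
    tags = meta.get("format", {}).get("tags", {})
    # Fields worth a first look: when and with what the file was written.
    # Tag names vary by container, so absent fields are not conclusive.
    print("creation_time:", tags.get("creation_time", "<missing>"))
    print("encoder:", tags.get("encoder", "<missing>"))
```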
