It’s no secret that AI has had a heavy impact not only on the advancement of technology, but also on social media, where it can both positively and negatively affect our youth. On social media in particular, AI has provided new forms of graphic art and editing tools that help with content creation. But with great power comes great responsibility: AI has also enabled new fabrication techniques capable of misrepresenting individuals or groups and damaging their reputations.
Laws governing AI-generated content in the media are typically very light, and many social media platforms fail to monitor deceptive content, offering poor moderation while prioritizing revenue. Through these failures, tech companies leave victims of AI-generated abuse unprotected.

According to statistical data from arXiv.org, a poll of a wide range of respondents, the majority of them women, found that 2.2% reported being victimized by synthetic intimate imagery, while 1.8% admitted to perpetration.
As previously mentioned, enforcement is inconsistent, leaving victims without meaningful recourse and posing a direct threat to journalistic integrity and democracy. Therefore, I believe stricter enforcement is needed over what can be uploaded to the internet.
Moreover, moderation is lacking: rates of detecting manipulated media hover at around 55%, according to ScienceDirect.com, and tech platforms have proven unwilling or unable to filter this content despite the availability of stronger moderation tools.
As for who sees this content, it tends to reach the younger generations, with children and young women making up the majority. Exposure can be detrimental, as the content often contains false information, violence, and a general lack of diversity.
Research conducted by MCASA on deepfakes, and on their ability to push the internet into a phase of uncertainty filled with harmful content, found that 96% of deepfake videos are demeaning in nature toward well-known leaders and young women.
These intentionally demeaning videos are especially harmful because what you’re looking at isn’t even technically real. They are created through various sites where any user can type a prompt and generate a video at their fingertips, making this content far too easy to produce.
As noted earlier, these deepfake videos tend to reach their intended audiences, unfortunately manipulating viewers into believing what they’re seeing is real, and often stirring a mix of emotions such as confusion, discomfort, and sadness.
There needs to be stronger enforcement over who can upload AI-generated videos, along with a proper moderation check before anything goes public.
Governments must implement stronger regulations on deepfakes and non-consensual imagery, because current laws governing AI technology are ineffective, leaving victims vulnerable and fueling distrust in the media. The unchecked growth of deepfakes and non-consensual imagery presents not only a personal safety crisis but also a societal threat to truth and trust.