How can we stop child sexual abuse online?

Surjit Singh Flora
(Asian Independent) Child sexual abuse material (CSAM) generated by AI is fast becoming a major online threat to children. With the rise of generative AI, offenders can produce highly realistic abusive images and videos of fictitious children, or manipulate existing content to make it appear that new abuse has occurred. The result is a large and worrying gap in the global child-protection system, one that current laws and safeguards are not filling.
The fact that harm can occur without the physical presence of any child highlights the seriousness of this crisis.
The truth is that there is no definitive solution. Part of the problem is that some young people deliberately seek attention online and misrepresent their age, which blurs how such ‘exploitation’ unfolds.

This creates a range of complications, because the perpetrator may not know the actual age of the other person; the ‘mistreatment’ can be entirely inadvertent.

Different jurisdictions’ varying legal definitions of adulthood and the age of consent add to this confusion.

I am not seeking to rationalize intentional abuse; rather, I am highlighting the intrinsic challenges involved.

The United States alone reports 2.9 million cases of child abuse annually, and more than four children die there every day from the injuries they sustain through victimization.
AI tools that generate CSAM often produce new explicit material by drawing on databases of existing images of children, including images of children who have already been abused. Survivors whose photographs have been altered and used without their consent have described how distressing the experience is.
The psychological impact is significant, and the legal implications are often unclear across jurisdictions. Europol recently coordinated a global operation in which 25 people were arrested for creating and distributing AI-generated CSAM.
The operation illustrates how widespread and systematic this abuse has become. Investigators found that offenders were producing material designed to evade detection using simple, readily available AI tools. Known CSAM can be identified with hash-matching technologies, but newly generated abusive content matches no existing hash, and each new version is harder to flag, making it nearly impossible to detect with current tools.
Some AI-generated CSAM looks so much like real children that even experienced analysts struggle to distinguish synthetic images from genuine cases of abuse. Predators use these methods to create and disseminate content that is difficult to monitor, particularly on the dark web and in encrypted communities, where much of this activity thrives. In the UK, the National Crime Agency has described this development as a “nightmare” and is calling for urgent reform. The advent of AI also complicates legal definitions: if no real child is involved, is the content still illegal? Experts increasingly insist that the answer is yes, because the intent and the effect are just as harmful. Some jurisdictions have begun amending their laws to make clear that AI-generated CSAM is illegal, but enforcement problems remain.
The most dangerous consequence of this technology is its scale. A single offender can now use AI to generate thousands of images in minutes. Because this content circulates worldwide, detection systems that are already stretched fall further behind, and the problem will only worsen as the technology improves and becomes easier to access while detection fails to keep pace.
In countries such as India, where internet connectivity is expanding rapidly, awareness of AI-generated threats remains limited. Advances in technology have driven a significant increase in online child sexual abuse in India, and the Indian Centre for Child Protection plays an essential role in helping Indians combat online child sexual abuse and exploitation. Recently, Ashok Kumar, regional vice president of the International Justice Mission, advised parents not to upload images of their children to social media. It was also noted that children are being drawn into other forms of cybercrime: in one case, a nine-year-old from Bengaluru was found to be uploading highly pornographic videos to a server traced to Switzerland.
Parents must teach their children never to trust someone they have met only online. It does not matter how well you think you know them; you do not. Someone you have met online but never in person may be trying to convince you that you are in love with them, because they are imitating the kind of person they believe you want them to be. It is unacceptable for any adult to cultivate a relationship with a child on the internet, even if the child insists the adult is nothing more than a friend.

Parents: be honest with your children. Use parental controls. Explain that it is your job to keep them safe and that there are many dangerous people in the world. Limit their use of devices and do not give them too much unsupervised privacy online, while keeping your oversight low-key.

Parents and children should establish security measures, rules, and policies together, because online harassment and defamation can cause distress and significant harm. Although the danger cannot be removed entirely, there are several precautions people can take to protect themselves on social media platforms such as Twitter. The following are some suggestions:

Adjust your privacy settings: Review and adjust your privacy settings on Twitter to control who can engage with your tweets, send you direct messages, and tag you in posts. Restricting access to your material reduces unwanted attention and abuse.

Use strong, unique passwords: Choose a strong, complex password for your Twitter account and avoid reusing it on other sites. Enable two-factor authentication (2FA) for an additional layer of protection.
Block and report abusive accounts: If you encounter harassing or abusive behaviour, block and report the accounts responsible immediately. Twitter provides tools for reporting harassment, threats, impersonation, and other abusive material. Consider using the mute option to hide content from particular accounts or keywords.
Minimize personal information: Be cautious about sharing sensitive personal information in your Twitter profile or tweets. Avoid disclosing details that could be used against you, such as your home address, phone number, or other personally identifying information.
Be selective about your followers: Review and approve the people who follow you, and consider using the protected tweets feature, which limits your posts to trusted followers only. This gives you far more control over who sees your content.
Think before you tweet: Be careful about what you share online. Consider the possible consequences of your words and how they could be interpreted, and avoid heated or inflammatory arguments, which can escalate into harassment or defamation.
Preserve evidence: If you are subjected to online harassment or defamation, take screenshots or copies of the offending material. This documentation will help if you need to report the incident to Twitter or to the police.
Seek support: In difficult moments, reach out to friends, family, or support groups for emotional support. Talking about your experience with someone you trust can be comforting and can help you cope with the stress of online abuse.
Consider legal action: If the online harassment or defamation you have experienced is serious, consult a lawyer to understand your options. Laws governing online harassment and defamation vary from country to country, so seek advice from a specialist familiar with your circumstances.
It’s crucial to prioritize mental health and take social media breaks when needed.
According to a global survey, 54 percent of young people reported experiencing online sexual harm before the age of 18. India now has the second-highest number of internet users in the world, with urban internet penetration exceeding 100 percent.
Although digital literacy programs have raised awareness, many parents, teachers, and policymakers still do not grasp how dangerous generated content can be. The rise of AI-generated CSAM strains an already struggling protection system. India’s digital child-protection policies need to be reviewed urgently.
Legal change is only part of the answer. Technology companies should strengthen their tools for detecting AI-generated content before it is distributed. AI governance needs an ethical foundation: creating or depicting the sexual abuse of children, real or synthetic, should be illegal. We should always aim to use technology to empower, not to oppress.
