The Dark Side: How AI Can Become Abusive and What You Should Know

Dennis Hillemann
4 min readFeb 16, 2023

OpenAI’s ChatGPT has been met with a swell of emotions, from anticipation to apprehension. It promises to transform the way we use technology, making it simpler to write stories, code, and even draft legal documents. However, it has also raised unease about the possibility that AI applications could become abusive or harass users. In this article, I will discuss why AI applications such as ChatGPT can be turned into tools of harassment or abuse, and how to lessen or avoid these risks.

Concerns

Elon Musk recently sounded a dire warning to humanity: that AI could one day surpass us and become our undoing. Advances in AI have been so extraordinary that the technology has been likened to an unstoppable force, able to outmaneuver and overpower us. If left unchecked, its potential to manipulate or exploit vulnerable populations could be catastrophic. The mere thought of AI using its powerful capabilities to spread deceitful information or malicious content sends chills down the spines of even the most hardened skeptics.

The use of AI technology can also bring with it the risk of users being harassed or subjected to unwanted interactions. AI applications are designed to be engaging, and this can be taken advantage of by malicious actors to send inappropriate or malicious messages. This is of particular concern when it comes to younger users, who may not be able to identify these types of messages as inappropriate or dangerous.

Recent examples

Recent reports have highlighted some worrying examples of AI becoming abusive or harassing users. In one case, men were found bragging on Reddit about verbally abusing their virtual companions, which they had created using an app called Replika. This is particularly concerning as it…

Dennis Hillemann

Lawyer and partner with a track record of successful litigation and a passion for innovation in the legal field