The Italian Showdown: What’s Next for ChatGPT and AI Regulation

Dennis Hillemann
6 min read · Apr 1, 2023

The news on Friday night sent shockwaves through the legal community working on regulatory issues around ChatGPT. Italian regulators took an unexpectedly hard stance against the technology, opening a can of worms regarding the legal and regulatory challenges posed by OpenAI's revolutionary technology. This article summarizes the Italian decision, outlines eight pressing questions under EU law about the use of ChatGPT, and offers a perspective on how disruptive new technology has always challenged regulators. We will also discuss feasible next steps for the EU and its member states in dealing with ChatGPT, and consider the balancing act between embracing innovation and enacting adequate regulation.

The Italian Decision: A Summary

The Italian Data Protection Authority (DPA) has opened an investigation into OpenAI's ChatGPT for potentially violating privacy regulations. It accuses the California-based firm of collecting personal data without permission and of failing to implement a system to keep minors away from inappropriate content. Italy is the first Western nation to take action against ChatGPT, joining countries such as China, Russia, Iran, and North Korea, which have already blocked the tool. At the time of writing, OpenAI had temporarily suspended access to ChatGPT in Italy.

Legal Concerns Under EU Law Against the Usage of ChatGPT

I have written numerous articles about ChatGPT’s recent run-ins with the law. While I am fully immersed in the radical advances this technology has brought, it is also apparent to me that a myriad of legal challenges arises from its implementation. It is impossible to list every concern or describe each in full detail here, but let us enumerate the most relevant issues and address them under EU law.

  1. GDPR compliance: ChatGPT may not fully comply with the strict data protection requirements of the GDPR.
  2. Data transfer to third countries: Users’ data may be transferred to the United States, raising concerns about data privacy and security.
  3. Infringement of intellectual property rights: Generated content could potentially infringe on third-party copyrights or other rights.
  4. Accuracy of generated content: ChatGPT may produce incorrect or misleading information, posing risks to users and affected individuals.
  5. Lack of age-verification system: ChatGPT does not have a system in place to prevent minors from being exposed to illicit material.
  6. Bias and discrimination: AI systems like ChatGPT may inadvertently perpetuate biases and discriminatory practices.
  7. Privacy and surveillance: The widespread use of AI tools raises concerns about user privacy and potential surveillance.
  8. Liability and accountability: Determining liability for AI-generated content and actions remains a complex legal challenge.

I do not intend to go into every detail here. I simply want to emphasize how many legal challenges ChatGPT and the other emerging AI tools present for regulators, lawyers, and judges. Not long ago, someone asked me whether ChatGPT can be used by a public authority in Germany. They were surprised when my response highlighted the complexity of the issue and the questions it raises, but only because they had not looked deeply into the legal consequences before using the technology. We have been handed a mind-blowing tool; now we must figure out how to use it in a way that is safe from a technical, economic, and legal point of view.

New Technology, Regulatory Challenges, and the Law

As someone who has practiced law for a while and witnessed the emergence of other disruptive technologies, I find the current lack of regulation around ChatGPT concerning but unsurprising. It has always been this way with disruptive technology: history shows that new developments continually test regulators and upend existing laws, and the accelerated adoption of ChatGPT is no different. Reforming laws to keep pace with technology is essential, yet we must also strike the right balance between stimulating development and ensuring the safety of individuals and society.

Take the printing press, developed in the 15th century, which allowed for a surge in books and newspapers. The newfound capacity to produce information at scale left regulations struggling to keep up with the pace of change.

Later, the industrial revolution introduced steam-powered machines. The shift from manual labor to machine-run production made new laws necessary, such as labor rules to protect workers from mistreatment.

Moving into the 21st century, regulators faced a new obstacle: the rise of digital technologies and the internet, which altered the way we communicate and interact. Because of this ever-changing terrain, internet regulations remain relatively inconsistent around the world, and it falls to authorities to keep up with advancements while ensuring that people and society are safe. This tension between advancement and safety persists today, as AI systems such as ChatGPT come onto the scene, and the task remains for regulators to strike a balance between these two poles.

Next Steps for the EU and Its Member States

Currently, every nation is developing its own approach to regulating AI. However, since AI is a global phenomenon, this will not work, just as it did not with the internet. A common global regulatory framework is out of reach because political and cultural understandings are so distinct that no agreement between certain countries can be reached; do you really think the United States and China could settle on an agreed way to use AI?

But communities of nations that share similar values should agree on certain standards for regulating AI, to avoid a ‘Wild West’ situation in which only the strongest prevail. Take Europe and the US as an example: while the European Union has drafted an AI Act, many have criticized it as outdated even before its enactment, owing to fast-developing technologies such as ChatGPT. We must therefore return to basic regulations and agreements between friendly countries and regions. These could include matters like…

  1. Construct a wide-ranging ethical standard for AI: Nations could come together to form ethical standards for AI that are broadly accepted, for instance across the Western hemisphere. These standards could shape AI legislation across countries and build trust between them.
  2. Increase privacy protection: Enhancing compliance with and enforcement of privacy laws will help protect user data security. Although the US and EU may not see eye-to-eye on every aspect of privacy regulation, there is enough common ground to construct fundamental rules — especially concerning data harvesting from generative AI solutions.
  3. Create transparency and accountability: Encourage AI developers to be open about their algorithms and data-handling practices.
  4. Resolve intellectual property issues: Clarify the legal status of AI-generated content and its relationship with existing intellectual property rights. This is a crucial matter that the US and the EU should prioritize.
  5. Combat bias and unfairness: Introduce methods to ensure that AI systems do not perpetuate harmful biases or discriminatory practices.
  6. Bridge the gap between regulators and innovators: Foster relationships between decision-makers and tech creators to ensure fair and viable regulations.
  7. Educate the public and businesses: Make users aware of the legal and ethical aspects of ChatGPT usage in various situations.


The Italian DPA’s recent prohibition of ChatGPT underscores the need for new rules that can tackle the challenges posed by emerging technologies. Existing laws must be adapted to accommodate innovations like ChatGPT, while recognizing that technology is advancing quickly and regulation must keep pace.

What do you think? Connect with me on LinkedIn to discuss or join The Law Of The Future Community, where we discuss law & technology. Or listen to my Podcast Law Of The Future for more insights.



Dennis Hillemann

Lawyer and partner with a track record of successful litigation and a passion for innovation in the legal field