In a move that has surprised the tech world, Italy has announced that it is banning the use of ChatGPT, a popular AI language model, over privacy concerns.
ChatGPT, which was developed by OpenAI, has been used by millions of people around the world to generate text responses to a variety of queries, from trivia questions to philosophical musings. However, its use has been controversial, with concerns raised about the potential misuse of personal data and its impact on privacy.
On Friday, Italy’s data protection agency, the Garante per la Protezione dei Dati Personali, announced that it would immediately block the chatbot from collecting Italian users’ data while authorities investigate OpenAI.
The investigation comes after the chatbot suffered a data breach on March 20 that exposed some users’ personal data, including their chat history and payment information. According to OpenAI, the bug that caused the leak has been patched.
But the data breach was not the only cause for concern in the eyes of the Italian government. The agency questioned OpenAI’s data collection practices and whether the breadth of data being retained is legal. The agency also took issue with the lack of an age verification system to prevent minors from being exposed to inappropriate answers.
OpenAI has been given 20 days to respond to the agency’s concerns; otherwise, the company could face a fine of up to €20 million (about $21 million) or 4% of its annual global revenue, whichever is greater.
In recent years, Italy has taken a strong stance on data privacy, with the introduction of strict new laws and regulations designed to protect citizens’ personal information.
Italy is believed to be the first Western government to temporarily ban ChatGPT over data and privacy concerns. But similar fears have been mounting around the world, including in the U.S.
Earlier this week, the Center for AI and Digital Policy filed a complaint with the Federal Trade Commission over ChatGPT’s latest version, describing it as having the ability to “undertake mass surveillance at scale.”
The group asked the FTC to halt OpenAI from releasing future versions until appropriate regulations are established.
“We recognize a wide range of opportunities and benefits that AI may provide,” the group wrote in a statement.
“But unless we are able to maintain control of these systems, we will be unable to manage the risk that will result or the catastrophic outcomes that may emerge.”