Ban on ChatGPT in Italy: will the chatbot cease to exist in the near future?

On March 30, 2023, the Italian Data Protection Authority ordered the US company OpenAI LLC to temporarily stop ChatGPT's processing of personal data relating to individuals located in Italy. This measure has prompted other European countries to consider whether harsher measures are needed to rein in the wildly popular chatbot, and whether to coordinate such actions.

On March 23, 2023, CNBC reported that Sam Altman, CEO of OpenAI, had disclosed a bug that allowed some users of its popular AI chatbot ChatGPT to view messages from other users. The CEO later tweeted, "We had a significant issue in ChatGPT due to a bug in an open-source library, for which a fix has now been released and we have just finished validating." As a result of the fix, users were no longer able to view chat histories for ChatGPT conversations that took place between 1 a.m. and 10 a.m. Pacific time on Monday, March 20, 2023.

The Italian Data Protection Authority, also known as the Garante, highlighted this incident while also expressing concern over the chatbot's lack of age restrictions and its tendency to serve factually incorrect information in its responses. Italy, however, is not the only country expanding regulation on the AI horizon; several other governments are drawing up their own rules, and virtually every regulation now being drafted will need to address generative AI. Generative AI refers to a set of AI technologies that generate new content based on prompts from users. It is more advanced than previous iterations of AI, thanks in no small part to new large language models, which are trained on vast quantities of data.

AI has been woven into our daily activities at a rapid pace, which has made it difficult for governments to produce effective AI regulation. The technology automates tasks that were previously performed by humans, raising concerns among regulators that AI poses a threat to job security and equality. This has pushed governments to start thinking about how to deal with general-purpose systems such as OpenAI's ChatGPT, with several nations even considering temporary bans.

The U.K. recently announced its plans for regulating AI, asking existing regulators to apply current rules to the technology. Although the government's proposals did not mention ChatGPT by name, they outlined key principles for companies to follow when using AI in their products: safety, transparency, fairness, accountability, and contestability. Legal analysts expect that Britain will not restrict ChatGPT, or any kind of AI for that matter. Instead, the nation aims to ensure that companies develop and use AI tools responsibly and give users enough information about how and why certain decisions are made.

In the U.S., no proposal for overseeing AI technology has yet been brought forward. The country's National Institute of Standards and Technology has published a national framework that gives companies using, designing, or deploying AI systems guidance on managing risks and potential harms. However, the framework is voluntary, meaning firms face no consequences for failing to meet its standards.

ChatGPT is not available in nations such as Russia, Iran, China, or North Korea, nor in various other countries with heavy internet censorship. The app has not been blocked in these countries as such; rather, OpenAI does not allow users there to sign up. China, for its part, has been building its own AI alternatives: companies including Baidu and Alibaba have already announced plans for ChatGPT rivals. China has been keen to ensure its technology giants develop products in line with its strict regulations. In March 2023, Beijing introduced a first-of-its-kind regulation on so-called deepfakes: synthetically generated or altered images, videos, or text made using AI. Chinese regulators had previously introduced rules governing how companies operate recommendation algorithms, one requirement being that companies must file details of their algorithms with the cyberspace regulator. Such regulations could, in theory, apply to any ChatGPT-style technology.




Follow Global Lawyers Association for more news and updates from the international legal industry.




bottom of page