
AI regulation too strict: Is ChatGPT threatened with extinction in the EU?

A regulation governing AI-based programs such as ChatGPT is set to come into force in the EU. In the view of OpenAI CEO Sam Altman, however, the first draft goes a step too far.

Rules too strict for ChatGPT in the EU?

For months, the topic of “artificial intelligence” has been dominating far more than just the tech world. Software such as ChatGPT is widely seen as significant for society as a whole; some even compare the machine-learning-based technology to historical milestones such as industrialization or the invention of the printing press. It is therefore not surprising that politicians as well as the tech industry are calling for comprehensive rules on the use of artificial intelligence. Even one of the leading minds behind ChatGPT shares this view: Sam Altman, head of OpenAI, has long argued that policymakers must respond with appropriate regulation.

No sooner said than done: the EU did not have to be told twice and promptly drew up an AI regulation. From the perspective of the OpenAI chief, however, this goes a step too far. After examining the first draft, his verdict was scathing. Speaking on a panel at University College London on Wednesday, he called the regulation simply “over-regulation”. He nevertheless expressed a willingness to comply with the planned rules. If that proves impossible, however, the company would not be afraid to “shut down operations” in the EU, according to Altman.

Altman points to “technical limitations”

In a recent interview with Time magazine, Altman clarified his views on rules for artificial intelligence. He emphasized that such rules are of great importance, but also made clear that there are “technical limits” to them. Over-regulation, which he sees in the EU’s current draft, could restrict programs like ChatGPT to the point where normal use would no longer be possible. This is because, under the EU draft, ChatGPT counts as software with high risk potential. That classification does not amount to a ban on the AI solution; instead, it means OpenAI would have to meet numerous requirements for its system, including the submission of a risk assessment. Malicious tongues might claim that the current draft underlines the classic European regulatory zeal.

Others, however, are likely to see the regulation as a correct and important step toward keeping artificial intelligence in check. Since the proposed rules are currently being debated worldwide, Altman suggests a compromise: he advises both the EU and the U.S. to steer a middle course between the two approaches. In his view, the end result should be a regulation that combines “traditionally European” strict requirements with the more permissive limits the U.S. has historically set. Otherwise, bringing everything into line will be a mammoth task, and not only for OpenAI. Developers who want to build on the AI solution would also face a major obstacle: even small tech companies think globally these days and simply do not have the resources to develop for two different markets.

EU Parliament backs comprehensive review

The background to all this is a draft law drawn up by the EU Parliament a few days ago. Several committees worked together to produce the first cornerstones of an AI regulation. The draft contains fairly strict rules for AI systems developed to process large amounts of data. Operators of such applications are to ensure that risks to social foundations such as democracy, security, the rule of law and fundamental rights are minimized accordingly. Under certain circumstances, operators of services such as ChatGPT may also be required to have their systems audited by independent experts.

On top of that, the regulation stipulates that companies like OpenAI must be highly transparent about the foundations of their AI. In particular, proprietary training data is to be documented in detail and made available to the public. Altman apparently has no objection to the transparency itself, quite the contrary. During his appearance in London, he emphasized that users of programs like ChatGPT should be given extensive access. If he has his way, a new authority should be founded for this purpose, tasked with thoroughly testing AI base models such as the one behind ChatGPT.

While this is still only being discussed in the rest of the world, such an investigation is already in full swing in the EU. The European Data Protection Board (EDPB), for example, is currently examining ChatGPT with a task force set up specifically for this purpose. The trigger for setting up this discussion platform was a temporary ban on ChatGPT imposed by the Italian data protection authority. A ban was subsequently discussed in Germany as well. In the meantime, however, the service is once again available throughout the EU. Nevertheless, the task force wants to agree on a common strategy for dealing with such AI base models.

Is the EU losing out on ChatGPT and co.?

Experts agree that AI solutions such as ChatGPT will shape not only the tech world but also our society in the coming years. In the world of work in particular, such base models could drive far-reaching transformation. Microsoft gave a first glimpse of what this means for ordinary users at its developer conference: Windows Copilot is based on the same technology as ChatGPT and is available to users at any time. If the practical helper really delivers what Microsoft promises, the US tech company’s $10 billion investment will certainly have paid off. At the same time, OpenAI and ChatGPT are just the beginning. It is no secret that other well-known tech giants, including Google and Amazon, are working on their own solutions.

It is therefore all the more important that the EU perhaps reconsiders the very strict rules it is currently planning. The release of Google Bard makes clear how the responsible developers perceive them. The search engine giant’s ChatGPT alternative recently went online in 180 countries, yet not a single EU country is among them. Google apparently regards the current discussion in the EU Parliament as problematic. From the point of view of Google CEO Sundar Pichai, a release is simply too risky at the moment: given the uncertain legal situation, the company does not want to launch its in-house AI rashly. Not yet, at least. According to the search engine giant, it intends to launch Bard in this country once the final AI regulation has come into force. And the dedicated AI rules are not the only regulations it plans to observe.

Google wants to comply with laws

On top of that, Google promises to comply with other regulations such as the Digital Services Act (DSA), which entered into force in November 2022, and the General Data Protection Regulation. At least, this is what Pichai announced during a meeting with EU Internal Market Commissioner Thierry Breton. Google’s statements seem somewhat implausible in view of the various data protection and competition violations the tech company has committed in the EU in recent years. In January, for example, the U.S. company received a warning from the German Federal Cartel Office over its questionable processing of user data. A real remedy for this problem will probably only come with a successor to the Privacy Shield, through which the EU and the USA aim to find a common line on data protection. The EU Commission is currently working on the corresponding adequacy decision.

Collaboration between politics and the tech industry

As Thierry Breton told TechCrunch, efforts are being made behind the scenes to work closely with experts from the tech industry. After all, they know best what scope AI base models will have in the near future. He also said he would like to reach an agreement with Google so that Bard can be used on European computers even before the AI regulation comes into force. This clearly illustrates the balancing act EU policymakers have to perform: on the one hand, they want to protect the population from mass violations of data protection and copyright; on the other, Brussels is of course aware that AI will be a topic that strongly shapes our society in the future.

The EU does not want to be left behind and, above all, does not want to make itself unattractive to important, revenue-generating tech companies. Accordingly, it has agreed to negotiate a so-called AI Pact as an interim solution. This is intended to take effect more quickly than the AI Regulation and likewise establish rules of conduct for developers of AI base models, albeit only on a voluntary basis. Once again, cooperation between the EU and the USA will play a major role: together, they want to draw up standards for artificial intelligence. This was announced by Margrethe Vestager, the European Commission Vice President responsible for digital affairs, at the G7 summit in Hiroshima.

Simon Lüthje

I am co-founder of this blog and am very interested in everything that has to do with technology, but I also like to play games. I was born in Hamburg, but now I live in Bad Segeberg.
