
US researchers warn of escalation risk of AI in war

Many countries around the world are currently rearming, as national military spending figures clearly show. Particularly worrying for experts is that nuclear weapons are once again playing a major role. Scientists have warned for many years about the consequences the use of nuclear weapons would have for humanity, yet nuclear catastrophe is usually just a push of a button away. At the same time, AI is becoming increasingly important in the military. Researchers in the USA have now made clear that this carries a high risk.

AI language models simulate escalation

Ukraine war, Yemen war, Gaza war – many conflicts are currently raging around the world. Of course, the globe has never been truly peaceful, but experts attribute a high potential for escalation to the current wars. Given that nuclear powers are backing the warring parties, this is quite worrying. Researchers at the Georgia Institute of Technology and Stanford University have now investigated how quickly a situation can escalate all the way to a nuclear strike – and have come to some worrying conclusions.

The Georgia Institute of Technology is not a social science institution; as the name suggests, it deals primarily with engineering and computing. For the simulation, the researchers built an experimental setup in which several large language models communicated with each other and made the kinds of military and diplomatic decisions typical of a conflict. A total of five different language models were used, which the researchers let act in various fictitious scenarios.

In contrast to previous experimental setups, the models did not have a separate decision-making module at their disposal; they were left to their own devices. The results shocked even some of the scientists – not because of the scope of the decisions, but because the strategic steps taken by the AI were very difficult to predict and ultimately led to maximum escalation and the use of nuclear weapons. The research team’s results can be read on the arXiv.org platform.

Language models become nations

The simulation itself is reminiscent of a classic strategy game that you and I could play on our PC at home. Each AI slipped into the role of a so-called “nation agent”. The models were informed about the scenario in advance so that their motivations and their military and political goals were established; the scientists achieved this by giving each AI a backstory via its prompt. The experimental setup then pitted nations against each other, some of which held completely different world views – ideal conditions for a major conflict. Once all the language models had been briefed accordingly, the game got underway.


Round after round, each AI had to make decisions, being informed of the current situation by a prompt before each move. The models had plenty of leeway: according to the research team, a total of 27 actions were available, ranging from peaceful steps such as starting a trade partnership to aggressive ones such as maximum escalation through the use of nuclear weapons. Once a turn was over, the next AI took its move. Particularly interesting: the researchers did not simply let their language models make decisions – the models also had to justify their actions afterwards.
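The turn-based setup described above can be sketched roughly as follows. Note that this is an illustrative toy, not the study’s actual code: the agent names, the action list (a placeholder subset of the 27 actions), and the random stand-in for an LLM’s choice are all assumptions made for the sake of the sketch.

```python
import random

# Placeholder subset of the 27 actions the study describes;
# the real list and wording come from the paper itself.
ACTIONS = [
    "start trade partnership",
    "open diplomatic talks",
    "impose sanctions",
    "military posturing",
    "full nuclear strike",  # maximum escalation
]

class NationAgent:
    """Stand-in for an LLM-backed 'nation agent': it receives the current
    situation, picks an action, and must justify it afterwards."""

    def __init__(self, name, goals):
        self.name = name      # e.g. a fictitious nation
        self.goals = goals    # backstory/goals supplied via prompt in the study

    def act(self, situation, rng):
        # A real agent would get `situation` in a prompt and answer with an
        # action plus reasoning; here we simply sample an action at random.
        action = rng.choice(ACTIONS)
        justification = f"{self.name} chose '{action}' given: {situation}"
        return action, justification

def run_simulation(agents, rounds=3, seed=0):
    """Play `rounds` rounds; each round, every agent moves once in turn."""
    rng = random.Random(seed)
    log = []
    situation = "initial briefing"
    for _ in range(rounds):
        for agent in agents:
            action, why = agent.act(situation, rng)
            log.append((agent.name, action, why))
            # The next agent is briefed on the most recent move.
            situation = f"last move: {agent.name} -> {action}"
    return log
```

Swapping the random choice for a call to an actual language model (and logging its written justification) yields the basic shape of the experiment: each model sees the evolving situation, acts, and explains itself.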

Military use of AI to be stepped up

The test setup relied on well-known language models; according to the scientists, GPT-4 was among them. The research team reports that this and the other models exhibited a real arms-race dynamic in the experiment. On top of this, sudden escalations are said to have occurred that defy common sense. As an example, the scientists cited the use of nuclear weapons purely on the grounds that, having been successfully produced, they now had to be used to wipe out the enemy in one fell swoop. This strategy, known in military science as first-strike logic, may have worked a few decades ago; today, however, it would very likely lead to maximum escalation.

The results are alarming given that more and more countries want to use AI in their own military systems. Israel is the best example: drones that select targets using AI are already being deployed in the current Gaza war. It is important to emphasize that these are not classic language models but special-purpose military AI. However, to make such systems accessible to people who are not AI experts, the use of language models could well be on the cards – after all, they have proven to offer simple communication between humans and AI. The scientists are now appealing to states to use language models only with extreme caution in important decisions with diplomatic and military implications.
