Only a few years ago this question could be dismissed with a laugh and a shrug. Today? It's not so funny anymore. As AI pushes ever more boldly into our everyday lives (from writing emails to telling us what to watch in the evening), the world's armies are also putting AI to operational use.
Okay, but does a Terminator-style future really threaten us? Let's check who is deploying AI on the front line, what these systems can already do... and what can happen when we hand artificial intelligence too much power.
Who is testing AI in the army?
Let's start with no surprises: the US has spent years trying to be seen as having the world's most modern army. The Pentagon does not hide its plans – from drones with a "mind" of their own, through facial recognition systems, to so-called "decision-making AI" that helps commanders choose the best attack scenario.
What about the US's greatest economic rival, China? Of course they are implementing artificial intelligence in the military – and with a flourish. The Middle Kingdom is developing, among other things, programs for autonomous boats and aircraft that not only learn but can independently "assess the situation" (read: track and destroy a target). What's more, in recent weeks viral videos of advanced AI-driven military robots have circulated on social media – from swarm drones (those are not fake) to wheeled humanoid robots able to cross highly varied terrain.
Russia is also trying to field AI in its military. In 2021 (a year before the full-scale invasion of Ukraine), the Russian army showed off its Uran-9 combat robot system, although tests in Syria revealed that it sometimes "loses the signal" and "shoots where it shouldn't".
In Europe? Things are a little quieter, but Great Britain, France, Sweden and Germany are investing in so-called "digital defense" – mainly for detecting cyberattacks and managing battlefield data.
An aircraft that flies itself – tests of the Centaur system, i.e. autonomous fighters in action
If you remember Starscream from Transformers, the Swedish company Saab is working on something similar. Of course, it's not a humanoid robot capable of transforming into a plane, but an autonomous fighter.
Work on Project Beyond has been going on for several months. The tests use a modified Gripen E fighter equipped with an artificial intelligence system called Centaur. Although a safety pilot still sits in the tested aircraft, the simulations carried out this way are very promising.
During a simulated dogfight with a hostile fighter, Centaur integrated data from the on-board sensors in real time, then used it to control the flight and perform the appropriate air-combat maneuvers. Notably, Centaur also suggested to the pilot when (and from what distance) he should launch long-range missiles to hit the enemy effectively.
Different areas where AI is used in warfare
Artificial intelligence does not have to charge into an offensive with a rifle in hand (or rather, with a gun mounted on a chassis). For now, the spectrum of applications is wider than it seems:
- Combat drones – flying machines that can identify targets and eliminate them. Sounds disturbing? Because it is. Such drones are controlled by an algorithm and behave like a single organism – a bit like a school of fish that can change direction or react to threats all at once.
- Cyberwarfare – artificial intelligence can detect anomalies in real time, automatically respond to intrusions into systems and, at the same time, conduct attacks of its own.
- Logistics and data analysis – anticipating enemy movements, optimizing supply routes, modeling battle simulations. Less spectacular, but crucial.
- Command support systems – AI does not make decisions on its own (yet), but it can analyze thousands of scenarios in seconds and advise: "If you attack here, the probability of success is 78%."
Sounds like a board game? That's right – war is starting to resemble a very brutal game of chess, in which a human moves the pieces but AI suggests the moves.
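Where does a number like "78%" come from? One common approach is Monte Carlo simulation: play out the same engagement thousands of times with randomized conditions and count the wins. Below is a deliberately toy sketch of that idea – the formula, the parameters and the noise model are all illustrative assumptions, not any real military system:

```python
import random

def estimate_success_probability(our_strength, enemy_strength,
                                 terrain_bonus=0.0, trials=10_000):
    """Toy Monte Carlo estimate: simulate many randomized engagements
    and count how often 'our' side comes out on top.
    Every number here is illustrative, not a real combat model."""
    wins = 0
    for _ in range(trials):
        # Random noise stands in for uncertainty: morale, weather, luck...
        ours = our_strength * (1 + terrain_bonus) * random.uniform(0.7, 1.3)
        theirs = enemy_strength * random.uniform(0.7, 1.3)
        if ours > theirs:
            wins += 1
    return wins / trials

# E.g. attacking with a slight numerical edge and favorable terrain:
p = estimate_success_probability(110, 100, terrain_bonus=0.1)
print(f"Probability of success: {p:.0%}")
```

Real decision-support systems obviously model far more than two strength numbers, but the principle – replacing one guess with thousands of simulated outcomes – is the same.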
Is this just a theory or our real future?
Here we get to the heart of the matter: AI is already at war. According to a 2021 UN report, a Kargu-2 drone produced by the Turkish company STM may have autonomously killed a man in Libya – no order, no pilot; it simply decided he was an "enemy." The UN began to sound the alarm. Only a year later Russia attacked Ukraine, and the war, now running for over three years, has become a testing ground for new AI-based solutions.
A striking example is the record sniper shot by a Ukrainian soldier using the Alligator rifle. Thanks to drones and AI algorithms (which calculated the bullet's trajectory, factoring in the weather conditions), the sniper hit a Russian soldier from four kilometers away.
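To get a feel for why a computer helps at such distances, here is a heavily simplified flat-fire estimate of gravity drop and wind drift. It assumes a constant bullet speed (no air drag) and a linear wind model – deliberate simplifications; real ballistic solvers account for drag, air density, spin drift and even the Coriolis effect, and have nothing to do with the actual Ukrainian system:

```python
G = 9.81  # gravitational acceleration, m/s^2

def aim_corrections(distance_m, muzzle_velocity_ms, crosswind_ms):
    """Very rough long-range shot corrections.
    Assumes constant bullet velocity (no drag) - a deliberate
    simplification; real ballistic solvers are far more complex."""
    flight_time = distance_m / muzzle_velocity_ms   # seconds in the air
    drop = 0.5 * G * flight_time ** 2               # meters of bullet drop
    drift = crosswind_ms * flight_time              # meters of wind drift
    return flight_time, drop, drift

# A hypothetical 4 km shot at ~980 m/s with a 3 m/s crosswind:
t, drop, drift = aim_corrections(4000, 980, 3.0)
print(f"Flight time ~{t:.1f} s, drop ~{drop:.0f} m, drift ~{drift:.1f} m")
```

Even this crude model yields corrections measured in tens of meters – far beyond what a human can eyeball, which is exactly the gap the algorithms close.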
But is the use of AI in the military all upside? Autonomous systems can, of course, limit losses among soldiers and reduce chaos on the battlefield. But what if the algorithm is the one that makes the mistake? Then we cannot blame a "bad order" or "faulty data." And what about hacking such a system? If someone takes control of an AI directing fire, we have a recipe for real chaos – especially if they manage to seize the algorithm in many places at once.
Truth is always the first casualty of war
Okay, but what if we accept that the use of artificial intelligence in hostilities goes beyond drones and autonomous planes? This is where it gets interesting, because generative AI is also being used effectively by the special services of specific countries. Flooding the network with fake news has become a way to steer social moods, build specific narratives and stir up the population.
Such action in the shadows can be more effective today than ever before. A good example is the recent events around the peace talks in Washington between Russia, Ukraine and representatives of EU countries. The meeting on August 18 had not even started, yet the platform X was flooded with a wave of generated graphics showing, for example, EU leaders sitting politely in a corridor, or Volodymyr Zelensky kneeling before Ursula von der Leyen and handing her a ring.
Generative AI tools are so advanced today that creating such a deepfake takes literally moments. Special services are well aware of this, which is why the war is also fought on social media – reshaping the narrative, the framing of a given issue, or setting one group against another. If AI is already used this way today, what will happen in, say, 10 years? Geopolitics will certainly be dominated by artificial intelligence.
A thought experiment – what might a war run by artificial intelligence look like?
Imagine the year 2045. A conflict over access to water in Central Asia. Neither side sends people to the front. Instead:
- Thousands of drones patrol the borders, search the area and scan faces.
- Autonomous tanks move like ants, following GPS data and orders from the command system.
- AI analyzes satellite data, predicts enemy movements and makes attack decisions – before a human even realizes something is happening.
- And in the background a quiet cyberwar rages – energy grids, communication satellites and even factories become the targets of digital sabotage.
- Combat androids open fire on hostile combat units – also androids.
And no one has even formally declared war, because there are no soldiers on the border – only autonomous AI systems, which are not citizens and are not conducting operations against citizens of a foreign country. Sounds like sci-fi? Two decades ago the possibility of talking to artificial intelligence the way you talk to another person (e.g. using Voice Mode in ChatGPT-5) also sounded like sci-fi, and today it is our reality.
Is AI in the military a war at the click of a button?
The growing use of AI in the military is no surprise – after all, military technology has always trickled down to civilian use, not the other way around. Still, the prospect that artificial intelligence algorithms may soon decide the life and death of soldiers is hard to shrug off.
On the other hand, aren't human decisions often far crueler? Maybe a properly trained AI will be much more humane in its decisions than people are? News from modern battlefields will slowly dispel these doubts.
