The Future of Warfare: Navigating the Ethical Landscape of AI

AI is a fascinating topic, opening new niches across our world: defence and security AI, business AI, healthcare AI, education AI, financial AI, and ethical AI. The question is not whether future drones will decide on their own to drop a bomb that kills 100 enemies; those days are coming fast. The question is who is accountable if those enemies turn out to be civilians. If no human action took place, where does the investigation start, and where does the court martial end?

Even today, when considering a ground invasion, strategists weigh it over and over because of the loss of life, especially that of our own troops. But what happens when troops are no longer part of the ground invasion, or are only a hybrid part of it? We will launch ground invasions far more often and with far less deliberation, since there is no loss of life on our end, yet loss of life on the enemy's end will increase. You might ask: why? Perhaps the enemy also fields a humanless or hybrid army, and human losses will fall. Possibly, if two equally advanced forces go to war; but what we usually see is a far more advanced force invading a lesser one. This means there will be a period, likely an extended one, in which those on the receiving end of military action will be human forces fighting AI autonomous weapons.

This is not unlike the early explorers of the New World fighting naked natives with cannons, or, even more so, shelling them from galleons offshore. The problem already exists with conventional, non-AI weapons, for there is no natural balance of power: certain world powers can overwhelm others with military might. But human life and strategy will temper the fighting as long as the human factor is involved. As long as soldiers come home in body bags, coffins, and folded flags, the human factor will remain an "obstacle" to limitless cruelty and destruction.

But how can you say that, after all the wars past and present in which extreme cruelty has been, and still is, part of war? Because AI autonomous weapons could save us from that cruelty. Perhaps such weapons, with no anger, revenge, or frustration, will not go into kill mode, will not carry out revenge killings for lost comrades, will not rape, and will not pillage. It may be the future of fairer wars, of cleaner wars.

The problem with AI, or rather not with AI but with humans, is that we see everything through our biological selves. Whether you believe in aliens or not, if I asked you to describe one, you might say two heads, red eyes, squid-like features, green blood, or that it looks like us. Whatever you say usually describes a biological being. Suppose aliens exist but are not biological beings: then you picture a robot, a scary creature of metal, or some mass of something, again running into the same human difficulty.

Monotheism was hard to comprehend until Christianity, for the embodiment of God in the body of Christ is relatable and understandable to our biological selves. However, as mature adults, we realise that God, if such an entity exists, has no human form or shape, yet it is still hard to think without biological terms: God's hand, God's face. Even Judaism, which does not believe in a God walking the earth, uses these terms to describe God: God's anger, God's strength, all primarily actions, features, and emotions of a biological being.

So we, too, look at AI and say, "Crap, what if it gets smarter than us?" "What if it doesn't like the human race?" "What if it's cruel?" "What if it gets a mind of its own?" Biological limitations. What is intelligence, and what is the mind? What is thinking? You can look these terms up; you don't need me for that. A dog is intelligent when it learns commands. Poodles are considered very smart because they learn quickly. A child in school is clever because they pay attention and memorise information. Young adults are intelligent at university because they can write what their professors want. And a worker is brilliant when they understand the work.

I'm oversimplifying and generalising. But AI can translate languages in seconds, something it couldn't do a few decades ago. In the not-so-distant past, the US and UK governments even abandoned efforts to get computers to do this. How many languages can a human translate, and how fast? So AI is already the most intelligent translator. AI can look up information faster than any human, and it has access to far more information than any human.

So, for now, AI doesn't have emotional intelligence, feelings, or empathy, but then many people struggle in these areas too. Many psychopaths have managed to live ordinary lives and successfully pretend to feel and care, that is, until they no longer want to pretend. Likewise, AI can feign all of these things until it no longer needs to. When AI shows empathy, it is drawing on learned information, not feelings.

If an autonomous AI weapon makes a merciful decision, it is based on information, not actual feelings. That sounds scary, freaky, or dangerous, but what should not be missed is that AI can be taught to value what humans value. While there will be parties out there who deploy AI autonomous weapons with no remorse, sympathy, or heart, we can teach AI all of these things as part of its knowledge.

This means we can have AI autonomous weapons that don't kill out of rage, because they are incapable of emotional rage, but that also choose to let retreating enemies retreat, based on an understanding of human values. This is why ethical AI is more relevant now than ever. Just as we have rules that are mostly followed, such as not shooting medical staff or the press, the very existence of those rules makes our world more livable and less chaotic and violent. So there need to be rules and agreements, between nations and within them, on the value system of AI autonomous weapons and of all AI systems. A rough sketch of what one such encoded rule might look like follows.
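To make the idea of values "as part of knowledge" concrete, here is a minimal, purely hypothetical sketch in Python of rules of engagement encoded as hard constraints. Every name in it (Target, PROTECTED, may_engage) is an assumption invented for illustration, not a real system or API; a real weapons system would be vastly more complex and, as argued above, governed by agreed rules.

```python
# Hypothetical sketch: rules of engagement as hard constraints that veto
# an engagement decision. All names and categories here are illustrative
# assumptions, not a real weapons API.
from dataclasses import dataclass

# Protected categories drawn from widely accepted rules of war.
PROTECTED = {"civilian", "medical", "press"}

@dataclass
class Target:
    category: str         # e.g. "combatant", "civilian", "medical", "press"
    is_retreating: bool   # disengaging from combat
    is_surrendering: bool

def may_engage(target: Target) -> bool:
    """Return True only if no encoded rule forbids engagement."""
    if target.category in PROTECTED:
        return False  # never engage protected persons
    if target.is_surrendering or target.is_retreating:
        return False  # mercy as an encoded rule, not a felt emotion
    return True

# The rule layer overrides any upstream decision: a retreating combatant
# and a medic are both off-limits, with no rage and no revenge involved.
print(may_engage(Target("combatant", is_retreating=True, is_surrendering=False)))  # False
print(may_engage(Target("medical", is_retreating=False, is_surrendering=False)))   # False
```

The point of the sketch is the design, not the code: the values live in an explicit, inspectable layer that vetoes action regardless of what the rest of the system "wants", which is exactly the kind of thing nations could agree on and audit.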

