Recently, under a post about the fate of truth in 2023, I included (BTL) a link to a bot-written article on the issues around the use of AI in the military; and commenter Jim opined:
Most probably serious development is going on to specialise in legal argument, military strategy, economic planning.
I said we'd come back to military applications, which do indeed proceed apace. Many in the military (in many countries) hope for AI and other computer-based solutions to a whole range of challenges - not least because they think they'll be a lot cheaper, in both blood and treasure. Of course, from time to time the 'smart', asset-lite approach comes badly unstuck at the hands of determined infantrymen, as Ukraine taught the lazy Putin when his attempt at a 'smart' lightning campaign failed last year. (He seriously fell for the plan to persuade Ukrainian officers to surrender by texting them all a threatening message on their personal 'phones, FFS.)
That said, just as fast-learning computers will cheerfully thrash anyone at chess nowadays, they will also thrash any fighter pilot in simulated dogfights. Yup, they can get right inside any human decision-loop, and destroy the mortal sluggard with remorseless efficiency. Who doesn't want drones like that?
But of course, issues arise about less-than-perfect "decision making" even by these computerised paragons. How are they to be employed? One answer is: in some applications, perfection never has been expected, and isn't necessary - just a very good probability of turning in a very good result in very quick time. The rest is just, well, fortunes of war - casualties & all. Always has been.
Here's an example from my (somewhat dated, but recently updated) personal experience. It's trivial enough in its own way, and won't surprise anyone: but it carries some lessons. Starting around the time of the First Gulf War, a phenomenon that was long anticipated became real: total and utter data overload in the sphere of aerial imagery (inter alia). More imagery was being produced, of areas of very high significance (e.g., known to harbour Scud missile launchers), than could be fully processed by the available highly-skilled analytic personnel. Enter software that could run its unblinking eye over the coverage and identify + highlight straight lines, rectangles and circles that are of a size likely to be of interest. There aren't too many of them occurring in nature: they'll almost always be man-made. The human could then zoom in on what the computer had found, and discard those that were false positives. Had the computer missed a few? Maybe - but so what: they were never going to be found anyway! And the next version of the software would be better.
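The idea behind that first-generation screening software can be sketched in a few lines. This is emphatically not the actual software - just a toy illustration of the principle: long straight runs of pixels rarely occur in nature, so flagging them gives a human analyst a short list of likely man-made features to inspect, false positives and all.

```python
# Toy sketch (hypothetical, not the real system): flag long straight
# runs of "on" pixels in a binary image as candidates for human review.
# Long straight edges are rare in nature, so they're usually man-made.

def find_line_candidates(grid, min_length=4):
    """Return (row, col, direction, length) for each straight run
    of 1s at least min_length pixels long."""
    rows, cols = len(grid), len(grid[0])
    candidates = []
    # Horizontal runs
    for r in range(rows):
        c = 0
        while c < cols:
            if grid[r][c]:
                start = c
                while c < cols and grid[r][c]:
                    c += 1
                if c - start >= min_length:
                    candidates.append((r, start, "horizontal", c - start))
            else:
                c += 1
    # Vertical runs
    for c in range(cols):
        r = 0
        while r < rows:
            if grid[r][c]:
                start = r
                while r < rows and grid[r][c]:
                    r += 1
                if r - start >= min_length:
                    candidates.append((start, c, "vertical", r - start))
            else:
                r += 1
    return candidates

# A 6x8 "image": one long horizontal feature (a runway, say) plus noise.
image = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 1, 0],   # long straight run -> flagged
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],   # isolated pixels -> ignored
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
print(find_line_candidates(image))  # -> [(1, 1, 'horizontal', 6)]
```

The real systems would have used something far more robust (edge detection plus Hough-style transforms, say), but the division of labour is the same: the machine does the exhaustive scanning, the human does the judging.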
To make a long story short, many learning-cycles later, computers have now been trained to identify from aerial imagery, and put a name to - with high success rates - specific types of equipment: any ship or aircraft whatsoever, and quite a lot of vehicles. Does camouflage still work? Of course it does, sometimes. Are there sometimes errors? Yup (and the humans are quite prone to this, too). But 95% accuracy (say), carried out very fast, is a very acceptable result: as Patton said, a good plan violently executed now is better than a perfect plan next week. (In due course, 96% will be better ... they're learning all the time.)
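A back-of-envelope sum shows why "95% and fast" beats "99% and slow". The numbers below are entirely hypothetical, chosen only to illustrate the trade-off: throughput dominates when there's far more imagery than analysts.

```python
# Hypothetical numbers, purely illustrative: a 95%-accurate automated
# screener processing 1,000 images/hour vs a 99%-accurate human analyst
# managing 20 images/hour, with real targets in 1% of images.

def expected_finds(images_per_hour, accuracy, target_rate=0.01, hours=1):
    """Expected true detections: throughput x target rate x accuracy."""
    return images_per_hour * hours * target_rate * accuracy

machine = expected_finds(1000, 0.95)  # 9.5 expected finds per hour
human = expected_finds(20, 0.99)      # 0.198 expected finds per hour
print(machine, human)                 # -> 9.5 0.198
```

On those (made-up) figures the machine surfaces nearly fifty times as many real targets per hour, despite being the less accurate of the two - which is the whole argument in miniature.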
That's just one micro application in the military. The point is: perfection is not required. In the fog of war, something very good and very fast is highly desirable. AI is going to provide a lot of that.