Wednesday 17 May 2023

AI applications in the military

Recently, under a post about the fate of truth in 2023, I included (BTL) a link to a bot-written article on the issues around the use of AI in the military; and commenter Jim opined: 

Most probably serious development is going on to specialise in legal argument, military strategy, economic planning.

I said we'd come back to military applications, which do indeed proceed apace.  Many in the military (in many countries) hope for AI and other computer-based solutions to a whole range of challenges - not least because they think they'll be a lot cheaper, in both blood and treasure.  Of course, from time to time the 'smart', asset-lite approach comes badly unstuck at the hands of determined infantrymen, as Ukraine taught the lazy Putin last year when his attempt at a 'smart' lightning campaign failed.  (He seriously fell for the plan to persuade Ukrainian officers to surrender by texting them all a threatening message on their personal 'phones, FFS.)

That said, just as fast-learning computers will cheerfully thrash anyone at chess nowadays, they will also thrash any fighter pilot in simulated dogfights.  Yup, they can get right inside any human decision-loop, and destroy the mortal sluggard with remorseless efficiency.  Who doesn't want drones like that?

But of course, issues arise about less-than-perfect "decision making" even by these computerised paragons.  How are they to be employed?  One answer is: in some applications, perfection never has been expected, and isn't necessary - just a very good probability of turning in a very good result in very quick time.  The rest is just, well, fortunes of war - casualties & all.  Always has been.  

Here's an example from my (somewhat dated, but recently updated) personal experience.  It's trivial enough in its own way, and won't surprise anyone: but it carries some lessons.  Starting around the time of the First Gulf War, a phenomenon that was long anticipated became real: total and utter data overload in the sphere of aerial imagery (inter alia).  More imagery was being produced, of areas of very high significance (e.g., known to harbour Scud missile launchers), than could be fully processed by the available highly-skilled analytic personnel.  Enter software that could run its unblinking eye over the coverage and identify + highlight straight lines, rectangles and circles that are of a size likely to be of interest.  There aren't too many of them occurring in nature: they'll almost always be man-made.  The human could then zoom in on what the computer had found, and discard those that were false positives.  Had the computer missed a few?  Maybe - but so what: they were never going to be found anyway!  And the next version of the software would be better.
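For the terminally curious, here's a minimal sketch of that shape-flagging idea in modern, open-source terms.  It's my own illustration, emphatically not the actual software: the OpenCV library is real, but the file name and every tuning parameter here are invented.

```python
# Sketch: flag straight lines and circles in an image for a human to triage.
# Assumes OpenCV (pip install opencv-python) and a hypothetical 'frame.png'.
import cv2
import numpy as np

img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Straight lines above a minimum length: rare in nature, common in man-made scenes.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)

# Circles within a plausible size band.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=120, param2=40, minRadius=10, maxRadius=80)
if circles is not None:
    for x, y, r in circles[0]:
        cv2.circle(img, (int(x), int(y)), int(r), (255, 0, 0), 2)

# The analyst now zooms in on the highlights and discards the false positives.
cv2.imwrite("flagged.png", img)
```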

To make a long story short, many learning-cycles later, computers have now been trained to identify from aerial imagery - and put a name to, with high success rates - specific types of equipment: any ship or aircraft whatsoever, and quite a lot of vehicles.  Does camouflage still work?  Of course it does, sometimes.  Are there sometimes errors?  Yup (and the humans are quite prone to those, too).  But 95% accuracy (say), delivered very fast, is a very acceptable result: as Patton said, a good plan violently executed now is better than a perfect plan next week.  (In due course, 96% will be better ... they're learning all the time.)
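The human-in-the-loop triage is easy to picture in code.  A minimal sketch - the Detection class, the labels and the threshold below are all hypothetical stand-ins, not any real system:

```python
# Hypothetical sketch: accept the classifier's confident calls automatically,
# queue the rest for a human analyst.
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    label: str         # e.g. "frigate", "transport aircraft"
    confidence: float  # model's probability for that label

def triage(detections, threshold=0.95):
    """Split detections into auto-accepted calls and a human-review queue."""
    accepted, review = [], []
    for d in detections:
        (accepted if d.confidence >= threshold else review).append(d)
    return accepted, review

batch = [
    Detection("img_001", "frigate", 0.98),
    Detection("img_002", "transport aircraft", 0.97),
    Detection("img_003", "fuel truck", 0.61),   # camouflage?  the analyst decides
]
accepted, review = triage(batch)
print(f"auto-accepted: {len(accepted)}, queued for analyst: {len(review)}")
```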

That's just one micro application in the military.  The point is: perfection is not required.  In the fog of war, something very good and very fast is highly desirable.  AI is going to provide a lot of that.

ND

15 comments:

Caeser Hēméra said...

The passive uses won't be a problem, but an absolute boon.

It's the more active uses we have to worry about: we've already had the probable attack by a Kargu drone _without_ human decision-making being involved, and the worry is that without putting some rules in place we're going to start seeing plenty of unintended consequences - along with needing to be careful how we ask the AI bots to do things.

Of course the rules won't be followed by everyone, but we can hope they're hoist by their own petard and that we've good enough defences.

With the costs of targeted LLMs and drones falling, I have to wonder how long it'll be before we see cities adopting defences. This decade? Next decade?

Anonymous said...

Somewhat OT, but it seems to be drying out in Ukraine - the Somme-like conditions which made off-road movement very difficult are improving.

On topic: a drone can do things that would make a pilot pass out from the g-forces.

The tide will soon turn now the wunderwaffen Patriots have arrived ;-)

Nick Drew said...

Anon - yes, but in the AI-vs-pilot trials they normalise for g-force (i.e. prohibit the AI from those tight turns etc). Apparently the AI deploys novel tactics - just like in chess and Go.

Wildgoose said...

As far as Go (Baduk/Wei-chi) is concerned, the tactics aren't just "novel", they are ground-breaking - and AlphaGo's beating of Lee Sedol was simply sensational. AlphaGo itself had been trained on huge numbers of games played by top human players and was itself then surpassed by AlphaZero which was trained by simply playing itself. The extremely aggressive contact-heavy AI style of play has forced the top players to completely re-evaluate how they play.
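A minimal sketch of that self-play idea - nothing like DeepMind's actual method (no neural network, no tree search), just a toy value table that learns the game of Nim purely by playing against itself, with no human games at all:

```python
# Toy illustration of learning by self-play: a shared value table plays Nim
# against itself and reinforces the winner's moves.
import random
from collections import defaultdict

PILE, ACTIONS = 10, (1, 2, 3)     # take 1-3 sticks; taking the last stick wins
Q = defaultdict(float)            # Q[(sticks_left, action)] -> estimated value

def choose(sticks, eps=0.1):
    """Epsilon-greedy move from the shared value table."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

def self_play_episode(alpha=0.5):
    """Both 'players' share the table; winning moves get +1, losing moves -1."""
    sticks, history = PILE, []
    while sticks > 0:
        a = choose(sticks)
        history.append((sticks, a))
        sticks -= a
    for i, (s, a) in enumerate(reversed(history)):   # the last mover won
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += alpha * (reward - Q[(s, a)])

for _ in range(20000):
    self_play_episode()

# The table typically rediscovers the classic strategy by itself:
# leave the opponent a multiple of 4 sticks.
print({s: max([a for a in ACTIONS if a <= s], key=lambda a: Q[(s, a)])
       for s in range(1, PILE + 1)})
```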

ChatGPT4 may lack an "inner voice", but the same appears to be true of plenty of human beings as well.

So far, they are clever tools. But we are seeing plenty of signs of the arrival of an Alien Intelligence, not from the stars, but on our computer hardware.

I don't think the first thing we should do is hand it our weapons...

Nick Drew said...

I had it patiently explained to me recently why ChatGPT fabricates references and links when asked to write an article - it's because the predictive-text logic recognises there should typically be something like references or footnotes at the end: so it sticks in some (plausible-looking) refs / footnotes.  The articles it's been trained on all look like that: so that's what it churns out.

"But it isn't trying to deceive or mislead you - it's just doing its "what-might-come-next" predictive-text thing: IT ISN'T THINKING !"

The trouble with this attempt to be reassuring is that kids do exactly the same when they are "playing at writing an article" or similar.  They, too, look at the exemplars they have to hand, and notice these all come adorned with (in the case of articles) footnote-y things, or whatever - so they put some [meaningless] footnote-y things in, too.

And those very same kids eventually learn what a footnote or a reference really is, and then they put in "proper" ones ... hey, they're LEARNING
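To see how little "intent" is involved, a toy sketch: a bigram model trained on three invented reference strings will already emit plausible-looking, entirely fictitious citations - it is just continuing the pattern.

```python
# Toy bigram "predictive text": trained on invented reference strings, it emits
# citation-shaped output that cites nothing.  Real LLMs are vastly larger, but
# the failure mode is the same in kind.
import random
from collections import defaultdict

corpus = [   # punctuation pre-spaced so a naive split() tokenises it
    "Smith , J. ( 2019 ) . Drone swarms in modern conflict . Journal of Defence Studies .",
    "Jones , A. ( 2021 ) . Machine learning for target recognition . Journal of Military AI .",
    "Brown , K. ( 2018 ) . Autonomy and the decision loop . Review of Strategic Studies .",
]

follows = defaultdict(list)        # token -> every token seen after it
for ref in corpus:
    tokens = ["<s>"] + ref.split() + ["</s>"]
    for a, b in zip(tokens, tokens[1:]):
        follows[a].append(b)

def fake_reference(max_len=40):
    """Walk the table: each step just picks a token seen after the last one."""
    token, out = "<s>", []
    for _ in range(max_len):
        token = random.choice(follows[token])
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

print(fake_reference())
# e.g. "Smith , K. ( 2021 ) . Machine learning for target recognition .
# Journal of Defence Studies ." - looks like a citation; refers to nothing.
```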

Nigel Sedgwick said...

As Anonymous comments at 1.40pm: "On topic a drone can do things that would make a pilot pass out from the g forces." Yes; but so could and can the air-to-air missiles that have been in use for some decades.

And I struggle to follow Nick Drew's comment: "[yes,] but in the AI-vs-pilot trials they normalise for g-force (i.e. prohibit the AI from those tight turns etc)."  Surely the enemy will not so limit its AI, which is what matters against humans.

Best regards

Nick Drew said...

NS Of course! - hence my comment in the post.

what I was recounting was the calibration of the AI against human pilots on what one might term a level playing field (a rather inept idiom for aerial dogfights, but there we are)

of course, they don't stop there: they will turn the AI loose against the enemy at full strength, g-force and all

Bill Quango MP said...

1977.
When ChatbotAI was voiced by Robert Vaughn.
Terrific clip from a classic 70’s film.

https://youtu.be/EAFvwN50um8

Nick Drew said...

Here's an interesting piece in the FT, if you can penetrate the Wall

https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2?sharetype=blocked

Anonymous said...

https://static.rusi.org/403-SR-Russian-Tactics-web-final.pdf

RUSI on changing (improved) Russian military tactics.

jim said...

The FT article is quite interesting.  It raises the question 'what is intelligence'.  There were a lot of attempts at deductive-logic analysis as a means of 'understanding', but that did not do the trick.  But Google pointed the way to something like today's AI - a massive statistical analysis machine.  Which may be a bit like how the human brain works - I doubt we are doing trigonometry in our heads as we drive down the M40; more like some sort of guessing/experience game.

Which ties in with an intuitive idea of 'intelligence'. Intelligent people tend to get the right or a good answer a bit quicker than others. The real test has to do with situations - good exam technique is not by itself a marker of intelligence but does deliver a rich statistical background which may increase the chances of a good answer coming out of the grey slop inside a skull.

So, if your grey slop is fairly quick and well fed with nutritious experiences then you might with luck be regarded as intelligent. Do the same job with silicon and put it in a missile and you might have a dangerous toy.

Elby the Beserk said...

Blogger randomly deleting my posts again... let's see

Bill Quango MP said...
1977.
When ChatbotAI was voiced by Robert Vaughn.
Terrific clip from a classic 70’s film.

https://youtu.be/EAFvwN50um8

8:39 pm
=======================

Not to mention HAL 9000 in 2001: A Space Odyssey from 1968.  The company I worked for for 25 years - GEAC (library systems) - built their own machines.  The successor to the Geac 8000 was of course the Geac 9000.  Ran like the blazes...

jim said...

The education trade seems in a bit of a twitch about AI. ChatGPT could probably get a 2.1 degree whatever the subject, so what are poor old teachers and lecturers to do - let alone £400k vice chancellors. Indeed any diligent human student could probably do a perfectly good degree course using YouTube lectures. The only snag being there is no money in it for those who dish out the piece of paper.

Those who can, do. Those who can't, teach. Those who can't teach, teach teachers.

But what does seem a bit weak/missing is some kind of 'implication engine'. These do exist for certain subjects but a really good implication engine for politics and military force might be a good thing - except for those who inhabit the Pentagon and the MoD and the finance industry. Probably not long to wait.

Bill Quango MP said...

There is a very interesting observation in Max Hasting’s book on the Cuban missile crisis.

Curtis LeMay.  Maxwell Taylor.  US Pentagon top brass.  Both incredibly successful WW2 veterans.  Think General Patton if you don't know who they were.

Both were seriously proposing whacking the missiles in Cuba and dealing with whatever followed from the Soviet reaction.  World war was a serious option.

Hastings notes that although the meetings (which are all on tape) sound like Dr Strangelove, and the military seem to be warmongering madmen, it was precisely because BOTH sides had these sorts of tough characters in their top ranks that a nuclear threat was credible.  And hence, a deterrent was credible.

In other words, Theresa May’s ‘I’m leaving without a deal..I am..just you watch..this is your last chance to give me a little bit of a deal..no? ..well..I’m going now..in three….in two…one and three quarters…one and a half..I mean it..” fooled nobody.

But Johnson’s , was just credible enough not to risk the EU refusing to negotiate.

Can A.I. replicate this?  The illogical logic?

Is A.I.

jim said...

A decent implication engine - or a half-competent diplomat - would have seen that putting missiles in Turkey (and Italy) would provoke a response.  But JFDI - it looks macho and plays well at home.  Of course it was worth a try, and all that hardball stuff from LeMay et al was so much BS.  Because a nuclear bomb is a wonderful thing - until you get one back.  Very, very bad for future voting prospects, especially over such an obvious ploy; not a good look at all.  So the US was always going to have to withdraw, whether she knew it or not.  But it played well and looked good and macho.

Which leaves a worry over Ukraine: we won't do anything overt, just in case.  But one day someone might say a nuke will fix this - just a little one.  That would take us over the psychological threshold - like only putting it in a little way.  If we stuck to one round (or two) of tit-for-tat then I'm sure the US would do a Uvalde if it possibly could.  Then we are in trouble.  Having done it once (or twice) we might like to do it again; after all, nothing really, really bad happened the last time.

As for Brexit, even a pocket calculator could see it was a stupid idea with no bargaining position. Johnson just took the batteries out and said - look no response, we won.