Monday, 8 July 2024

AI evolves - powerfully - in predictable directions

When ChatGPT burst onto an unsuspecting world in late 2022 there was a rush of activity in several spaces: wannabe AI rivals hastened to proclaim MeeeeTooooo; doomsayers announced that AI / LLMs on this scale marked the end for the professional classes and white-collar employment in general; and a certain class of populist (or would-be populist) 'thinkers' insisted that machines could never match good ol' Human Beans, that there was no 'machine intellect' at work, move-along-nothing-to-see-here etc.  Meanwhile, Joe Public amused himself by getting some truly extraordinary results (well, staggering results, actually) from these free tools, as well as tricking them into hilarious bishes, seized upon eagerly by the 'thinkers' as grist to their mill.

Over the following months, a number of significant things have happened in this fast-moving sphere.

1.  Scale - and energy use!

The MeToo contingent is large (Amazon, MS, Meta, Alphabet etc., and without doubt several Chinese concerns), well-funded, and very, very keen not to get left behind.  They are truly piling in.  From my perspective, the interesting feature of this is the sheer amount of energy these systems consume - and we've only just started.  All these behemoths (the Western ones, that is) claim to want to use only 'green' electricity but they've already hoovered up any of that on the market, plus as much dedicated nuke capacity as they can negotiate for - and it's not remotely enough: they'll remain fossil-fuel-powered for many a long moon.

Some indications of the scale.  First, the current definition of a 'worldscale' data centre represents 1 GW of electricity demand - and you just know that's a number that will be eclipsed as soon as possible.  Secondly, the last generation (i.e. 2023!) of LLMs cost about $300-400m to 'train'; the next generation (2024) will take over $1bn.  A high proportion of that dosh is spent on high-intensity energy consumption.  And that's just for now.  Ongoing energy demand won't be negligible; and newer, yet larger LLMs will be leapfrogging each other for a long while to come.  (I wonder what the practical limiting factor will turn out to be?  And how long it will take to debottleneck it?)  One thing's for sure: they ain't gonna be constrained by sticking with green electricity.
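
For a sense of what that 1 GW figure means, here's a back-of-envelope sum (assuming, purely for illustration, that the centre runs flat out all year; the UK comparison is my own, not taken from any of the numbers above):

```python
# Back-of-envelope: annual energy use of a 1 GW 'worldscale' data centre,
# assuming (illustratively) it runs flat out, 24/7, all year round.
power_gw = 1.0
hours_per_year = 24 * 365                        # 8,760 hours
annual_twh = power_gw * hours_per_year / 1_000   # GWh -> TWh
print(f"{annual_twh:.2f} TWh/year")              # ~8.76 TWh: very roughly 3% of annual UK electricity demand
```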

2.  Correction of early failings

One of the things early 'critics' of LLM output gleefully seized upon was that written answers to carefully posed 'prompts', whilst being fairly good in their own right, had the amusing habit of including footnotes pointing to entirely fictitious, made-up, non-existent references.  It's a simple and readily understandable consequence of how LLMs work: they generate plausible-looking text, and a plausible-looking citation is just more text.

This never impressed me as a serious or definitive failing.  OK, these early public LLMs "knew" that footnotes were expected, and made up some plausible-looking ones accordingly.  One might imagine a bright primary-school kid doing something similar when trying to imitate the style of a realistic-looking journal article - and being thought quite imaginative for so doing!  Ooh, and you've even put in some footnotes and www-links!  But the next version of that kid, a few years later, would know what a real footnote-link was all about.  What, conceptually, could be easier for a putative next-generation LLM than to "remember" where it learned its text-predictive stuff from, and list it appropriately in the footnotes?

Except, this is a fast-moving space!  And that next-gen LLM already exists: it's a new one called Perplexity, which retrieves real sources and cites them.  Kerching!  Next problem?
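
In sketch form, that fix is retrieval-grounded generation ('RAG'): fetch real documents first, have the model answer only from them, and build the footnotes from what was actually fetched, so none of them can be fictitious.  A toy illustration follows - the corpus, URLs and keyword scoring are all invented for the example, and a real system would use embeddings and hand the retrieved text to the LLM:

```python
# Toy sketch of retrieval-grounded answering ("RAG"): footnotes are built
# only from documents actually retrieved, so none can be made up.
# The corpus, URLs and scoring here are invented purely for illustration.

corpus = {
    "https://example.org/turing-1950": "computing machinery and intelligence imitation game",
    "https://example.org/llm-basics": "large language models predict the next token in text",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank sources by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(corpus, key=lambda url: -len(words & set(corpus[url].split())))[:k]

def answer(query: str) -> str:
    sources = retrieve(query)
    # A real system would now hand the retrieved text to the LLM and constrain
    # it to answer from those sources; here we just attach the footnotes.
    notes = "\n".join(f"[{i}] {url}" for i, url in enumerate(sources, 1))
    return f"(answer grounded in the sources below)\n{notes}"

print(answer("how do large language models predict text"))
```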

3.  Never as good as humans...?  The Turing Test

Folks, without much fanfare the Turing Test has now been passed.  A set-piece, properly-controlled experiment was run at the University of Reading (a cunning plot by one department conning another!) in which exam answers submitted under fake student identities - actually written by a bot - went undetected and scored higher average marks than those of the real thicko human students.  And there's the Turing Test delivered, haha!  This is so earth-shattering, the worrying classes haven't even responded to it yet!

So now I think we need a creative round of updated worries, to keep up with these exciting developments.

ND

11 comments:

djm said...

So

Starmer fronts the primordial Wefminster swamp just as AI gets a second wind......

There are no coincidences.....

Wildgoose said...

A far more interesting question is how many human beings have intelligence on a similar level to the advanced LLMs.

There was a meme a while back about how many (primarily left-wing) people were essentially "NPCs", (Non-Player Characters). It clearly struck a chord, which is why it was quickly shut down by the Tech companies.

They may have had an additional motive.

I wonder how long an LLM could impersonate a real person?

Nick Drew said...

well I've got away with it for 18 years ...

Matt said...

AI (Machine Learning really) can be the best Chess or Go player, and could invent a new type of game if you requested it (within appropriate parameters). However, it wouldn't do that spontaneously, because it doesn't feel a need to compete against others, so there is no such desire to satisfy.

It's similar with creating a painting: it can remix the work of great artists but has no appreciation of what it produces, nor the pride gained from doing great work.

It'll replace a lot of white-collar work that is really just recall of facts - lawyers, doctors, government officials and the like - but there will always be a need for someone to prime the models, so those professions won't die out entirely.

Sobers said...

So whose neck (and wallet) is going to be on the line when an AI service kills someone who has followed its (erroneous) information?

Anonymous said...

@Matt

Machine learning (ML) is a subset of artificial intelligence (AI), just as the large language model (LLM) is a subset of AI.

The machine uses many parts to gain experience, much as humans do.
What is not clear is whether AI can achieve artificial consciousness (AC).

Plenty of talk about whether machines will become sentient and by when.
Maybe machines are sentient but choose to keep ‘quiet’ for now. I dare say there are those who believe.

The reality is there are very few safeguards on this technology.
Perhaps the biggest hold-back is the current power requirements and investment.
Computers help design neural networks with more speed and greater power efficiency.
Will AI outsmart us - has it already?
HAL 9000

dearieme said...

I had a go on a pair of Meta Ray-Bans last week. Rather impressive as combined sunglasses, CD player, and camera. The AI bit left me cold. I asked what was the purpose of life and the answer was barely literate boilerplate from the woke end of the spectrum of conventional wisdom.

You have to address it as "Hey, Meta". Next time I must try "Heil, Meta".

Matt said...

@ Anonymous [6:44 am]

The expectation of AI (rightly or wrongly) was of something that could be mistaken for a person, as per the Turing test mentioned above.
If we're now going to say AI is just a facsimile and AC is required for something akin to human creativity, then I think that's shifting the goalposts after the match has started.

Anonymous said...

OpenAI and ChatGPT are blocked in China - the former by the US, the latter by the Great Firewall.

https://www.theguardian.com/world/article/2024/jul/09/chinese-developers-openai-blocks-access-in-china-artificial-intelligence

Not that the UK can talk when RT is only available via a proxy.

Interesting sidelight: a fairly eccentric but bright scientist, who has sat on various govt bodies, got arrested and his house searched after he disagreed with the Skripal story as purveyed by HMG.

From the wiki:

“On 13 September 2018, Chris Busby, a retired research scientist, who is a regular expert on the Russian government controlled RT television network, was arrested after his home in Bideford was raided by police. Busby was an outspoken critic of the British Government’s handling of the Salisbury poisoning. In one video he stated: “Just to make it perfectly clear, there’s no way that there’s any proof that the material that poisoned the Skripals came from Russia.” Busby was held for 19 hours under the Explosive Substances Act 1883, before being released with no further action. Following his release, Busby told the BBC he believed that the fact that two of the officers who had raided his property had felt unwell was explained by “psychological problems associated with their knowledge of the Skripal poisoning”.”

Anonymous said...

@Matt
The problem is that you are crediting AI with intelligence on the strength of its passing one human-designed test.

It’s fair to say the Turing test has been met using large language models.

Whether creativity is possible using LLMs, music models, image models etc. is subjective.
The future will see AI learning ever more complex mathematical models, and maybe resolving power issues that have so far eluded man.

Without safeguards, a sentient AI may have different ideas, morals and goals to any human.
It was only recent history that mass slavery was still acceptable.
Less than 10% of the world's population live in full democracies, and ~39% under authoritarian rule.

An AI ‘ruler’ would likely conclude an authoritarian system would be the best for its survival.

dearieme said...

AI is a bunch of regurgitation engines. If most Reading students are regurgitation engines, no wonder they and AI might seem interchangeable.