Artificial Intelligence - it's about where the Buck stops
Big Tech and governments are hyping AI - time to look at what's behind it.
More than two years ago, Ian Hogarth contributed a long-read piece to the Financial Times Weekend published on April 15/16, 2023, under the headline “We must slow down the race to God-like AI”.
I found the article alarming. Luckily, I committed my thoughts to writing shortly after reading it. Here is what I wrote in 2023:
Ian Hogarth was a co-founder, in 2007, of Songkick, a concert discovery service which is today owned by Warner Music Group. At the time, the start-up was backed by Y Combinator, a blue-blooded accelerator counting Airbnb and Dropbox among its alumni. Another Y Combinator alumnus is Sam Altman, who led the accelerator from 2014 until 2019, when he left to focus entirely on his role as CEO of OpenAI, the creator of ChatGPT. Hogarth is a co-founder of Plural Platform, and he has told the FT that he has invested in 50 AI start-ups.
My own entrepreneurial pedigree is on the humble side, having invested in only one AI start-up. I decided to scrutinize Hogarth’s article in some detail when I noticed that he uses the phrase “God-like AI” seriously, not in an effort to make fun of the megalomania of certain researchers and businesspeople. This suggested that Hogarth is interested in an emotional rather than a rational debate of the subject.
According to Hogarth, the term Artificial General Intelligence (AGI) “usually refers to a computer system capable of generating new scientific knowledge and performing any task humans can”. I beg to differ. To begin with, the scientific community is most likely unable to agree on what “generating new scientific knowledge” even means. As far as AGI is concerned, “a universal algorithm capable of performing usefully in a wide variety of environments” is a much more common description, falling well short of “performing any task humans can”.
The article also claims that the important question about AGI within the community is “how far away in the future this development might be”. More likely, this is not the question but merely one question, asked by those who no longer doubt that AGI is possible. More than a few experts are still pondering whether AGI is achievable at all – even when they think of AGI as not quite capable of “everything that humans can do”.
The global AI community does not know whether AGI will ever be created, and it is not sure whether AGI would be catastrophic, beneficial or anything in between. There is neither a consensus that AGI is possible nor agreement that it absolutely needs to be developed.
What needs to be borne in mind is that, if AGI can be developed, it will be developed. If we have reason to believe it is detrimental to humankind’s well-being, then robust measures are required to either prevent its creation – with slim chances of success – or to tightly control its use.
If a significant part of the general public believes that researchers and entrepreneurs are striving to create a “God-like” power and are sure to succeed in short order, that will automatically lock out all persons with sincere religious feelings from any rational and civilized debate of AGI. This constituency alone would leave far too many people missing from such a debate. Now remember that a huge number of people have a hard time understanding the modern world and have made a habit of taking to superstitions and social-media-stoked conspiracy theories. All these people will also stay away from a sober discussion of AI, and both groups together form a solid majority in favor of a massive witch-hunt of “evil elements, tendencies and endeavors”. That would certainly not be a good basis for decision-making on any development of importance to humankind.
For good measure, the article claims that “God-like AI” could be a force “… that could usher in the obsolescence or destruction of the human race” and that “a few companies … have no oversight … are running towards a finish line without an understanding of what lies on the other side.”
The article correctly points out that brute computing force has been a contributor to recent AI successes, with the number of floating-point operations used in training AI models increasing by a factor of 100 million over 10 years.
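For a sense of what that pace implies – the following back-of-the-envelope arithmetic is mine, not the article’s – a factor of 100 million over ten years works out to roughly a 6.3-fold increase in training compute per year, or a doubling about every four and a half months:

$$
10^{8/10} \approx 6.3 \ \text{per year}, \qquad \frac{12\ \text{months}}{\log_2 6.3} \approx 4.5\ \text{months per doubling}.
$$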
The sad anticlimax of the article is the part where we are assured that the Chinese Communist party is unlikely to allow the development of AGI because it could “become more powerful than Xi Jinping”. It is implied that Xi considers himself God and will not tolerate “other Gods before him”.
The article also charts “paths to disaster”. I will try to evaluate them from one possible angle: a legal perspective.
Path to disaster number one is a hypothetical AI system “that the UN tasks with reversing ocean acidification” and that ends up releasing a self-multiplying catalyst, setting in motion a chemical reaction that gobbles up the world’s oxygen and slowly, painfully kills humanity. Let’s assume for a moment that this was technically possible. The UN would have to wire a computer to machinery that could provide whatever chemicals the AI decides are useful for its endeavor. A first point worth observing is that whoever hooks up the machinery to the computer and powers it up is responsible for all consequences. That is where “the buck stops”, to put it in the words of the plain-spoken American President Harry Truman. It is easy to imagine a Truman-like presidential personality with his hand hovering over a big red button, pointing out that “if this goes badly, the buck stops right here”. Unfortunately, they don’t make politicians like they used to anymore. Today’s politicians dedicate a very significant part of their time and abilities to “passing bucks” wherever they can. And that is the real risk we must be aware of: interested parties will try to “pass the buck” and hold “God-like”, invisible powers lurking in vast machines responsible for what is clearly a human perpetrator’s crime.
So here is a first hint at “cui bono”, to whose advantage such ideas might be popularized. If it becomes acceptable to blame machines, even in a legal sense, for crimes committed or damages inflicted, that would certainly be welcomed by whoever has “bucks to pass”. An example that comes to mind is how the car industry successfully started a discussion of “AI ethics” along the lines of whether an AI-governed car should rather crash into a pregnant woman or an old man when faced with a choice between the two. The real answer, of course, is easy: the machine had better not kill either, and if it does, culpable parties will be found and punished, with the car’s manufacturer as the prime suspect. It is easy to see who would much rather blame it on a short-circuited ethics algorithm. It is a pity this has not long since been exposed for what it is: bullshit.
Hogarth goes on to cite some sober voices, like Timnit Gebru of the Distributed AI Research Institute, who criticizes the hype around AI and argues that its misuse should be regulated directly for what it is: lack of transparency, lack of accountability, exploitative labor practices.
In summary, Ian Hogarth wants the reader to believe that, while there may be a variety of AI which can be safely developed and should be regulated strictly like any other activity with potential benefits and risks, there is also “God-like AI”. He further wants us to believe that a merciless race to “God-like AI” is on, and that the competitors will tumble helter-skelter over the finishing line really soon rather than proceed in any orderly fashion. He points to his own investment in a company called Anthropic, which is also working on lines of inquiry that may lead to or assist the creation of “God-like AI”, and he describes how, although dedicating “42 percent of its team” to AI alignment, the company is “locked in the same race”, apparently with no chance to escape without government help.
Therefore, so he claims, governments of the world must unite to pass legislation that will stop the further development of “God-like AI”.
A month after publication of the article, there is not a single significant media outlet that has not warned of the dangers of AI and called for governments to step in.
It looks very much like governments will fall over themselves to accept the invitation.
Even the attempt to define the kind of AI that will henceforth be illegal will fail miserably. The legislation will further erode the dignity of the judiciary institutions of our countries, it will create confusion and corruption, and it will create winners and losers.
Expect the advocates of regulation to be among the winners.
So I concluded two years ago.
My next article will look at where we stand today.
