
How AI could change computing, culture and the course of history – The Economist – 20.04.23

Expect changes in the way people access knowledge, relate to knowledge and think about themselves.


Among the more sombre gifts brought by the Enlightenment was the realisation that humans might one day become extinct. The astronomical revolution of the 17th century had shown that the solar system both operated according to the highest principles of reason and contained comets which might conceivably hit the Earth.


The geological record, as interpreted by the Comte de Buffon, showed massive extinctions in which species vanished for ever. That set the scene for Charles Darwin to recognise such extinctions as the motor of evolution, and thus as both the force which had fashioned humans and, by implication, their possible destiny.


The nascent science of thermodynamics added a cosmic dimension to the certainty of an ending; Sun, Earth and the whole shebang would eventually run down into a lifeless “heat death”.


The 20th century added the idea that extinction might not come about naturally, but through artifice. The spur for this was the discovery, and later exploitation, of the power locked up in atomic nuclei. Celebrated by some of its discoverers as a way of indefinitely deferring heat death, nuclear energy was soon developed into a far more proximate danger. And the tangible threat of imminent catastrophe which it posed rubbed off on other technologies.


None was more tainted than the computer. It may have been guilt by association: the computer played a vital role in the development of the nuclear arsenal. Or it may have been foreordained. The Enlightenment belief in rationality as humankind’s highest achievement, combined with Darwin’s theory of evolution, made the promise of superhuman rationality look like the possibility of evolutionary progress at humankind’s expense.


Artificial intelligence has come to loom large in the thought of the small but fascinating, and much written about, coterie of academics which has devoted itself to the consideration of existential risk over the past couple of decades. Indeed, it often appeared to be at the core of their concerns. A world which contained entities which think better and act quicker than humans and their institutions, and which had interests that were not aligned with those of humankind, would be a dangerous place.


It became common for people within and around the field to say that there was a “non-zero” chance of the development of superhuman AIs leading to human extinction. The remarkable boom in the capabilities of large language models (LLMs), “foundation” models and related forms of “generative” AI has propelled these discussions of existential risk into the public imagination and the inboxes of ministers.




As the special Science section in this issue makes clear, the field’s progress is precipitate and its promise immense. That brings clear and present dangers which need addressing. But in the specific context of GPT-4, the LLM du jour, and its generative ilk, talk of existential risks seems rather absurd.


They produce prose, poetry and code; they generate images, sound and video; they make predictions based on patterns. It is easy to see that those capabilities bring with them a huge capacity for mischief. It is hard to imagine them underpinning “the power to control civilisation”, or to “replace us”, as hyperbolic critics warn.


Love song


But the lack of any “Minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic [drawing] their plans against us”, to quote H.G. Wells, does not mean that the scale of the changes that AI may bring with it can be ignored or should be minimised. There is much more to life than the avoidance of extinction. A technology need not be world-ending to be world-changing.



The transition into a world filled with computer programs capable of human levels of conversation and language comprehension and superhuman powers of data assimilation and pattern recognition has just begun. The coming of ubiquitous pseudocognition along these lines could be a turning point in history even if the current pace of AI progress slackens (which it might) or the scope for fundamental advances is tapped out (which feels unlikely). It can be expected to have implications not just for how people earn their livings and organise their lives, but also for how they think about their humanity.


For a sense of what may be on the way, consider three possible analogues, or precursors: the browser, the printing press and the practice of psychoanalysis. One changed computers and the economy, one changed how people gained access to knowledge and related to it, and one changed how people understood themselves.


The humble web browser, introduced in the early 1990s as a way to share files across networks, changed the ways in which computers are used, the way in which the computer industry works and the way information is organised. Combined with the ability to link computers into networks, the browser became a window through which first files and then applications could be accessed wherever they might be located. The interface through which a user interacted with an application was separated from the application itself.


The power of the browser was immediately obvious. Fights over how hard users could be pushed towards a particular browser became a matter of high commercial drama. Almost any business with a web address could get funding, no matter what absurdity it promised. When boom turned to bust at the turn of the century there was a predictable backlash. But the fundamental separation of interface and application continued.


Amazon, Meta (née Facebook) and Alphabet (née Google) rose to giddy heights by making the browser a conduit for goods, information and human connections. Who made the browsers became incidental; their role as a platform became fundamental.

The months since the release of OpenAI’s ChatGPT, a conversational interface now powered by GPT-4, have seen an entrepreneurial explosion that makes the dotcom boom look sedate.


For users, apps based on LLMs and similar software can be ludicrously easy to use; type a prompt and see a result. For developers it is not that much harder. “You can just open your laptop and write a few lines of code that interact with the model,” explains Ben Tossell, a British entrepreneur who publishes a newsletter about AI services.
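To make that concrete, here is a minimal sketch of what “a few lines of code that interact with the model” might look like. It assumes the OpenAI Python client and an API key exported in the environment; the model name and prompt are illustrative, not a prescription.

```python
# A minimal sketch of calling a hosted LLM from Python.
# Assumes the OpenAI client library (pre-v1 interface) and an API key
# exported as OPENAI_API_KEY; the model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain what a web browser does, in two sentences."}],
)
print(response.choices[0].message.content)
```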


And the LLMs are increasingly capable of helping with that coding, too. Having been “trained” not just on reams of text but also on lots of code, they contain the building blocks of many possible programs; that lets them act as “co-pilots” for coders. Programmers on GitHub, a code-hosting site, are now using a GPT-4-based co-pilot to produce nearly half their code.


There is no reason why this ability should not eventually allow LLMs to put code together on the fly, explains Kevin Scott, Microsoft’s chief technology officer. The capacity to translate from one language to another includes, in principle and increasingly in practice, the ability to translate from language to code.


A prompt written in English can in principle spur the production of a program that fulfils its requirements. Where browsers detached the user interface from the software application, LLMs are likely to dissolve both categories. This could mark a fundamental shift in both the way people use computers and the business models within which they do so.
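As a toy sketch of that idea, under the same assumptions as above: an English request goes in, a candidate program comes back. The model name and the crude fence-stripping are illustrative assumptions, not a documented workflow, and any returned code would need review before being run.

```python
# Toy sketch of "language to code": an English request goes in, a candidate
# program comes back. The fence-stripping below is a rough heuristic, and the
# result is only a candidate until a human (or a test suite) has vetted it.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

request = "Write a Python function that returns the n-th Fibonacci number."
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": request}],
)

reply = response.choices[0].message.content
# Drop any Markdown code fences so the reply could be saved as a .py file.
fence = "`" * 3
candidate = "\n".join(
    line for line in reply.splitlines() if not line.startswith(fence)
).strip()
print(candidate)
```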


Every day I write the book


Code-as-a-service sounds like a game-changing plus. A similarly creative approach to accounts of the world is a minus. While browsers mainly provided a window on content and code produced by humans, LLMs generate their content themselves. When doing so they “hallucinate” (or, as some prefer, “confabulate”) in various ways. Some hallucinations are simply nonsense.


Some, such as the incorporation of fictitious misdeeds into biographical sketches of living people, are both plausible and harmful. The hallucinations can be generated by contradictions in training sets and by LLMs being designed to produce coherence rather than truth. They create things which look like things in their training sets; they have no sense of a world beyond the texts and images on which they are trained.


In many applications a tendency to spout plausible lies is a bug. For some it may prove a feature. Deep fakes and fabricated videos which traduce politicians are only the beginning. Expect the models to be used to set up malicious influence networks on demand, complete with fake websites, Twitter bots, Facebook pages, TikTok feeds and much more. The supply of disinformation, Renée DiResta of the Stanford Internet Observatory has warned, “will soon be infinite”.




This article appeared in the Essay section of the print edition under the headline "THE AGE OF PSEUDOCOGNITION"

