The chatbot GPT-4 has produced more viable commercial ideas more efficiently and more cheaply than US university students
In all the frenzied discourse about large language models (LLMs) such as GPT-4 there is one point on which everyone seems to agree: these models are essentially stochastic parrots – namely, machines that are good at generating convincing sentences, but do not actually understand the meaning of the language they are processing. They have somehow “read” (that is, ingested) everything ever published in machine-readable form and create sentences word by word, at each point making a statistical guess of “what one might expect someone to write after seeing what people have written on billions of webpages, etc”. That’s it!
Ever since ChatGPT arrived last November, people have been astonished by the capabilities of these parrots – how humanlike they seem to be and so on. But consolation was drawn initially from the thought that since the models were drawing only on what already resided in their capacious memories, then they couldn’t be genuinely original: they would just regurgitate the conventional wisdom embedded in their training data. That comforting thought didn’t last long, though, as experimenters kept finding startling and unpredictable behaviours of LLMs – facets now labelled “emergent abilities”.
When it comes to creative thinking, it’s clear that AI systems mean business | John Naughton
September 25, 2023