It’s easy to be fooled by the mimicry, but consumers need transparency about how such systems are used
The Google engineer Blake Lemoine was not speaking for the company officially when he claimed that Google’s chatbot LaMDA was sentient, but his misconception shows the risks of designing systems in ways that convince humans they are seeing real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate? In Lemoine’s case, that belief led him to leak confidential transcripts from the program, resulting in his suspension from the organisation.
Google is decidedly leaning into that kind of design, as seen in Alphabet CEO Sundar Pichai’s demo of that same chatbot at Google I/O in May 2021, where he prompted LaMDA to speak in the voice of Pluto and share some fun facts about the ex-planet. As Google plans to make this a core consumer-facing technology, the fact that one of its own engineers was fooled highlights the need for these systems to be transparent.
Emily M Bender is a professor of linguistics at the University of Washington and co-author of several papers on the risks of massive deployment of pattern recognition at scale
Human-like programs abuse our empathy – even Google engineers aren’t immune | Emily M Bender
June 15, 2022