TechScape: Will Meta’s open-source LLM make AI safer – or put it into the wrong hands?
The AI arms race heats up as Meta makes a deal with Microsoft while its Cupertino competitor toils away on ‘Apple GPT’. Plus, Twitter’s X-tinction
The AI summer is well and truly upon us. (This gag may not play as well for readers in the southern hemisphere.) Whether we call this period the peak of the “hype cycle” or simply the moment the curve goes vertical will only be obvious in hindsight, but the cadence of big news in the field has gone from weekly to almost daily. Let’s catch up with what the biggest players in AI – Meta, Microsoft, Apple and OpenAI – are doing.
Apple
The iPhone maker has built its own framework to create large language models – the AI-based systems at the heart of new offerings like ChatGPT and Google’s Bard – according to people with knowledge of the efforts. With that foundation, known as “Ajax”, Apple has also created a chatbot service that some engineers call “Apple GPT”.
In recent months, the AI push has become a major effort for Apple, with several teams collaborating on the project, said the people, who asked not to be identified because the matter is private. The work includes trying to address potential privacy concerns related to the technology.
Meta and Microsoft
Meta, meanwhile, has released Llama 2 in partnership with Microsoft, announcing:
We’re now ready to open source the next version of Llama 2 and are making it available free of charge for research and commercial use. We’re including model weights and starting code for the pretrained model and conversational fine-tuned versions too.
Starting today, Llama 2 is available in the Azure AI model catalog, enabling developers using Microsoft Azure to build with it and leverage their cloud-native tools for content filtering and safety features. It is also optimized to run locally on Windows, giving developers a seamless workflow as they bring generative AI experiences to customers across different platforms.
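Meta’s announcement points developers at Azure and Windows tooling, but the released weights can also be pulled down directly. As a purely illustrative sketch – assuming the Hugging Face meta-llama/Llama-2-7b-chat-hf checkpoint, the transformers and torch libraries, and an approved access request, none of which the article itself describes – loading the chat-tuned 7B model might look like this:

```python
# Illustrative only: load the chat-tuned Llama 2 7B weights via Hugging Face transformers.
# Assumes access to the gated meta-llama checkpoints has been granted, and that
# transformers, torch and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # halves memory use on a GPU
    device_map="auto",           # let accelerate place the weights
)

prompt = "In one sentence, what does it mean for a language model to be open source?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The Azure catalogue and Windows builds mentioned above wrap the same released weights; the point of the release is that the artefact itself, not just an API, is in developers’ hands.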
OpenAI
In a study titled “How is ChatGPT’s behavior changing over time?” published on arXiv, Lingjiao Chen, Matei Zaharia, and James Zou cast doubt on the consistent performance of OpenAI’s large language models (LLMs), specifically GPT-3.5 and GPT-4. Using API access, they tested the March and June 2023 versions of these models on tasks like math problem-solving, answering sensitive questions, code generation, and visual reasoning. Most notably, GPT-4’s ability to identify prime numbers reportedly plunged from an accuracy of 97.6 percent in March to just 2.4 percent in June. Strangely, GPT-3.5 showed improved performance in the same period.
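The headline figure rests on asking pinned model snapshots whether given integers are prime and scoring the yes/no answers against ground truth. The study’s actual prompts, chain-of-thought handling and scoring are more involved; the following sketch – which assumes the pre-1.0 openai Python client in use at the time and the dated gpt-4-0314 and gpt-4-0613 snapshots – is only meant to make the method concrete:

```python
# Rough sketch, not the study's harness: compare two pinned GPT-4 snapshots
# on prime-number questions and score against sympy's ground truth.
import openai              # pre-1.0 interface (openai.ChatCompletion), as used in mid-2023
from sympy import isprime

openai.api_key = "sk-..."  # placeholder

def says_prime(model: str, n: int) -> bool:
    resp = openai.ChatCompletion.create(
        model=model,
        temperature=0,
        messages=[{"role": "user",
                   "content": f"Is {n} a prime number? Answer only Yes or No."}],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

numbers = [10007, 10009, 10037, 12345, 20011]   # arbitrary test integers
for model in ("gpt-4-0314", "gpt-4-0613"):      # the March and June snapshots
    correct = sum(says_prime(model, n) == isprime(n) for n in numbers)
    print(f"{model}: {correct}/{len(numbers)} correct")
```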
AI researcher Simon Willison also challenges the paper’s conclusions. “I don’t find it very convincing,” he told Ars. “A decent portion of their criticism involves whether or not code output is wrapped in Markdown backticks or not”… So far, Willison thinks that any perceived change in GPT-4’s capabilities comes from the novelty of LLMs wearing off. After all, GPT-4 sparked a wave of AGI panic shortly after launch and was once tested to see if it could take over the world. Now that the technology has become more mundane, its faults seem glaring.
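To see why that detail matters, consider a grader that pipes the model’s raw reply straight into a Python compiler: a correct function wrapped in a Markdown code fence fails, while the identical code passes once the fence is stripped. This is a hypothetical illustration of Willison’s point, not code from the paper:

```python
# Hypothetical illustration: "directly executable" grading penalises replies
# that arrive wrapped in Markdown code fences.
import re

FENCE = "`" * 3   # the literal ``` sequence
reply = f"{FENCE}python\ndef add(a, b):\n    return a + b\n{FENCE}"

def runs_as_is(text: str) -> bool:
    """Does the text compile as Python exactly as the model returned it?"""
    try:
        compile(text, "<reply>", "exec")
        return True
    except SyntaxError:
        return False

def strip_fences(text: str) -> str:
    """Unwrap a Markdown code fence, if present, before grading."""
    match = re.search(rf"{FENCE}(?:python)?\n(.*?){FENCE}", text, re.DOTALL)
    return match.group(1) if match else text

print(runs_as_is(reply))                # False: the fence characters are a syntax error
print(runs_as_is(strip_fences(reply)))  # True: the code inside is fine
```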
For Willison, the bigger issue is OpenAI’s opacity about such changes. “Honestly, the lack of release notes and transparency may be the biggest story here,” he told Ars. “How are we meant to build dependable software on top of a platform that changes in completely undocumented and mysterious ways every few months?”