Apple boosts spending on creating ChatGPT-like technology


Generative AI could be used to bolster Siri’s abilities



Apple has boosted its budget for developing artificial intelligence, with an emphasis on building conversational chatbot features for Siri, and is reportedly spending millions of dollars a day on research and development.

In May, reports emerged that Apple had been recruiting more engineers to work on generative AI projects. While the company didn’t make an official forward-looking statement, CEO Tim Cook said that generative AI is “very interesting.”

But the story of Apple’s foray into generative AI starts much earlier than May. Four years ago, Apple’s head of AI, John Giannandrea, formed a team to work on large language models (LLMs), the technology underlying generative AI chatbots like ChatGPT.

Apple’s conversational AI team, known as the Foundational Models team, is led by Ruoming Pang, who previously worked at Google for 15 years. The team has a significant budget and spends millions of dollars a day training advanced LLMs. Despite having only 16 members, its work rivals that of OpenAI, which spent over $100 million to train a similar LLM.

According to The Information, at least two other teams at Apple are working on language and image models. One group focuses on Visual Intelligence, generating images, videos, and 3D scenes, while another works on multimodal AI, which can handle text, images, and videos.

Currently, Apple is planning to integrate LLMs into Siri, its voice assistant. This would allow users to automate complex tasks using natural language, similar to Google’s efforts to improve its own voice assistant. Apple believes its advanced language model, Ajax GPT, is better than OpenAI’s GPT-3.5.

Still, incorporating LLMs into Apple products has its challenges. Unlike some competitors that take a cloud-based approach, Apple prefers running software on-device for better privacy and performance. However, Apple’s LLMs, including Ajax GPT, are very large, making them difficult to fit onto an iPhone.

There are precedents for shrinking large models, such as Google’s PaLM 2, which comes in several sizes, including one suitable for on-device and offline use. While it’s unclear what Apple’s plans are, the company could opt for smaller LLMs for privacy reasons.
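For a rough sense of why size is the sticking point, the sketch below estimates the memory needed just to store a model’s weights at different precisions. The 7-billion-parameter figure and the precisions are illustrative assumptions, not numbers reported for Ajax GPT or PaLM 2.

import Foundation

// Back-of-envelope estimate of how much memory an LLM's weights occupy.
// The parameter count and precisions are assumptions for illustration only.
func weightFootprintGB(parameters: Double, bytesPerParameter: Double) -> Double {
    parameters * bytesPerParameter / 1_000_000_000
}

let parameters = 7e9 // hypothetical 7-billion-parameter model

// 16-bit weights (2 bytes each) versus 4-bit quantized weights (0.5 bytes each)
let fp16GB = weightFootprintGB(parameters: parameters, bytesPerParameter: 2.0)
let int4GB = weightFootprintGB(parameters: parameters, bytesPerParameter: 0.5)

print(String(format: "fp16: %.1f GB, 4-bit: %.1f GB", fp16GB, int4GB))
// fp16: 14.0 GB, 4-bit: 3.5 GB

Even the quantized figure excludes the working memory needed at inference time, which is one reason a company might ship a smaller model on the device and keep its largest models in the cloud.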

In May, internal documents and anonymous sources revealed Apple’s ban on employee use of ChatGPT-like tools, along with its plans for an LLM of its own.



