Apple wants to bring ChatGPT-like AI directly to your iPhone

Apple is gearing up to transform the smartphone by embedding ChatGPT-like AI capabilities directly into the iPhone. The development points to a future in which iPhones are not just smart but exceptionally intelligent, powered by large language models (LLMs) running on the device itself.

While most tech giants, including Google and Microsoft, rely heavily on cloud-based AI models, Apple is carving a different path. The company’s strategy focuses on an “on-device” approach, in which AI tools run directly on the iPhone. This direction was detailed in a research paper titled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory,” published on December 12, 2023, which sheds light on Apple’s plans.
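The paper’s title points to the core technical problem: an LLM’s weights are far larger than an iPhone’s available DRAM, so the researchers propose keeping the weights in flash storage and streaming only the small fraction needed for each token into memory. The snippet below is a minimal conceptual sketch of that idea in Python; the file name, matrix sizes, and the low-rank “activation predictor” are illustrative assumptions, not the method described in the paper.

```python
# Conceptual sketch: keep a large weight matrix in flash (on disk) and pull
# only the rows needed for the current token into DRAM. Everything here is
# a toy stand-in, not Apple's implementation.
import os
import numpy as np

D_MODEL, D_FF, RANK = 1024, 4096, 32

# Create a dummy weight file on first run so the demo is self-contained.
if not os.path.exists("ffn_up.bin"):
    np.memmap("ffn_up.bin", dtype=np.float16, mode="w+",
              shape=(D_FF, D_MODEL))[:] = np.float16(0.01)

# The big FFN matrix stays in flash; np.memmap reads pages lazily instead of
# loading the whole matrix into RAM up front.
w_up = np.memmap("ffn_up.bin", dtype=np.float16, mode="r",
                 shape=(D_FF, D_MODEL))

# A small low-rank predictor kept in DRAM guesses which FFN rows will matter
# for this token, so only those rows are fetched from flash.
p_in = np.random.randn(D_MODEL, RANK).astype(np.float16)
p_out = np.random.randn(RANK, D_FF).astype(np.float16)

def ffn_up(x, k=256):
    scores = (x @ p_in) @ p_out              # cheap proxy for row importance
    rows = np.sort(np.argsort(scores)[-k:])  # top-k "active" rows
    w_active = np.asarray(w_up[rows])        # only these rows leave flash
    return rows, w_active @ x                # partial projection in DRAM

rows, out = ffn_up(np.random.randn(D_MODEL).astype(np.float16))
print(len(rows), out.shape)                  # 256 rows loaded instead of 4096
```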

Apple’s strategy of running AI models directly on devices, an approach that first surfaced in October, marks a departure from the prevalent cloud-based model. Open-source models such as Meta’s Llama 2 or the Stable Diffusion variants can already run on devices, but services like ChatGPT and Microsoft’s Copilot assistant, both built on OpenAI’s GPT-4 models, still depend on cloud computing, and Google’s Bard follows the same cloud-based approach. Apple’s on-device strategy could therefore set it apart from its competitors, offering a distinctive blend of privacy and efficiency.

This advancement is set to significantly boost the capabilities of Siri, Apple’s voice assistant, and introduce a range of sophisticated AI-driven features. Expectations include enhanced real-time language translation, improved AI functionalities in photography, and augmented reality. Moreover, this technological leap sets the stage for complex AI assistants and chatbots to operate directly on iPhones.

The timing of Apple’s entry into the generative AI race remains uncertain. Despite the buzz generated by ChatGPT, Apple has kept its plans under wraps. However, internal sources suggest the company was caught off guard by the ChatGPT hype and is now exploring ways to integrate generative AI into a wide array of apps, with Siri a likely major beneficiary. Whether these features arrive in 2024 is still speculative.

Apple’s intensified focus on AI is evident from its recent research activities. As reported by Ars Technica, the “LLM in a flash” paper is the second significant publication from Apple in a short span, indicating a ramped-up effort in AI research. Apple is also negotiating with various press publishers to use their news archives as training material for its AI models, hinting at a comprehensive and ambitious AI strategy.

Apart from that, Apple is reportedly working on its own generative AI model, named “Ajax,” intended to rival OpenAI’s GPT-3 and GPT-4. With a reported 200 billion parameters, Ajax would sit among today’s largest models. Complementing this, Apple has released the open-source MLX framework, which runs machine learning workloads directly on Apple silicon and could form the basis for on-device generative AI applications.
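MLX is Apple’s array framework for machine learning on Apple silicon, with a NumPy-like Python API and arrays that live in unified memory shared by the CPU and GPU. The short example below shows the kind of on-device computation it handles; the tiny model and random input are purely illustrative and have nothing to do with Ajax or any Apple model.

```python
# Minimal MLX example: define and run a tiny two-layer network on Apple silicon.
# The dimensions and random weights are illustrative only.
import mlx.core as mx
import mlx.nn as nn

class TinyMLP(nn.Module):
    def __init__(self, dims=256, hidden=512):
        super().__init__()
        self.fc1 = nn.Linear(dims, hidden)
        self.fc2 = nn.Linear(hidden, dims)

    def __call__(self, x):
        return self.fc2(nn.relu(self.fc1(x)))

model = TinyMLP()
x = mx.random.normal((1, 256))   # arrays live in unified memory
y = model(x)
mx.eval(y)                       # MLX is lazy; eval() materializes the result
print(y.shape)                   # (1, 256)
```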

Vishak
Vishak is the Editor-in-Chief at Code and Hack with a passion for AI and coding. He has a deep understanding of the latest trends and advancements in AI and coding, and creates engaging and informative content on related topics, including machine learning, natural language processing, and programming. He stays up to date with the latest news and breakthroughs in these areas and delivers insightful articles and blog posts that help his readers stay informed and engaged.
