
Apple Unveils an AI Game Changer: On-Device Model Claimed to Outperform GPT-4

Hei Bai Wed, Apr 03 2024 09:12 AM EST

On April 2, media reports indicated that Apple's research team has introduced a groundbreaking model called ReALM, which purportedly surpasses GPT-4 in certain respects and can run directly on devices.

ReALM comes in four sizes, with parameter counts of 80M, 250M, 1B, and 3B. All of them have relatively compact footprints, making them suitable for on-device deployment on hardware such as smartphones and tablets.

The primary focus of ReALM's research is reference resolution: enabling the model to recognize which entity a piece of text refers to, whether that is a person, a location, an organization, or something else.

Entities in the paper are categorized into three types (a simplified sketch of the scheme follows the list):

  • On-screen Entities: These refer to content currently displayed on the user's screen.
  • Conversational Entities: These pertain to content related to the ongoing conversation. For instance, if a user says "call mom," the contact information for "mom" becomes a conversational entity.
  • Background Entities: These are entities that may not be directly related to the user's current operation or on-screen content, such as music currently playing or an upcoming alarm.
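ReALM reportedly handles this by turning reference resolution into a plain text task: the candidate entities, including a textual description of what is currently on screen, are listed in the prompt, and the model is asked to pick the one the user means. The Python sketch below is only an illustration of that idea; the Entity structure, prompt wording, and example data are assumptions made for demonstration, not Apple's actual code.

```python
from dataclasses import dataclass
from enum import Enum

class EntityType(Enum):
    ON_SCREEN = "on-screen"            # content currently displayed on the user's screen
    CONVERSATIONAL = "conversational"  # content related to the ongoing conversation
    BACKGROUND = "background"          # ambient context, e.g. playing music or a pending alarm

@dataclass
class Entity:
    label: str               # human-readable description of the candidate
    entity_type: EntityType

def build_resolution_prompt(user_request: str, candidates: list[Entity]) -> str:
    """Serialize the candidate entities into a numbered text list so a small
    language model can be asked which one the user's request refers to."""
    lines = [f"User request: {user_request}", "Candidate entities:"]
    for i, entity in enumerate(candidates, start=1):
        lines.append(f"{i}. [{entity.entity_type.value}] {entity.label}")
    lines.append("Reply with the number of the entity the request refers to.")
    return "\n".join(lines)

# Hypothetical example: the user says "call the one at the bottom"
# while a list of pharmacies is shown on screen.
candidates = [
    Entity("Pharmacy phone number 415-555-0199 (bottom of screen)", EntityType.ON_SCREEN),
    Entity("Contact card for 'Mom' mentioned earlier in the conversation", EntityType.CONVERSATIONAL),
    Entity("Alarm set for 7:00 AM tomorrow", EntityType.BACKGROUND),
]
print(build_resolution_prompt("call the one at the bottom", candidates))
```

Framing resolution as choosing from a short textual list of candidates is also what makes the task plausible for compact, on-device models of the sizes described above.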

The paper argues that although large language models have proven capable across a wide range of tasks, their potential remains underused for resolving references to non-conversational entities, such as on-screen or background entities.

ReALM presents a novel approach: its smallest model performs comparably to GPT-4, while its larger models substantially outperform it.

This research holds promise for enhancing Apple's Siri assistant on devices, enabling Siri to better understand and address contextual inquiries from users.