Apple’s Ferret-UI helps AI use your iPhone

Ferret-UI might help AI systems like Siri understand and work with mobile-device screens.
Photo: Pexels-Tracy Le Blanc

Apple’s new Ferret-UI multimodal large language model could help artificial intelligence systems better understand mobile screens like the one on your iPhone, according to a research paper released Tuesday.

Among those potentially benefiting from this? Perhaps the much-maligned Siri voice assistant will do more for you on mobile devices. Visually impaired users, and developers who need to run user interface tests, might benefit, too.

Apple’s Ferret-UI could help AI like Siri better understand mobile-device screens

Apple released a paper titled “Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs.” It discusses MLLMs, which resemble the text-based large language models behind the likes of ChatGPT but can also process images, audio and video.

The paper does not spell out likely uses of the research, but it seems reasonable to speculate that it could help with interpreting and improving mobile UIs, and perhaps even fuel a more capable Siri that can carry out tasks on mobile devices.

Helping AI figure out and work with mobile UIs

The researchers noted that MLLMs tend to struggle when interpreting the user interfaces on mobile devices’ small screens.

They added that Ferret-UI brings a better understanding through multiple new capabilities, including reasoning:

In this paper, we present Ferret-UI, a new MLLM tailored for enhanced understanding of mobile UI screens, equipped with referring, grounding, and reasoning capabilities.

Much of the work revolves around helping Ferret-UI gather fine-grained details from the screen:

Given that UI screens typically exhibit a more elongated aspect ratio and contain smaller objects of interest (e.g., icons, texts) than natural images, we incorporate “any resolution” on top of Ferret to magnify details and leverage enhanced visual features. Specifically, each screen is divided into 2 sub-images based on the original aspect ratio (i.e., horizontal division for portrait screens and vertical division for landscape screens).

Both sub-images are encoded separately before being sent to LLMs. We meticulously gather training samples from an extensive range of elementary UI tasks, such as icon recognition, find text, and widget listing. These samples are formatted for instruction-following with region annotations to facilitate precise referring and grounding.
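To make the “any resolution” idea a little more concrete, here is a minimal Python sketch of how a screenshot might be split into two sub-images by aspect ratio, as the paper describes. The function name, the use of PIL and the half-and-half split are assumptions for illustration, not code from Apple or the paper.

```python
from PIL import Image


def split_screen_any_resolution(screenshot: Image.Image):
    """Illustrative sketch: divide a UI screenshot into two sub-images
    along its longer axis so small elements (icons, text) are magnified
    for the vision encoder. Names and details are hypothetical."""
    width, height = screenshot.size
    if height >= width:
        # Portrait screen: horizontal division into top and bottom halves.
        top = screenshot.crop((0, 0, width, height // 2))
        bottom = screenshot.crop((0, height // 2, width, height))
        sub_images = [top, bottom]
    else:
        # Landscape screen: vertical division into left and right halves.
        left = screenshot.crop((0, 0, width // 2, height))
        right = screenshot.crop((width // 2, 0, width, height))
        sub_images = [left, right]

    # Per the paper, each sub-image would then be encoded separately
    # before its visual features are passed to the language model.
    return sub_images
```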
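The “training samples … formatted for instruction-following with region annotations” could look something like the snippet below. This is purely a guess at the shape of such a sample; the paper’s actual schema, field names and coordinate format are not given in the article.

```python
# Hypothetical instruction-following sample with region annotations.
# Every field name and value here is illustrative, not from the paper.
sample = {
    "task": "widget_listing",
    "instruction": "List the interactive widgets on this screen.",
    "image": "screenshot_0421.png",
    "regions": [
        {"label": "Settings icon", "box": [24, 88, 72, 136]},      # [x1, y1, x2, y2]
        {"label": "Search text field", "box": [16, 160, 398, 208]},
    ],
    "answer": "A Settings icon and a Search text field.",
}
```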

Outdoing open-source UI MLLMs and GPT-4V

Then comes the “reasoning” part. The researchers said Ferret-UI showed “outstanding comprehension of UI screens” and could act on them. And their benchmarking showed Ferret-UI did better than “most open-source UI MLLMs” and also beat GPT-4V on “elementary UI tasks”:

To augment the model’s reasoning ability, we further compile a dataset for advanced tasks, including detailed description, perception/interaction conversations, and function inference. After training on the curated datasets, Ferret-UI exhibits outstanding comprehension of UI screens and the capability to execute open-ended instructions.

For model evaluation, we establish a comprehensive benchmark encompassing all the aforementioned tasks. Ferret-UI excels not only beyond most open-source UI MLLMs, but also surpasses GPT-4V on all the elementary UI tasks.
