A team of AI researchers at Apple claims that its AI system, Reference Resolution As Language Modeling (ReALM), can outperform GPT-4 on some kinds of queries. The team has published a paper on the arXiv preprint server describing the system and its information-gathering abilities.
Over the past couple of years, LLMs such as GPT-4 have dominated the computing landscape as companies have competed to improve their products and gain more users. Apple has noticeably lagged in this area; its Siri digital assistant has gained little in the way of new AI capabilities.
In this new effort, the team at Apple claims that its ReALM system is not just an attempt to catch up, but one that outperforms other publicly available LLMs, including GPT-4, on certain types of queries.
In their paper, the team at Apple explains that their LLM provides more accurate answers to user questions because it can resolve ambiguous references to on-screen entities and draw on both conversational and background context. Put another way, it treats the user's screen as part of its search process, using what was displayed before the query was posed as a clue to what the user means.
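The core idea described in the paper is to convert the parsed contents of the screen into plain text so an LLM can reason over them. The following Python sketch illustrates one way such a textual reconstruction might look; the entity fields, prompt wording, and example data here are illustrative assumptions, not Apple's actual format.

```python
from dataclasses import dataclass

@dataclass
class OnScreenEntity:
    entity_id: int
    entity_type: str   # e.g. "phone_number", "address", "business_name"
    text: str          # the text as it appears on screen

def build_prompt(entities, conversation, query):
    """Serialize parsed on-screen entities and the conversation
    history into a single text prompt an LLM can consume."""
    screen_lines = [
        f"{e.entity_id}. [{e.entity_type}] {e.text}" for e in entities
    ]
    return (
        "On-screen entities:\n" + "\n".join(screen_lines) + "\n\n"
        "Conversation so far:\n" + "\n".join(conversation) + "\n\n"
        f"User query: {query}\n"
        "Which entity does the user refer to? Answer with its number."
    )

# Example: the user says "call that one" while a listing is on screen.
entities = [
    OnScreenEntity(1, "business_name", "Joe's Pizza"),
    OnScreenEntity(2, "phone_number", "(555) 010-3456"),
    OnScreenEntity(3, "address", "123 Main St"),
]
conversation = [
    "User: find a pizza place nearby",
    "Assistant: Here is one near you.",
]
print(build_prompt(entities, conversation, "call that one"))
```

Framed this way, reference resolution becomes an ordinary language-modeling task: the model reads the serialized screen and conversation and names the entity the user meant, which is what lets a relatively small model compete with a much larger general-purpose one on these queries.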