One potential reason for Apple's generative AI struggles may be that it prioritizes user privacy far more than the likes of OpenAI and Google. Apple doesn't gather any user data to train its large language models, or LLMs (though it has trained its models on freely available text from the web), and relies heavily on synthetic data for features that generate AI text from prompts or rework existing writing.
In basic terms, the idea is that AI-generated, synthetic text will be compared against a selection of actual user writing stored on Apple devices, with several layers of protection in place to prevent individual users from being identified or any personal correspondence from being sent to Apple. The approach essentially grades each synthetic sample by how closely it matches real writing, and only the aggregated grades make their way back to Apple.
All of this information is encrypted as it's transferred, and comparisons are only made on devices where users have opted into Device Analytics (the option can be found under Privacy & Security > Analytics & Improvements in Settings on iOS, for example). Apple never knows which AI text sample was picked by an individual device, only which samples rank best across all the devices polled.
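The mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not Apple's actual code: the word-overlap similarity, the `epsilon` parameter, and the function names are all assumptions standing in for whatever matching and privatization Apple really uses. The key idea it demonstrates is randomized response, a standard local differential privacy technique: each device reports its true pick only with some probability, so no single report can be trusted, while the aggregate counts still reveal which samples rank best.

```python
import math
import random

def local_pick(synthetic_samples, local_text, epsilon=2.0):
    """Device-side step (hypothetical sketch): pick the synthetic sample
    that best matches local writing, then privatize the answer."""
    # Toy similarity: shared-word count, a stand-in for a real embedding match.
    def score(sample):
        return len(set(sample.split()) & set(local_text.split()))

    k = len(synthetic_samples)
    true_pick = max(range(k), key=lambda i: score(synthetic_samples[i]))

    # k-ary randomized response: report the truth with probability p,
    # otherwise a uniformly random index. Smaller epsilon = more noise.
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return true_pick
    return random.randrange(k)

def aggregate(reports, k):
    """Server side: only these aggregate tallies are ever visible,
    never any device's raw text or individual (trusted) choice."""
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return counts
```

Because each individual report may be a lie, Apple (in this sketch) learns nothing reliable about any one device; only when many reports are tallied does the true ranking emerge from the noise.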
Genmoji and other tools
Apple will test its AI outputs against user data without looking too closely. Credit: Apple
A simpler version of the same approach is already being used by Apple to power its Genmoji AI feature, where you can magic up an octopus on a surfboard or a cowboy wearing headphones. Apple aggregates data from multiple devices to see which prompts are proving popular, while applying safeguards to ensure unique, individual requests aren't seen or tied to specific users or devices.
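One common safeguard matching this description is a minimum-count threshold: a prompt is only surfaced once enough distinct devices have reported it, so a one-off personal request can never appear in the results. The sketch below is an assumption about how such a filter might work; the `MIN_DEVICES` cutoff and function name are invented for illustration, and Apple's actual thresholds and pipeline are not public.

```python
from collections import Counter

MIN_DEVICES = 100  # assumed threshold; Apple's real cutoff is not public

def popular_prompts(device_reports, min_devices=MIN_DEVICES):
    """Hypothetical aggregation step: tally prompts across devices and
    drop anything reported by fewer than min_devices distinct devices."""
    counts = Counter()
    for prompts in device_reports:
        # Count each prompt once per device so one device can't inflate a tally.
        counts.update(set(prompts))
    return {p: n for p, n in counts.items() if n >= min_devices}
```

In this model, "octopus on a surfboard" shows up in the popularity stats only because thousands of devices asked for it, while a uniquely identifying request reported by a single device is filtered out before anyone sees it.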
Similar techniques will soon be used in other Apple Intelligence features, Apple says. Those features will include Image Playground, Image Wand, Memories Creation, Writing Tools, and Visual Intelligence, which have all been among the first Apple AI capabilities to actually make it out to devices.
Meanwhile, Apple's rivals in the AI space aren't showing any signs of slowing down—and have fewer scruples about using text written by their users to train their AI models further. In recent days we've seen Microsoft push out a range of updates for Copilot (including Copilot Vision and file search), Google add video generation to Gemini, and OpenAI upgrade the memory capabilities of ChatGPT.