“Foundation”: A Safer AI Model by Apple
A leak revealed Apple’s proprietary artificial intelligence model, showcasing the company’s advancements in the field.
The American tech giant’s “Foundation” model is regarded as the most secure by design and as highly competitive with both Meta’s models and GPT-4.
Numerous headlines had previously claimed that Apple was falling behind in the AI race because it lacked its own dedicated model.
The company itself sparked that debate by showcasing “Siri” integrated with “ChatGPT” at its latest annual developer conference.
This led to speculation that OpenAI was fully powering “Apple Intelligence,” but that is not the case.
“Apple Intelligence” is a broad marketing term that covers a range of new AI features, such as writing assistance, text generation and summarization, and image creation, among many others.
All of these features are powered by “Foundation,” as explained in an academic research paper authored by more than 150 Apple employees who evaluated the model.
Smarter Than You Think
In a human evaluation, 1,393 prompts were submitted to the “Apple Foundation” model and to competing models.
The results showed that Apple lagged slightly behind “GPT-4” but outperformed “Mistral” and “GPT-3.5” more than 50% of the time.
Benchmarks indicate that “Apple Intelligence” summarizes text just as effectively whether the model runs on-device or in the cloud.
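Apple has not published its evaluation code, but the head-to-head framing above (one model “outperforming” another more than 50% of the time) can be illustrated with a minimal sketch. The `win_rate` function and the sample judgments below are hypothetical stand-ins, not Apple’s actual tooling or data.

```python
# Minimal sketch of tallying pairwise win rates from a human evaluation.
# Labels and prompt counts are illustrative only.
from collections import Counter

def win_rate(judgments: list[str]) -> float:
    """Fraction of decided head-to-head comparisons won by model A.

    `judgments` holds one label per prompt: "A" if evaluators preferred
    model A's response, "B" if they preferred model B's, "tie" otherwise.
    """
    counts = Counter(judgments)
    decided = counts["A"] + counts["B"]
    return counts["A"] / decided if decided else 0.0

# Hypothetical example: model A preferred on 3 of 5 decided prompts (60%).
print(win_rate(["A", "B", "A", "tie", "A", "B"]))  # 0.6
```

A win rate above 0.5 is what a claim like “outperformed Mistral and GPT-3.5 more than 50% of the time” boils down to.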
More Responsible and Secure
When it comes to avoiding the generation of discriminatory, hateful, exclusionary, harmful, sexual, illegal, or violent content, the “Apple Foundation” model is by far the most secure compared to its competitors.
In nine out of ten tests, the Apple Foundation model’s output was judged safer than the competitors’ more than 50% of the time.
Apple meticulously filtered harmful content from its training data, using inference tools to remove inappropriate language and cleanse the dataset.
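The paper does not detail those inference tools, but the kind of data cleansing described can be sketched as follows; the `is_harmful` scorer, the blocklist, and the threshold are assumptions introduced for illustration, not Apple’s actual pipeline.

```python
# Illustrative sketch of cleansing a training corpus with a safety scorer.
# `is_harmful` is a hypothetical stand-in for a real safety classifier.
from typing import Callable, Iterable, Iterator

def cleanse(
    documents: Iterable[str],
    is_harmful: Callable[[str], float],
    threshold: float = 0.5,
) -> Iterator[str]:
    """Yield only documents whose harmfulness score stays below the threshold."""
    for doc in documents:
        if is_harmful(doc) < threshold:
            yield doc

# Toy usage: a keyword-based scorer standing in for a real classifier.
BLOCKLIST = {"slur", "explicit"}  # placeholder terms, not Apple's list
score = lambda text: 1.0 if any(w in text.lower() for w in BLOCKLIST) else 0.0
clean = list(cleanse(["a harmless sentence", "an explicit sentence"], score))
print(clean)  # ['a harmless sentence']
```

In practice, a production pipeline would use a trained classifier rather than a keyword list, but the principle is the same: documents flagged as harmful are dropped before the model ever trains on them.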