AI

AI Is Apple’s Best Shot at Getting You to Upgrade Your iPhone

This trend bears out in secondary-market data: shipments of used smartphones increased nearly 10 percent in 2023, to 309.4 million units from 282.6 million the year prior, according to research firm IDC. For a lot of people, a good phone really is just good enough.

Apple is also selling privacy as part of its generative AI package, saying that Apple Intelligence “is integrated into the core of your iPhone, iPad, and Mac through on-device processing.” Apple’s AI tools use Apple-developed large language models, instead of relying on another entity’s models or a patchwork of LLMs, as confirmed by…
Read More
Apple’s Biggest AI Challenge? Making It Behave

Giannandrea said that Apple had focused on reducing hallucinations in its models partly by using curated data. “We have put considerable energy into training these models very carefully,” he said. “So we're pretty confident that we're applying this technology responsibly.”

That training-wheels approach to AI applies across Apple’s offering. If it works as promised, it should mean that Apple Intelligence is less prone to fabricate or suggest something inappropriate. In its blog post, Apple claimed that testers found its models more useful and less harmful more often than competing on-device models from OpenAI, Microsoft, and Google. "We're not taking this…
Read More
Code generation using Code Llama 70B and Mixtral 8x7B on Amazon SageMaker | Amazon Web Services

In the ever-evolving landscape of machine learning and artificial intelligence (AI), large language models (LLMs) have emerged as powerful tools for a wide range of natural language processing (NLP) tasks, including code generation. Among these cutting-edge models, Code Llama 70B stands out as a true heavyweight, boasting an impressive 70 billion parameters. Developed by Meta and now available on Amazon SageMaker, this state-of-the-art LLM promises to change the way developers and data scientists approach coding tasks.

What are Code Llama 70B and Mixtral 8x7B? Code Llama 70B is a variant of the Code Llama foundation model (FM), a fine-tuned version…
Read More
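As a rough illustration of what calling such a model on SageMaker looks like, the sketch below builds a text-generation request and shows how it would be sent with boto3's `invoke_endpoint`. The endpoint name and the exact payload schema (`inputs`/`parameters`) are assumptions based on common SageMaker text-generation containers; check the model's deployment documentation for the real format.

```python
# Minimal sketch: assembling a code-generation request for a SageMaker
# endpoint. The payload shape here is an assumption, not Meta's or AWS's
# documented schema for this specific model.
import json


def build_payload(prompt: str, max_new_tokens: int = 256, temperature: float = 0.2) -> str:
    """Assemble a text-generation request body as a JSON string."""
    return json.dumps({
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    })


# Hypothetical usage (requires AWS credentials and a deployed endpoint):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="code-llama-70b",      # assumed endpoint name
#     ContentType="application/json",
#     Body=build_payload("Write a Python function that reverses a string."),
# )
# print(response["Body"].read().decode())
```

Keeping payload construction separate from the network call makes the request shape easy to inspect and test before spending endpoint time.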
The bottleneck in LLMs is finding reasoning errors, not fixing them

The BIG-bench-mistake UI from the repo (link inside!)

LLMs have taken the field of natural language processing by storm. With the right prompting, LLMs can solve all sorts of tasks in a zero- or few-shot way, demonstrating impressive capabilities. However, a key weakness of current LLMs seems to be self-correction - the ability to find and fix errors in their own outputs. A new paper by researchers at Google and the University of Cambridge digs into this issue of LLM self-correction. The authors divide the self-correction process into two distinct components:

- Mistake finding, which refers to identifying errors in an LLM's output
- Output…
Read More
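The two-stage split described above can be sketched as a simple pipeline: one prompt asks the model to locate the first bad step, a second prompt regenerates from that point. The `call_llm` function below is a hypothetical stand-in for any LLM client (stubbed here so the example runs); the prompt wording is illustrative, not the paper's.

```python
# Sketch of the two-stage self-correction loop: (1) mistake finding,
# (2) output correction. `call_llm` is a stub standing in for a real model.

def call_llm(prompt: str) -> str:
    # Stub responses; a real implementation would query an LLM.
    if "Find the first mistake" in prompt:
        return "Step 2"
    return "corrected trace"


def find_mistake(trace_steps):
    """Stage 1: ask the model to locate the first incorrect reasoning step."""
    numbered = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(trace_steps))
    reply = call_llm(
        "Find the first mistake in this reasoning:\n"
        f"{numbered}\nAnswer with 'Step N' or 'No mistake'."
    )
    if reply.startswith("Step"):
        return int(reply.split()[1]) - 1  # zero-based index of the bad step
    return None


def correct_output(trace_steps, bad_index):
    """Stage 2: regenerate the trace from the flagged step onward."""
    prefix = "\n".join(trace_steps[:bad_index])
    return call_llm("Continue this reasoning, fixing the error:\n" + prefix)


trace = ["16 - 3 = 13", "13 * 2 = 27", "answer: 27"]
bad = find_mistake(trace)        # flags the step where 13 * 2 went wrong
if bad is not None:
    fixed = correct_output(trace, bad)
```

The point of the decomposition is that the two stages can be evaluated independently; the paper's finding is that stage 1 (mistake finding), not stage 2, is where current models struggle.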