Apple to fix AI tools after being accused of ‘misleading’ headlines by the BBC
Apple has announced plans to update its AI-powered notification feature following a formal complaint from the BBC regarding inaccurately summarised news headlines.
The BBC complained about the tech giant’s use of its Apple Intelligence feature last month after several high-profile errors.
These included a false claim that Rafael Nadal had come out as gay and a premature notification declaring Luke Littler the winner of the World Darts Championship final hours before the match began.
In another case, Apple Intelligence inaccurately summarised a headline about murder suspect Luigi Mangione, reporting that he had shot himself.
Apple responded by pledging to roll out a software update “in the coming weeks” to address the inaccuracies.
The firm also emphasised that the feature is in beta and remains optional for users.
In its statement, Apple said: “We are continuously making improvements with the help of user feedback.”
The BBC, however, remained critical and called for urgent action.
“These AI summarisations by Apple do not reflect – and in some cases, completely contradict – the original BBC content”, said the broadcaster in a statement. “The accuracy of our news is essential in maintaining trust”.
The controversy highlighted the challenges of integrating generative AI into consumer products, a trend that leading tech firms across the board have rapidly embraced.
Apple isn’t alone in facing challenges in implementing generative AI features in its products: Google’s AI tools, including its search engine overviews and image generation features, have also been criticised for inaccuracies.
The tech giant faced mounting criticism after debuting its ‘AI Overview’ feature in Google Search, when queries returned inaccurate results.
For example, when asked how many US presidents have been Muslim, AI Overview falsely responded: “The United States has had one Muslim president, Barack Hussein Obama.”
Google was also criticised for offensive historical image generation, about which co-founder Sergey Brin admitted: “We definitely messed up.”