Friday, December 20, 2024

Apple’s AI missteps: False headlines shake credibility of news giants


Apple faces mounting pressure to withdraw its AI notification-summary feature after it falsely attributed misleading headlines to major news outlets, sparking media outrage.

Apple’s ambitious foray into generative AI has landed the tech giant in hot water after its new notification summary feature produced a slew of misleading headlines. The most controversial instance falsely suggested that Luigi Mangione, a murder suspect in a high-profile US case, had taken his own life—a headline wrongly attributed to the BBC.

The BBC swiftly lodged a complaint with Apple, accusing its AI-driven service of damaging the outlet’s credibility. The feature, part of Apple Intelligence, uses artificial intelligence to summarise and group notifications for users. However, the AI-generated headline was not only inaccurate but also alarming, raising serious concerns about the reliability of this technology in handling sensitive news stories.

The incident has drawn criticism from media watchdog Reporters Without Borders (RSF), which called on Apple to discontinue the feature entirely. Vincent Berthier, RSF’s technology and journalism head, warned, “Generative AI is not mature enough to produce reliable public information. This technology risks undermining trust in media outlets and threatens the public’s access to accurate news.”

Despite the backlash, Apple has yet to publicly address the issue. Meanwhile, Luigi Mangione, who faces first-degree murder charges for the killing of healthcare CEO Brian Thompson, remains alive. The BBC stated it had reached out to Apple to resolve the problem but has not confirmed whether it received a response.


Apple Intelligence launched in the UK last week, aiming to streamline notifications by grouping related updates. While intended to reduce distractions, the feature has misfired spectacularly, tarnishing not just the BBC but other major publications. The New York Times also found itself in a similar predicament when the AI incorrectly summarised a story about Israeli Prime Minister Benjamin Netanyahu. Instead of reporting on the International Criminal Court’s arrest warrant, the notification declared, “Netanyahu arrested.”

Ken Schwencke, a journalist with ProPublica, shared a screenshot of the erroneous New York Times notification, corroborating its authenticity. Like the BBC, the New York Times has declined to comment on the matter, leaving questions about the broader implications of this AI technology unanswered.

Apple’s AI issues aren’t confined to news summaries. Reports have surfaced of inaccuracies in email and text message summaries generated by the system. While Apple provides users with the option to report problematic notifications, it has not disclosed the volume of such complaints or the steps taken to address them.

The controversy highlights the fine line between technological innovation and ethical responsibility. By attempting to use AI to streamline information consumption, Apple may have inadvertently compromised its users’ trust and the reputations of respected news organisations.

As the outcry grows louder, all eyes are on Apple to acknowledge the problem and take corrective measures. Will the tech giant step up to the challenge, or will this incident mark a major setback for its AI ambitions?
