Articles

Apple Reportedly Developing Gemini Integration for Apple Intelligence

Apple is reportedly exploring the integration of additional third-party artificial intelligence (AI) services into its Apple Intelligence framework. With the iOS 18.2 update in December 2024, the company allowed Siri on select iPhone, iPad, and Mac devices to hand off certain queries to OpenAI’s ChatGPT, the first system-wide third-party AI integration in Apple’s ecosystem. Now, a new leak suggests that Apple may expand this functionality by bringing Google’s AI models into the mix.

According to MacRumors analyst Aaron Perris, Apple appears to be working on integrating Google’s Gemini AI models into its operating system. Perris shared a screenshot on X (formerly Twitter), showing lines of code from the recently released iOS 18.4 beta that hint at this potential partnership. If true, this move would signify Apple’s increasing openness to leveraging multiple AI providers to enhance its devices’ capabilities, giving users more flexibility in choosing their preferred AI assistant.

The leaked code reportedly displays two options under the “Third-party model” section within Apple Intelligence settings. While OpenAI’s ChatGPT was already listed, a new entry for Google has now appeared. However, the leak does not specify which particular Gemini models Apple intends to integrate or what kind of functionality they will offer. It remains unclear whether this integration would be limited to select Siri queries, similar to how ChatGPT is currently utilized, or if it would expand into more system-wide applications.

If Apple follows through with this integration, it could mark a major shift in how AI services are incorporated into its ecosystem. Rather than relying on a single AI provider, Apple seems to be adopting a more flexible approach that allows users to choose from multiple AI models based on their preferences. As iOS 18.4 continues its beta testing phase, more details about this potential collaboration may emerge, offering a clearer picture of how Apple plans to implement third-party AI services in future updates.

EU Defends Digital Markets Act, Insists It’s Not Targeting U.S. Tech Giants

European Union officials have rejected accusations that their new Digital Markets Act (DMA) is aimed at U.S. tech giants. In a joint letter to U.S. congressmen Jim Jordan and Scott Fitzgerald, EU antitrust chief Teresa Ribera and EU tech chief Henna Virkkunen emphasized that the DMA is designed to keep digital markets open and applies to all companies meeting the criteria for being considered “gatekeepers,” regardless of where they are headquartered.

Ribera and Virkkunen were responding to concerns raised by U.S. lawmakers about the DMA’s potential impact on U.S. firms. The letter, dated March 6, clarified that the law does not single out U.S. companies but applies to any firm that meets the EU’s established gatekeeper definition.

The EU officials also defended the DMA against criticism that it could stifle innovation. They argued that the act aims to prevent unfair practices by dominant players, thus fostering a more open and competitive digital market in which new players can emerge and innovate. Ribera and Virkkunen also noted that similar concerns over monopolistic behavior had prompted antitrust investigations and legal actions against companies like Google, Amazon, Apple, and Meta in the U.S. itself, including cases launched under the Trump administration.

In response to claims that EU fines on American tech firms resemble a European tax, the EU officials emphasized that the primary goal of enforcement is to ensure compliance with the law, not to impose punitive measures. They pointed out that sanctions, which are a standard feature of both EU and U.S. regulations, are essential for ensuring effective enforcement.

Google Reports 250 Complaints Over AI-Generated Deepfake Terrorism Content to Australian Regulator

Google has informed Australian regulators that it received over 250 complaints globally between April 2023 and February 2024 alleging that its AI technology, specifically the Gemini model, was being used to create deepfake terrorism content. The company also reported dozens of complaints about Gemini being used to generate child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech companies are required to periodically report their harm minimization efforts to the eSafety Commission, or risk facing fines. This reporting period marks the first disclosure of such data, which regulators have described as a “world-first insight” into how AI is being exploited for harmful and illegal purposes.

The Australian eSafety Commission emphasized the importance of companies developing AI products to implement safeguards to prevent the generation of harmful material. eSafety Commissioner Julie Inman Grant stated that the findings highlight the critical need for effective protective measures.

According to Google’s report, it received 258 user complaints about AI-generated deepfake terrorist or extremist content created with Gemini, along with 86 reports concerning AI-generated child exploitation or abuse material. However, the company did not specify how many of these complaints were verified.

A Google spokesperson confirmed that the company does not allow the generation or distribution of illegal content, including material related to terrorism, child exploitation, or other abuses. Google also noted that the number of reports provided to eSafety represents the total global volume of complaints, not confirmed policy violations.

While Google uses a system called “hash-matching” to identify and remove child abuse content generated with Gemini, the company did not apply the same system to detect terrorist or extremist material. This lack of a comparable safeguard for violent content has raised concerns among regulators.
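In its simplest form, hash-matching means keeping a database of digests of known-harmful items and checking the digest of new content against it. The sketch below is purely illustrative, not Google’s actual system, and uses exact cryptographic hashing for clarity; production systems typically rely on perceptual hashes that tolerate small edits to the content.

```python
import hashlib

# Illustrative known-hash set: in a real deployment this would be a large
# database of digests of previously identified harmful material.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-harmful-item").hexdigest(),
}

def matches_known_content(data: bytes) -> bool:
    """Return True if the content's SHA-256 digest is in the known-hash set."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES
```

Because a cryptographic hash changes completely when even one byte of the input changes, exact matching only catches verbatim copies; that limitation is why perceptual-hashing schemes exist for media content.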

The Australian eSafety Commission has previously fined Telegram and Twitter (now X) for their inadequate reporting practices, with X losing an appeal over a fine of A$610,500 ($382,000). Telegram is also preparing to challenge its fine.