Articles

Google Launches Gemini 2.0: AI Model With Enhanced Reasoning and Flash Thinking Capabilities

Google has unveiled its latest artificial intelligence model, Gemini 2.0 Flash Thinking, a cutting-edge large language model (LLM) that focuses on advanced reasoning capabilities. This new addition to the Gemini 2.0 family is designed to tackle more complex tasks by adjusting its inference time to allow deeper analysis and problem-solving. According to Google, the AI model excels in addressing intricate challenges related to reasoning, mathematics, and coding, demonstrating enhanced performance despite longer processing times.

The introduction of the Gemini 2.0 Flash Thinking AI model signifies a major leap in Google’s AI development. By increasing the time the model spends on reasoning, it can work through problems more thoroughly, making it especially effective in areas that demand precision and depth. While spending more time on inference may sound like a performance trade-off, Google says the model still delivers results faster than its predecessors thanks to optimized efficiency.

Jeff Dean, the Chief Scientist at Google DeepMind, shared insights about the new model on X (formerly Twitter), emphasizing that the Gemini 2.0 Flash Thinking model is “trained to use thoughts to strengthen its reasoning.” This approach allows the AI to simulate more human-like cognitive processes, enhancing its ability to tackle multifaceted problems with higher accuracy. The advanced reasoning features are expected to be a game-changer in fields such as scientific research, software development, and problem-solving in complex systems.

Developers eager to explore the capabilities of the Gemini 2.0 Flash Thinking model can now access it via the Google AI Studio, with integration available through the Gemini API. This opens up opportunities for building more sophisticated AI-driven applications, making the latest model an important tool in the arsenal of developers working on cutting-edge AI solutions.
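As a rough illustration of that workflow, the sketch below calls the model through the Python SDK. Both the `google-generativeai` package usage and the model id `gemini-2.0-flash-thinking-exp` are assumptions here, not confirmed by the article; check Google AI Studio for the current model name before relying on it.

```python
# Minimal sketch of querying the Flash Thinking model via the Gemini API.
# Assumptions: the google-generativeai SDK (pip install google-generativeai)
# and the experimental model id below; verify both in Google AI Studio.
import os

MODEL_ID = "gemini-2.0-flash-thinking-exp"  # assumed id, subject to change

def ask_flash_thinking(prompt: str) -> str:
    """Send a prompt to the reasoning-focused model and return its reply text."""
    import google.generativeai as genai  # imported lazily so the sketch parses without the SDK
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # key from AI Studio
    model = genai.GenerativeModel(MODEL_ID)
    return model.generate_content(prompt).text

if __name__ == "__main__" and "GEMINI_API_KEY" in os.environ:
    print(ask_flash_thinking("How many primes lie between 10 and 30?"))
```

The function is only invoked when an API key is present in the environment, so the sketch can sit in a codebase without failing at import time.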

Apple to Pay $95 Million to Settle Siri Privacy Lawsuit

Apple Inc. has agreed to pay $95 million to settle a class-action lawsuit alleging its Siri voice assistant violated users’ privacy by unintentionally recording private conversations and sharing them with third parties, such as advertisers.

The preliminary settlement was filed on Tuesday in the federal court in Oakland, California, and awaits approval from U.S. District Judge Jeffrey White. Plaintiffs in the case claimed that Siri routinely recorded conversations without users’ consent when triggered unintentionally by “hot words” like “Hey, Siri.”

Allegations and Examples

Users reported that these unauthorized recordings led to targeted ads. For instance, two plaintiffs said discussions about Air Jordan sneakers and Olive Garden resulted in related advertisements. Another claimed to have received ads for a surgical treatment after discussing it privately with a doctor.

The class-action period covers Siri-enabled devices purchased between September 17, 2014, and December 31, 2024, beginning with the rollout of the “Hey, Siri” feature.

Settlement Details

Tens of millions of users are eligible for compensation, with potential payouts of up to $20 per device, including iPhones and Apple Watches. Apple has denied any wrongdoing but agreed to the settlement to resolve the claims.

The plaintiffs’ lawyers may request up to $28.5 million in legal fees and $1.1 million for expenses from the settlement fund.

Apple has not yet commented on the settlement.

Context and Broader Implications

The $95 million settlement represents about nine hours of profit for Apple, which reported a net income of $93.74 billion in its most recent fiscal year.
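The "about nine hours" figure follows directly from the reported annual net income; a quick back-of-the-envelope check:

```python
# Sanity-check of the "about nine hours of profit" claim, using the
# $93.74 billion annual net income figure reported above.
annual_net_income = 93.74e9            # USD, Apple's most recent fiscal year
settlement = 95e6                      # USD, Siri privacy settlement
hourly_profit = annual_net_income / (365 * 24)   # roughly $10.7 million/hour
hours = settlement / hourly_profit
print(round(hours, 1))                 # ≈ 8.9, i.e. "about nine hours"
```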

This lawsuit follows a trend of scrutiny over voice-activated assistants and user privacy. A similar case is pending against Google for its Voice Assistant, filed in the same judicial district as the Apple case. Both lawsuits are being handled by the same legal teams.

The case against Apple is Lopez et al. v. Apple Inc., U.S. District Court, Northern District of California, No. 19-04577.

Android 16 Developer Preview 2 Brings Battery Optimizations and Screen-Off Fingerprint Unlock for Pixel Phones: Report

Google has rolled out the second Developer Preview of Android 16, bringing new features and enhancements for developers and early testers. Building on the initial preview released last month, this update focuses on battery optimization and introduces an exclusive feature for Pixel users. One notable addition is the ability to use the fingerprint sensor for unlocking the device even when the screen is turned off.

Improved Fingerprint Unlock on Pixel Devices

As reported by Android Authority, Android 16 Developer Preview 2 includes a new setting called “Screen-off Fingerprint Unlock.” Until now, Pixel devices have required the screen to be awake for the fingerprint sensor to function, forcing users to rely on features like “Always-On Display” or “Tap to Wake” to activate it. The new setting removes this requirement, letting the fingerprint sensor unlock the device directly while the screen is off.

Battery Life Enhancements

The update also reportedly improves battery efficiency, though details on the specific changes are yet to be shared by Google. The enhancements are expected to complement Android’s existing battery-saving features, providing a smoother and more energy-efficient experience for testers.

Developer Benefits and Future Expectations

The Developer Preview allows developers to explore and adapt their apps to new APIs and features, preparing them for the public release of Android 16 later this year. With battery improvements and features like screen-off fingerprint unlock, this update hints at Google’s efforts to refine the Android experience, particularly for Pixel users. Further updates and a public beta are expected in the coming months, offering more insights into what Android 16 will bring to the table.