OpenAI to Launch Sora on Android, Enhancing App’s Social Features

OpenAI is preparing to bring its popular Sora app to Android devices. Since its release on iOS, Sora has quickly gained traction, letting users generate AI-powered videos featuring themselves and others and share them on a global feed. The app also gives users without a ChatGPT subscription a way to try OpenAI’s Sora 2 model, making it a notable entry point for AI video creation. The upcoming Android release will open these features to a much larger audience.

Android Launch Details

Bill Peebles, Head of Sora at OpenAI, confirmed the Android version in a post on X, formerly known as Twitter, stating that “the Android version of Sora is actually coming soon.” The announcement suggests that the app could be available within the next few weeks, giving Android users a chance to explore the AI video creation platform for the first time.

Invite-Only Access Likely

Even after the Android launch, Sora may continue to operate on an invite-only basis, requiring users to have an invite code to access the app. This approach mirrors the initial iOS rollout, which limited availability to certain regions and users, helping OpenAI manage demand while fine-tuning the platform.

Early Success on iOS

Sora achieved remarkable success on iOS, reaching one million downloads within its first five days. It hit that milestone despite invite-only access and availability restricted to North America, outpacing ChatGPT’s initial growth. The Android launch is expected to accelerate adoption further, letting more users experiment with AI-generated video content and the social features that have made Sora an instant hit.

Tipster Reveals Samsung Developing AI-Based Image-to-Video Technology

Samsung is reportedly developing an innovative AI-powered feature that can transform still images into short videos. According to a tipster, this new technology will enable users to convert any photo from their gallery into a few-second-long video clip. Although detailed information about how the feature will work remains scarce, it is expected to be integrated into Samsung’s Galaxy AI suite and may debut alongside the upcoming One UI 8.0 software update.

The tip about Samsung’s image-to-video capability came from PandaFlash on X (formerly Twitter), who revealed that the feature aims to generate brief videos using just a single image as input. This suggests an advancement beyond simple photo animation, potentially allowing for more dynamic and lifelike video content. However, specifics such as the AI model behind this tool or the range of effects it can produce have yet to be disclosed.

This development closely follows similar announcements from other brands, including the Honor 400 series, which introduced an AI feature capable of creating up to five-second videos from images. TikTok also recently launched “AI Alive,” a tool that animates photos in creative ways. Both of these features primarily enhance images by adding motion, rather than generating fully new video content from scratch. Honor’s solution is reportedly powered by Google’s Veo 2 video generation model, leading to speculation that Samsung might leverage the same technology given its recent partnership with Google on the Galaxy S24’s Circle to Search feature.

If implemented, Samsung’s AI video generation tool would expand the multimodal capabilities of Galaxy AI, which already supports generating images from text or image prompts. Introducing video generation would mark a significant step forward for the platform, enabling more immersive content creation directly from users’ photo libraries. The feature is anticipated to arrive as part of the One UI 8.0 update, adding fresh AI-driven creativity tools to Samsung’s flagship ecosystem.

Appy Pie Unveils PixelForge and Vibeo AI Models for Image and Video Creation

Appy Pie, a leading Indian no-code platform specializing in artificial intelligence (AI), has introduced two groundbreaking AI models: PixelForge and Vibeo. These multimodal large language models (LLMs) are designed to revolutionize how images and videos are created. PixelForge, as a text-to-image generation model, enables users to transform text prompts into high-resolution, photorealistic, and artistic visuals. On the other hand, Vibeo takes things a step further by generating videos from text or image inputs, offering even greater versatility in multimedia creation. These models are being made available to both individual users and businesses through Appy Pie’s comprehensive Appy Pie Design platform, which also supports the development of mobile apps, websites, and AI-driven chatbots.

The new models, PixelForge and Vibeo, are the result of Appy Pie’s in-house development, marking a significant departure from their earlier text-focused AI tool, Flawless Text. The company asserts that these two new models are more advanced, catering not just to creators but also to marketing professionals and enterprises that require dynamic and customizable visual content. PixelForge stands out for its ability to generate a wide array of image styles, making it a versatile tool for any project, whether artistic or professional. Meanwhile, Vibeo offers a compelling solution for those looking to create videos with just a simple text or image input.

PixelForge’s core feature is its ability to generate high-quality images from text descriptions. It supports a diverse range of visual styles and can cater to various compositions and use cases, offering something for everyone, from graphic designers to content creators. While the company has highlighted similarities with popular models like OpenAI’s DALL-E and Stability AI’s Stable Diffusion, it has yet to release detailed benchmark data to support these claims. However, Appy Pie promises that PixelForge is optimized for a seamless user experience with a focus on both speed and creativity. Despite the lack of technical details, such as resolution outputs and rate limits, PixelForge is poised to be an invaluable tool in the growing field of AI-powered content creation.

Vibeo, the video generation model, extends these capabilities by letting users generate videos from either textual prompts or reference images. The model is designed to prioritize realism, so that generated videos match the user’s expectations and convey the intended mood and motion. With Vibeo, users can create dynamic video content with minimal effort, making it suitable for everything from marketing materials to social media content. As Appy Pie continues to innovate in the AI space, these models could reshape multimedia content production, giving users the tools to produce high-quality images and videos from a few simple inputs.