Meta Unveils AI Model to Boost Metaverse Realism
Meta announced the release of a new artificial intelligence model called Meta Motivo, designed to control the movements of human-like digital agents and enhance the overall Metaverse experience. The company has invested heavily in AI, augmented reality, and Metaverse technologies, projecting a record capital expenditure of $37 billion to $40 billion for 2024.
Meta Motivo aims to tackle challenges in avatar body control, allowing digital characters to move more realistically. This technology could enable lifelike non-playable characters (NPCs), democratize character animation, and create innovative immersive experiences. According to a company statement, Meta envisions this as a significant step toward “fully embodied agents” within the Metaverse.
In addition, Meta introduced the Large Concept Model (LCM), a novel approach to language modeling that focuses on predicting high-level concepts rather than individual tokens. Unlike traditional large language models (LLMs), the LCM uses a multimodal and multilingual embedding space to represent and predict entire sentences or ideas. This innovation is intended to decouple reasoning processes from language representation.
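The decoupling idea can be made concrete with a toy sketch. This is not Meta's implementation: the sentences, embedding vectors, and the `toy_next_concept` predictor below are all hypothetical stand-ins. The point is that prediction happens on whole-sentence vectors in a shared embedding space, so the same "reasoning" step works regardless of the surface language, and a sentence is recovered only at decode time:

```python
import numpy as np

# Hypothetical sentence -> embedding table. A real concept model would use a
# learned multimodal, multilingual encoder; these 2-D vectors are made up.
sentence_embeddings = {
    "Il pleut.":           np.array([0.9, 0.1]),  # French: "It is raining."
    "Take an umbrella.":   np.array([0.8, 0.3]),
    "The sun is shining.": np.array([0.1, 0.9]),
}

def nearest_sentence(vec):
    """Decode a predicted concept vector to the closest known sentence
    by cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sentence_embeddings,
               key=lambda s: cos(sentence_embeddings[s], vec))

def toy_next_concept(vec):
    """Stand-in for a learned predictor: maps the current concept vector
    directly to the next one, with no tokens involved."""
    return vec + np.array([-0.1, 0.2])

current = sentence_embeddings["Il pleut."]
print(nearest_sentence(toy_next_concept(current)))  # prints "Take an umbrella."
```

Note that the input sentence is French and the decoded continuation is English: because prediction operates on the shared embedding, language choice is a property of encoding and decoding, not of the reasoning step.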
Meta also released Video Seal, an AI tool that embeds hidden watermarks into videos. These marks are invisible to the human eye but provide traceability, which could be crucial for combating content misuse and ensuring accountability in media.
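Video Seal's actual scheme is a neural watermark designed to survive editing; the classic least-significant-bit sketch below is only an illustration of the general idea, that a payload can be hidden in a frame imperceptibly and recovered later:

```python
import numpy as np

def embed_watermark(frame, bits):
    """Write each payload bit into the least-significant bit of one pixel.
    Illustrative only; real video watermarks use learned, edit-robust codes."""
    out = frame.copy()
    flat = out.reshape(-1)
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear LSB, then set it to the bit
    return out

def extract_watermark(frame, n):
    """Read the first n least-significant bits back out."""
    return [int(v & 1) for v in frame.reshape(-1)[:n]]

# Tiny grayscale "frame" and an 8-bit payload.
frame = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_watermark(frame, payload)
assert extract_watermark(marked, len(payload)) == payload
# No pixel changes by more than 1 intensity level, far below what the eye sees.
assert np.max(np.abs(marked.astype(int) - frame.astype(int))) <= 1
```

The trade-off a production system must solve, and the reason Video Seal is a research contribution rather than a one-liner, is keeping the mark recoverable after compression, cropping, and re-encoding, which this naive bit-level version would not survive.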
The company has adopted an open approach, offering many of its AI models for free to developers. Meta believes that fostering external innovation will ultimately improve tools and services across its platforms, further solidifying its position in the Metaverse and AI ecosystems.