Meta’s Motivo AI model could deliver more lifelike digital avatars: Here’s how it works


Meta is researching and developing new AI models with potential uses in Web3 applications. Facebook’s parent company has released Meta Motivo, an AI model that can control the physical movements of digital avatars. The newly unveiled model is expected to offer more lifelike motion and interaction for avatars in the metaverse ecosystem, improving the overall experience.

The company claims that Motivo is a ‘first of its kind behavioral foundation model’. Meta says the model can enable virtual human avatars to complete a variety of complex, end-to-end tasks while making virtual physics in the metaverse feel more intuitive.

Meta trained Motivo with unsupervised reinforcement learning, which lets the model perform a range of tasks in complex environments. In a blog post, the company said it deployed a new algorithm to train the model on an unlabeled dataset of motions, helping it capture human-like behaviours while retaining zero-shot inference capabilities.

Announcing the launch of Motivo on X, Meta shared a short video demo showing what integrating the model with virtual avatars looks like. The clip shows a humanoid avatar performing dance moves and kicks using whole-body movements. Meta said it is using ‘unsupervised reinforcement learning’ to trigger these ‘human-like behaviours’ in virtual avatars, as part of its efforts to make them more realistic.

The company says Motivo can solve a range of whole-body control tasks, including motion tracking, reaching target poses, and reward optimization, without any additional training.
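Meta has not published its API in this article, but the idea behind such zero-shot control can be illustrated with a rough, purely hypothetical sketch: one frozen pretrained policy is conditioned on a task embedding, so different whole-body tasks become different “prompts” rather than separate retrained models. All names below (FrozenPolicy, embed_task, the embedding layout) are illustrative assumptions, not Meta’s actual implementation.

```python
# Toy illustration (NOT Meta's real API): a single frozen policy
# action = f(state, z) handles several task families zero-shot,
# where z is an embedding describing the task.
import math
import random

random.seed(0)

class FrozenPolicy:
    """One pretrained policy shared across tasks; weights are never updated."""
    def __init__(self, state_dim=4, embed_dim=3, action_dim=2):
        # Fixed random weights stand in for pretrained parameters.
        self.W = [[random.uniform(-1, 1) for _ in range(state_dim + embed_dim)]
                  for _ in range(action_dim)]

    def act(self, state, z):
        x = state + z  # concatenate body state with the task embedding
        # Bounded outputs, as a humanoid controller would emit joint targets.
        return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in self.W]

def embed_task(kind, target=0.0):
    """Map a task description to an embedding z -- the 'prompt' for the policy."""
    if kind == "track_motion":
        return [1.0, 0.0, 0.0]
    if kind == "reach_pose":
        return [0.0, 1.0, target]
    if kind == "maximize_reward":
        return [0.0, 0.0, 1.0]
    raise ValueError(kind)

policy = FrozenPolicy()
state = [0.1, -0.2, 0.3, 0.0]

# The same frozen weights serve all three task families -- no retraining.
for task in ["track_motion", "reach_pose", "maximize_reward"]:
    action = policy.act(state, embed_task(task, target=0.5))
    print(task, [round(a, 3) for a in action])
```

The point of the sketch is the design choice, not the toy math: because the task lives in the embedding rather than in the weights, switching from motion tracking to pose reaching requires no additional training, which matches the zero-shot claim above.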

Reality Labs is Meta’s internal unit working on metaverse-related initiatives. It has consistently posted losses since it began reporting results as a separate segment in 2022. Despite that pattern, Zuckerberg has kept his bet on the metaverse, testing new technologies to improve the overall experience.

Earlier this year, Meta showcased a demo of Hyperscape, which turns a smartphone camera into a gateway to a photorealistic metaverse environment. The tool lets smartphones scan real-world spaces and convert them into hyperrealistic metaverse backdrops.

In June, Meta split its Reality Labs team into two divisions: one tasked with working on the metaverse-focused Quest headsets, and the other responsible for wearable hardware that Meta could launch in the future. The move was intended to consolidate the time the Reality Labs team spends developing new AI and Web3 technologies.




