ChatGPT Live Video Feature Spotted on Latest Beta Release, May Launch Soon


ChatGPT may soon be able to answer questions after looking through your smartphone’s camera. According to a report, evidence of the Live Video feature, which is part of OpenAI’s Advanced Voice mode, was spotted in the latest ChatGPT for Android beta app. The capability, first demonstrated during the AI firm’s Spring Update event in May, allows the chatbot to access the smartphone’s camera and answer questions about the user’s surroundings in real time. While the emotional voice capability was released a few months ago, the company has not yet announced a potential release date for the live video feature.

ChatGPT Live Video Feature Discovered on Latest Beta Release

A report from Android Authority details evidence of the live video feature, which was found during a teardown of the app’s Android package kit (APK). Several strings of code related to the capability were spotted in ChatGPT for Android beta version 1.2024.317.

Specifically, the live video feature is part of ChatGPT’s Advanced Voice mode, and it lets the AI chatbot process video data in real time to answer questions and interact with the user. With it, ChatGPT can look inside a user’s fridge, scan its contents, and suggest a recipe. It can also analyze the user’s expressions and try to understand their mood. In the demo, this was paired with emotional voice capabilities that allow the AI to speak in a more natural and expressive way.

According to the report, multiple strings of code related to the feature were observed. One such string reads, “Tap the camera icon to let ChatGPT see and chat about your surroundings,” which is the same description OpenAI gave for the feature during the demo.

Other strings reportedly include phrases such as “live camera” and “beta”, indicating that the feature works in real time and that the under-development capability will likely be released to beta users first.

Another set of strings reportedly advises users not to rely on the live video feature for live navigation or for decisions that could affect their health or safety.
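Since these findings come from an APK teardown, such strings sit in the app’s decoded resources rather than in any public API. As a rough illustration only, the sketch below (a hypothetical Python script with made-up file paths and resource names, not OpenAI’s or Android Authority’s actual tooling) shows how one might scan a decoded strings.xml for the phrases quoted in the report.

```python
# Hypothetical illustration: scan a decoded Android strings.xml for
# feature-related phrases like those quoted in the teardown report.
# Assumes the APK has already been decoded (e.g. with apktool) into a
# directory such as "chatgpt_decoded/"; all paths and resource names
# here are made up for the example.
import xml.etree.ElementTree as ET
from pathlib import Path

KEYWORDS = ["live camera", "beta", "camera icon", "surroundings"]

def find_feature_strings(strings_xml: Path) -> list[tuple[str, str]]:
    """Return (resource_name, text) pairs whose text mentions any keyword."""
    tree = ET.parse(strings_xml)
    hits = []
    for node in tree.getroot().iter("string"):
        text = (node.text or "").lower()
        if any(keyword in text for keyword in KEYWORDS):
            hits.append((node.get("name", "<unnamed>"), node.text or ""))
    return hits

if __name__ == "__main__":
    path = Path("chatgpt_decoded/res/values/strings.xml")  # hypothetical path
    for name, text in find_feature_strings(path):
        print(f"{name}: {text}")
```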

Although the existence of these strings does not confirm an imminent release, it is the first tangible evidence, after an eight-month delay, that the company is still working on the feature. OpenAI had earlier said the feature was being delayed in the interest of user safety.

Notably, Google DeepMind also demonstrated a similar AI vision feature at the Google I/O event in May. Part of Project Astra, this feature lets Gemini view the user’s surroundings using the device’s camera.

In the demo, Google’s AI tool could correctly identify objects, predict current weather conditions, and even remember objects previously seen in a live video session. As of now, the Mountain View-based tech giant has not said when this feature will be introduced.



