Microsoft launches ‘Corrections’, an AI feature that can detect and correct AI hallucinations


Microsoft on Tuesday launched a new artificial intelligence (AI) capability that identifies and fixes instances in which AI models generate inaccurate information. Called “Corrections,” the feature is being integrated into Azure AI Content Safety’s groundedness detection system. Since it is only available through Azure, it is aimed at the tech giant’s enterprise customers. The company is also working on other methods to reduce the incidence of AI hallucinations. Notably, the feature can also show an explanation of why a section of text was flagged as containing incorrect information.

Microsoft “Corrections” feature launched

In a blog post, the Redmond-based tech giant detailed the new feature, which it claims combats AI hallucinations, a phenomenon in which an AI model responds to a query with false information and fails to recognize its falsehood.

The feature is available through Microsoft’s Azure services. Azure AI Content Safety includes a tool called groundedness detection, which checks whether a generated response is supported by the underlying source material. While the detection tool itself can be applied in several ways to catch hallucinations, the Corrections capability follows a specific workflow.

To use Corrections, users must connect Azure’s grounding documents, which are used in document summarization and Retrieval-Augmented Generation (RAG)-based Q&A scenarios. Once connected, users can enable the feature. After that, whenever an ungrounded or incorrect sentence is detected, the feature triggers a request for correction.

Simply put, grounding documents can be understood as the reference material the AI system should follow when generating responses. This can be the source material for a query or a larger database.
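As a rough illustration, the sketch below shows how a generated answer, its grounding source, and a correction request might be packaged for a groundedness-detection call. The endpoint path, API version, and field names are assumptions made for illustration and may not match Microsoft’s actual schema; consult the Azure AI Content Safety documentation for the real request format.

```python
# Minimal sketch of a groundedness-detection request with correction enabled.
# Endpoint path, API version and field names are assumptions for illustration only.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder resource
API_KEY = "<your-content-safety-key>"                             # placeholder key

payload = {
    "domain": "Generic",
    "task": "QnA",  # or "Summarization" for document summaries
    "qna": {"query": "How many days of annual leave do employees get?"},
    "text": "Employees receive 30 days of annual leave per year.",  # model output to check
    "groundingSources": [
        "The employee handbook states that staff receive 20 days of annual leave per year."
    ],
    "correction": True,  # assumed flag asking the service to propose a rewrite
}

response = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview",
    headers={"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
print(response.json())
```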

The feature then assesses the statement against the grounding document; if it is found to be unsupported, it is filtered out. If the content is consistent with the grounding document, the feature may instead reword the sentence to ensure it is not misinterpreted.

Additionally, users have the option to enable reasoning when setting up the capability for the first time. Enabling this prompts the feature to add an explanation of why it considered the information incorrect and in need of correction.
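Putting the correction and reasoning behaviour together, the snippet below sketches how a client might consume such a response. The response shape shown here (fields such as "ungroundedDetected", "ungroundedDetails", "correctionText", and "reason") is an assumption for illustration only and may differ from the service’s real output.

```python
# Hedged sketch of handling a groundedness-detection result with correction and
# reasoning enabled; the field names in sample_response are assumed, not confirmed.
sample_response = {
    "ungroundedDetected": True,
    "ungroundedDetails": [
        {
            "text": "Employees receive 30 days of annual leave per year.",
            "reason": "The grounding source states 20 days, not 30.",
            "correctionText": "Employees receive 20 days of annual leave per year.",
        }
    ],
}

def apply_corrections(original_text: str, result: dict) -> str:
    """Replace flagged sentences with suggested corrections and print explanations."""
    if not result.get("ungroundedDetected"):
        return original_text  # output is consistent with the grounding documents
    corrected = original_text
    for detail in result.get("ungroundedDetails", []):
        print("Flagged:", detail["text"])
        print("Why:", detail.get("reason", "no explanation returned"))
        fix = detail.get("correctionText")
        if fix:
            corrected = corrected.replace(detail["text"], fix)
    return corrected

print(apply_corrections("Employees receive 30 days of annual leave per year.", sample_response))
```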

A spokesperson for the company told The Verge that the Corrections feature uses both small language models (SLMs) and large language models (LLMs) to align outputs with grounding documents. “It is important to note that groundedness detection does not address ‘accuracy’ but rather helps align generative AI output with grounding documents,” the publication quoted the spokesperson as saying.
