Microsoft released its Phi-4 artificial intelligence (AI) model on Friday. The company’s latest Small Language Model (SLM) joins its open-source Phi family of foundational models. It arrives eight months after the release of Phi-3 and four months after the introduction of the Phi-3.5 series. The tech giant claims the SLM is better at solving complex logic-based problems in areas such as mathematics, and says it also excels at traditional language processing tasks.
Microsoft’s Phi-4 AI model will be available through Hugging Face
So far, each Phi series has launched alongside a Mini variant; this time, however, Microsoft has not released a Mini model with Phi-4. The company highlighted in a blog post that Phi-4 is currently available on Azure AI Foundry under the Microsoft Research License Agreement (MSRLA), and that it plans to make the model available on Hugging Face next week as well.
The company also shared benchmark scores from its internal testing. Based on these, the new AI model significantly improves on the previous generation. The tech giant claimed that Phi-4 outperformed a much larger model, Gemini Pro 1.5, on benchmarks of math competition problems. It also released detailed benchmark results in a technical paper published on the arXiv preprint server.
On security, Microsoft said Azure AI Foundry comes with a set of capabilities to help organisations measure, mitigate, and manage AI risks across the development lifecycle of traditional machine learning and generative AI applications. Additionally, enterprise users can use Azure AI Content Safety features such as Prompt Shields and Groundedness detection as content filters.
Developers can also add these security capabilities to their applications through a single application programming interface (API). The platform can monitor applications for quality and safety, adversarial prompt attacks, and data integrity, and can provide real-time alerts to developers. These capabilities will be available to Phi users who access the model through Azure.
Notably, small language models are often trained largely on synthetic data, which allows them to gain capability quickly and efficiently. However, the results of such training are not always consistent in real-world use cases.