MediaTek announces optimization of Microsoft’s Phi-3.5 AI models on Dimensity chipsets


MediaTek announced on Monday that it has now optimized several of its mobile platforms for Microsoft’s Phi-3.5 artificial intelligence (AI) models. The Phi-3.5 series of small language models (SLMs), comprising Phi-3.5 Mixture of Experts (MoE), Phi-3.5 Mini, and Phi-3.5 Vision, was released in August. The open-source AI models were made available on Hugging Face. Rather than typical conversational models, these are instruction-tuned models that expect users to provide specific instructions to get the desired output.
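For illustration, here is a minimal sketch of what such an instruction-style prompt looks like, assuming the Hugging Face transformers library and the microsoft/Phi-3.5-mini-instruct checkpoint (neither is part of MediaTek's announcement):

```python
# Minimal sketch: formatting an instruction-style prompt for Phi-3.5 Mini
# using the Hugging Face transformers library (an assumption for illustration;
# this is not MediaTek's tooling).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

# Instruction models expect an explicit instruction rather than free-form chat.
messages = [
    {"role": "user", "content": "Summarize the following text in two sentences: ..."},
]

# The chat template wraps the instruction in the prompt format the model expects.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```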

In a blog post, MediaTek announced that its Dimensity 9400, Dimensity 9300, and Dimensity 8300 chipsets are now optimized for the Phi-3.5 AI models. With this, these mobile platforms can efficiently run inference for on-device generative AI tasks using MediaTek’s neural processing units (NPUs).

Optimizing a chipset for a specific AI model involves tailoring the hardware design, architecture, and operation of the chipset to efficiently support the processing power, memory access patterns, and data flow of that particular model. After optimization, the AI model shows reduced latency and power consumption as well as increased throughput.

MediaTek highlighted that its processors are optimized not only for Microsoft’s Phi-3.5 MoE, but also for Phi-3.5 Mini, which offers multilingual support, and Phi-3.5 Vision, which comes with multi-frame image understanding and reasoning.

Specifically, Phi-3.5 MoE features 16 experts of 3.8 billion parameters each; however, when using two experts (the typical use case), only 6.6 billion parameters are active. Meanwhile, Phi-3.5 Vision has 4.2 billion parameters and includes an image encoder, and Phi-3.5 Mini has 3.8 billion parameters.

In terms of performance, Microsoft claimed that Phi-3.5 MoE outperformed both the Gemini 1.5 Flash and GPT-4o mini AI models on the SQuALITY benchmark, which tests readability and accuracy when summarizing blocks of text.

While developers can access Microsoft Phi-3.5 directly through Hugging Face or the Azure AI Model Catalog, MediaTek’s NeuroPilot SDK toolkit also provides access to these SLMs. The chip maker said the latter will enable developers to build customized on-device applications capable of running generative AI inference with these models on the mobile platforms mentioned above.
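As a rough sketch of the Hugging Face route mentioned above, the snippet below loads Phi-3.5 Mini and generates a response; it assumes the transformers library, PyTorch, and the microsoft/Phi-3.5-mini-instruct checkpoint, and does not reflect MediaTek’s NeuroPilot SDK, whose on-device APIs are not detailed in the announcement:

```python
# Sketch only: running Phi-3.5 Mini inference via Hugging Face transformers
# on a host machine (assumed setup; not MediaTek's NeuroPilot SDK workflow).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduced precision keeps the memory footprint small
    device_map="auto",           # place the model on GPU if available, else CPU
)

messages = [
    {"role": "user", "content": "List three benefits of running AI inference on-device."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```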


