Alibaba released a new artificial intelligence (AI) model on Thursday that it says will rival OpenAI's o1 series of models in reasoning capability. Launched in preview, the QwQ-32B large language model (LLM) is said to outperform o1-preview on several mathematical and logical reasoning benchmarks. The new AI model is available to download on Hugging Face, although it is not fully open-source. Recently, another Chinese AI firm, DeepSeek, released an open-source AI model, DeepSeek-R1, which it claimed would rival the ChatGPT maker's reasoning-focused foundation models.
Alibaba QwQ-32B AI Model
In a blog post, Alibaba explained its new logic-focused LLM in detail and highlighted its capabilities and limitations. QwQ-32B is currently available as a preview. As the name suggests, it is built on 32 billion parameters and has a context window of 32,000 tokens. The model has completed both pre-training and post-training phases.
Regarding its architecture, the Chinese tech giant revealed that the AI model is based on the transformer architecture. For positional encoding, QwQ-32B uses Rotary Position Embedding (RoPE), along with the Swish-Gated Linear Unit (SwiGLU) activation function, Root Mean Square Normalization (RMSNorm), and attention query-key-value (QKV) bias.
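The building blocks named above are standard components of modern transformer LLMs. As a rough illustration (not Alibaba's actual implementation, and with toy dimensions), here is a minimal NumPy sketch of RMSNorm, a SwiGLU gate, and RoPE:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    """Root Mean Square Normalization: scale features by their RMS,
    then apply a learned per-feature weight."""
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

def swiglu(x, w_gate, w_up):
    """SwiGLU feed-forward gate: SiLU(x @ W_gate) * (x @ W_up)."""
    gate = x @ w_gate
    silu = gate / (1.0 + np.exp(-gate))  # SiLU(z) = z * sigmoid(z)
    return silu * (x @ w_up)

def rope(x, positions, base=10000.0):
    """Rotary Position Embedding: rotate feature pairs by
    position-dependent angles so attention can encode relative position."""
    half = x.shape[-1] // 2
    freqs = 1.0 / (base ** (np.arange(half) / half))
    angles = np.outer(positions, freqs)            # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Note how RoPE leaves tokens at position 0 unrotated, while later positions are rotated progressively, which is what lets the attention layers infer relative distances between tokens.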
Like OpenAI's o1, the AI model displays its internal monologue while assessing a user's query and working towards the right response. This internal thought process allows QwQ-32B to test different theories and fact-check itself before presenting a final answer. Alibaba claims the LLM scored 90.6 percent on the MATH-500 benchmark and 50 percent on the American Invitational Mathematics Examination (AIME) benchmark in internal testing, outperforming OpenAI's reasoning-focused models.
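Applications built on such a model typically separate the visible reasoning from the final answer before showing it to users. A minimal sketch, assuming a hypothetical output format in which the model closes with a "Final Answer:" marker (real formats vary by model):

```python
def split_reasoning(response: str, marker: str = "Final Answer:"):
    """Split a reasoning model's response into its visible thought
    process and its final answer.

    `marker` is a hypothetical delimiter for illustration only;
    actual model output formats differ.
    """
    if marker in response:
        reasoning, _, answer = response.partition(marker)
        return reasoning.strip(), answer.strip()
    # No marker found: treat the whole response as the answer.
    return "", response.strip()
```

For example, `split_reasoning("2 + 2 is 4. Checking: 4 - 2 = 2. Final Answer: 4")` returns the reasoning text and `"4"` separately, letting an interface collapse or hide the monologue.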
Notably, better reasoning is not evidence that a model is more intelligent or capable. It reflects a different approach, known as test-time compute, that lets the model spend additional processing time on a task. As a result, the AI can provide more accurate responses and solve more complex problems. Several industry veterans have reported that newer LLMs are not improving at the same rate as their predecessors, suggesting that existing architectures are reaching a saturation point.
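A simple way to see how extra compute at inference time can buy accuracy is self-consistency sampling: ask the model the same question several times and take the majority answer. This toy sketch (not Alibaba's method, which interleaves reasoning into a single response) shows the aggregation step:

```python
from collections import Counter

def majority_vote(answers):
    """Self-consistency aggregation: the most common answer across
    repeated samples wins. More samples cost more compute but make
    the final answer more reliable."""
    return Counter(answers).most_common(1)[0][0]

# Pretend these are the final answers from 8 sampled completions
# of the same maths question; individual samples are noisy.
samples = [4, 4, 3, 4, 5, 4, 4, 3]
print(majority_vote(samples))  # → 4
```

Here each individual sample is wrong with some probability, but because the errors are scattered across different wrong answers, the majority converges on the correct one as the sample count (and therefore the compute spent) grows.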
Since QwQ-32B spends additional processing time on queries, it also has several limitations. Alibaba said the model may sometimes mix languages or switch between them unexpectedly, leading to language-mixing and code-switching issues. It can also fall into circular reasoning loops, and beyond mathematical and reasoning tasks, other areas still need improvement.
Notably, Alibaba has made the AI model available via a Hugging Face listing, and both individuals and enterprises can download it for personal, educational, and commercial purposes under the Apache 2.0 license. However, the company has not released the model's training data or full development details, meaning users cannot replicate the model or fully understand how it was built.