China's MiniMax Releases AI Reasoning Model
Chinese startup MiniMax has released a new AI reasoning model, claiming it outperforms DeepSeek's. The company released MiniMax-M1 earlier this week, saying the model ranks among the best open-source models in complex, productivity-oriented scenarios, surpassing domestic closed-source models and approaching the most advanced models from overseas. MiniMax said M1 supports an industry-leading input context window of 1 million tokens, eight times that of DeepSeek-R1; a larger input token limit (the context window) lets the model handle longer and more complex information. MiniMax also said that when M1 generates 80,000 tokens of deep reasoning, it needs only about 30% of the computing power of DeepSeek-R1. The company said the entire reinforcement learning phase took three weeks on 512 NVIDIA H800 chips, at a rental cost of $537,400.
—— The Wall Street Journal
via Windvane Reference Express - Telegram Channel