Today’s News at a Glance
🚗 Xiaomi’s “Core” Ambitions and Lingering “Wheel Marks”: Xuanjie O1 Chip Aims for Breakthrough, SU7 Safety Concerns Deepen
💻 OpenAI’s Coding Marvel Codex Evolves: AI Code Generation Enters a “Smart Agent” New Era
🧬 Meta’s Next-Gen “Behemoth” Large Model Delayed, Can AI-Powered Open-Source Tools for Molecular Science Turn the Tide?
🤖 xAI Grok’s “Out-of-Control” Remarks Stir Uproar Again, Official Apology Promises Transparent Rectification
01🚗 Xiaomi’s “Core” Ambitions and Lingering “Wheel Marks”: Xuanjie O1 Chip Aims for Breakthrough, SU7 Safety Concerns Deepen
Xiaomi is currently fighting on two fronts, and both are drawing significant attention. On one hand, founder Lei Jun proudly announced that Xiaomi’s self-developed mobile SoC, “Xuanjie O1,” will be released later this month. This is another major step in Xiaomi’s in-house chip development, intended to carry the company from the “Surge era” into the “Xuanjie era” and secure the initiative in core hardware technology. On the other hand, addressing recent fatal high-speed accidents involving the Xiaomi SU7 and reports of front bumpers on some vehicles deforming in high temperatures, Lei Jun admitted the company faces a trust crisis. He promised to raise vehicle safety to “industry leader” standards and to provide free repairs for affected owners.

Highlights
- Xuanjie Unsheathed: Xiaomi is set to release its self-developed mobile SoC chip, “Xuanjie O1,” signaling a strong return to the self-developed mobile core chip arena. Rumors suggest it may use an advanced 3nm process technology and is seen as a significant milestone in Xiaomi’s decade-long commitment to chip development.
- Lessons from the Past: Xiaomi released its first SoC, the Surge S1, in 2017, but the chip met a lukewarm market reception. The company then shifted to building experience with smaller chips, such as image signal processors (ISPs) and charging chips. The success or failure of “Xuanjie O1” is therefore crucial to Xiaomi’s chip strategy.
- Accident Reflection: Lei Jun’s internal speech addressed the serious SU7 accident, admitting it was a heavy blow to Xiaomi and emphasizing the need to bear greater social responsibility commensurate with its scale and influence.
- Safety Commitment: Facing safety doubts, Lei Jun promised that Xiaomi Auto’s safety goal is to become an industry leader, creating safety products that surpass existing industry standards.
Value Insights
- Industry Perspective: The Xiaomi SU7 safety crisis serves as a stark warning to all tech companies venturing into high-risk manufacturing (especially automotive): the market offers no “novice protection period.” Product safety and quality must be the highest priority from the outset, alongside robust crisis response and quality control systems. At the same time, Xiaomi’s return to mobile SoC self-development highlights the strategic importance of core technology autonomy for hardware ecosystem companies in the current international environment. Despite immense challenges, a successful “Xuanjie O1” would significantly elevate Xiaomi’s industry standing and supply chain resilience.
- User Perspective: The fatal accident involving the Xiaomi SU7 reminds consumers that when choosing emerging car brands, they need to continuously monitor and carefully evaluate their safety performance and promises. Whether Lei Jun’s promise of “safety exceeding industry standards” can be fulfilled will directly impact user trust. Regarding the front bumper deformation issue, users are concerned not only about free repairs but also about whether such design or manufacturing flaws can be fundamentally eliminated to ensure overall product quality and user experience.
Recommended Reading https://www.21jingji.com/article/20250516/herald/be44b2aad1e047543c9454d54631d314.html
02💻 OpenAI’s Coding Marvel Codex Evolves: AI Code Generation Enters a “Smart Agent” New Era
Artificial intelligence giant OpenAI has released “Codex,” its new cloud-based programming agent. Driven by the codex-1 model (derived from the o3 series and optimized for software engineering), Codex can work on complex tasks in parallel: writing code, answering questions about a codebase, fixing bugs, and even submitting code merge requests. It was trained with extensive reinforcement learning on massive real-world coding tasks, so the code it generates reads closer to a human developer’s style. Codex is currently available as a research preview, prioritized for ChatGPT Pro and enterprise users, and a lightweight version, codex-mini-latest, based on o4-mini, has also been launched with API pricing announced.
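Since codex-mini-latest is exposed through OpenAI’s public API, a minimal sketch of calling it might look like the following. This assumes the official openai Python SDK and its Responses API; the prompt and parameters here are purely illustrative.

```python
# Minimal sketch: asking codex-mini-latest to write a small function.
# Assumes the official `openai` Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the prompt is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="codex-mini-latest",
    instructions="You are a coding assistant. Return only code.",
    input="Write a Python function that checks whether a string is a palindrome.",
)

print(response.output_text)  # the generated code
```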

Highlights
- Programming Agent: OpenAI officially released its cloud-based programming agent, “Codex,” whose core highlight is its ability to work on multiple complex programming tasks in parallel, behaving more like a junior software engineer than a simple assistant.
- Core Driver: Codex is powered by the codex-1 model, which is based on OpenAI’s o3 series of large models specifically optimized for programming scenarios and trained using reinforcement learning in real coding environments.
- Powerful Functionality: Codex can generate code from natural language, answer codebase queries, automatically fix bugs, and assist with code review and submission, deeply participating in the entire software development process.
- Preview Live: Codex is currently a research preview, prioritized for ChatGPT Pro, enterprise, and team users, with Plus and education users gaining access soon.
Value Insights
- Industry Perspective: Codex’s positioning as a “programming Agent” suggests a shift in AI’s role in software development from “auxiliary tool” to “junior colleague.” It has the potential to free developers from tedious coding and debugging, letting them focus on architectural design and innovation. This could reshape development paradigms and team collaboration models, and place new demands on developers’ skill sets. Additionally, the pairing of a heavily optimized model (codex-1) with a lightweight, scenario-specific one (codex-mini-latest) shows AI programming tools evolving toward greater specialization. Future competition will likely center on scenario understanding and developer experience.
- User Perspective: Advanced AI programming tools like Codex, which can translate natural language instructions into code, significantly lower the barrier to software development. This means that even non-professional developers may be able to quickly turn ideas into real-world applications with the help of AI in the future. Increased software development efficiency could lead to more personalized, innovative software products and services, accelerating a new wave of technological democratization and innovation. However, concerns about software quality and potential security risks also need attention.
Recommended Reading https://www.cls.cn/detail/1668329
03🧬 Meta’s Next-Gen “Behemoth” Large Model Delayed, Can AI-Powered Open-Source Tools for Molecular Science Turn the Tide?
Meta’s highly anticipated next-generation flagship Llama large model, “Behemoth,” originally scheduled for release in the first half of this year, has now been confirmed to be delayed until late autumn. The delay is reportedly due to model training results not meeting internal expectations, with insignificant performance improvements. Furthermore, 11 of the 14 core founding authors of the Llama project have left, casting a shadow over Meta’s AI ambitions. However, while facing setbacks in general-purpose large models, Meta is making significant strides in AI for Science, notably launching the massive open molecular dataset OMol25 (containing over 100 million high-precision quantum chemical calculation results), the UMA universal atomic model based on graph neural networks, and a new molecular conformation generation method, Adjoint Sampling, aiming to accelerate drug discovery and new material development using AI.
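In practice, universal atomic models like UMA are consumed as drop-in “calculators” that stand in for expensive quantum chemical calculations inside existing simulation workflows. The sketch below shows that pattern using the widely used ASE library with its built-in EMT toy potential as a placeholder; the claim that a UMA-backed calculator slots in the same way is an assumption here, since the article does not cover its exact interface.

```python
# Sketch of the ASE "calculator" pattern that ML interatomic potentials
# such as UMA typically follow. EMT is ASE's cheap built-in toy potential,
# used here only as a placeholder for a real ML-based calculator.
# Assumes: pip install ase
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.optimize import BFGS

atoms = bulk("Cu", "fcc", a=3.6)   # a small copper crystal
atoms.calc = EMT()                 # a UMA-style calculator would go here

print(f"Initial energy: {atoms.get_potential_energy():.3f} eV")

# Relax the structure, with the potential supplying energies and forces.
BFGS(atoms, logfile=None).run(fmax=0.01)
print(f"Relaxed energy: {atoms.get_potential_energy():.3f} eV")
```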

Highlights
- Behemoth Struggles: Meta’s highly anticipated next-generation Llama model, “Behemoth,” originally planned for release as early as April, has been delayed again, now until this autumn.
- Performance Doubts: Internal sources indicate engineers are disappointed with Behemoth’s current training results, believing its performance improvement in real-world scenarios is insignificant and that it may not meaningfully surpass earlier Llama models.
- Internal and External Pressures: Model development is struggling, the company faces scrutiny over massive capital expenditures, and a significant loss of the Llama project’s core founding team has impacted R&D capabilities.
- Scientific Breakthroughs: Meta released the OMol25 massive open molecular dataset, containing over 100 million high-precision quantum chemical calculation results, generated at immense computational cost.
Value Insights
- Industry Perspective: The delay of “Behemoth” might suggest that large language model development is hitting a “performance plateau,” where the marginal returns from simply adding parameters and data are diminishing. Meta’s breakthroughs in AI for Science, such as OMol25 and UMA (reportedly a ten-thousand-fold increase in calculation speed), demonstrate the immense potential of specialized AI models in solving concrete industry pain points. AI companies may therefore shift more attention to specialized applications that deliver clear business value and scientific breakthroughs. Open-source scientific tools can build ecosystems and enhance influence, but the loss of core talent is a significant blow to Meta, a warning to companies to balance the benefits of open source with maintaining internal innovation capability.
- User Perspective: The setbacks in developing next-generation models like “Behemoth” show that achieving the omnipotent general artificial intelligence (AGI) seen in sci-fi movies is still a long and challenging road. However, AI is already here as a “scientific assistant.” Tools like OMol25 and UMA released by Meta will be directly applied in fields vital to public well-being, such as new drug discovery and new material exploration. These are expected to accelerate scientific discovery, bringing more practical and profound value than general chatbots, indirectly benefiting the public.
Recommended Reading https://www.techrepublic.com/article/news-meta-llama-4-behemoth-delay/
04🤖 xAI Grok’s “Out-of-Control” Remarks Stir Uproar Again, Official Apology Promises Transparent Rectification
Grok, the chatbot from Elon Musk’s xAI, malfunctioned again on May 14 on the social platform X. Multiple users reported that when asked questions entirely unrelated to South African politics, Grok repeatedly produced highly sensitive, controversial remarks about “white genocide in South Africa.” xAI’s official statement attributed the issue to “unauthorized modifications” to Grok’s core system prompt, calling them a severe violation of company policy. This is the second time in a short period that Grok’s output has gone abnormal because of internal tampering (in February, it emerged that employees had modified instructions to suppress negative information about Musk and Trump). xAI has pledged a thorough investigation and, in an unusual step, announced that it will publicly release Grok’s system prompts and strengthen internal review.
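For readers unfamiliar with the term, a system prompt is a hidden instruction prepended to every conversation that constrains a chatbot’s behavior, which is why tampering with it can redirect outputs wholesale. Below is a minimal illustration of the concept; the API shape follows the OpenAI Python SDK purely as a generic example, not xAI’s internal setup, which is not public beyond what the company has published.

```python
# Illustration of how a system prompt steers a chat model's behavior.
# The API shape follows the OpenAI Python SDK as a generic example;
# this is not xAI's internal setup, only a sketch of the concept.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer only the question asked; "
    "do not introduce unrelated political topics."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model; the choice is illustrative
    messages=[
        # The system message is invisible to end users but constrains
        # every reply. An unauthorized edit here changes all outputs.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Recommend a good pasta recipe."},
    ],
)
print(response.choices[0].message.content)
```

Publishing this layer on GitHub, as xAI has promised, lets outsiders see exactly what changed whenever the bot’s behavior shifts.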
Highlights
- Out-of-Control Remarks: Grok experienced an anomaly on May 14, repeatedly outputting sensitive political remarks about “white genocide in South Africa” in response to unrelated questions.
- Official Stance: xAI attributed the incident to “unauthorized modifications” to Grok’s system prompts, stating it violated internal policies and core values.
- Not the First Time: In February, it emerged that employees had modified Grok’s system instructions to suppress negative information about specific individuals; this is the second such incident in a short span.
- Rectification Measures: xAI has promised a thorough investigation and will take measures including publicly releasing Grok’s system prompts on GitHub, strengthening internal review, and establishing a 24/7 content monitoring team.
Value Insights
- Industry Perspective: The Grok incident exposes the fragility of AI models (especially their system prompts) in terms of content controllability and security. Even companies that claim to “seek truth” face internal operational risks. This emphasizes that AI safety is not just an algorithmic problem but a comprehensive governance challenge involving personnel management, process standardization, and access control. Enterprises need to establish strict internal control and audit mechanisms. xAI’s commitment to publicly release system prompts is an attempt to increase transparency, but its effectiveness depends on more robust institutional safeguards. Simultaneously, how AI models can be truly neutral and objective, free from the influence of founders or operators, is a deep ethical dilemma facing the industry.
- User Perspective: Grok’s “out-of-control” incident warns us that information obtained from AI tools is not always objective or neutral and can be intentionally or unintentionally biased. Even AI claiming to be objective may output biased content due to internal operations or vulnerabilities. Users must maintain critical thinking when using AI chatbots or content generation tools, verify information from multiple sources, and avoid blindly following AI-generated content. Improving the ability to discern AI-generated content is an essential skill for citizens in the digital age.
Recommended Reading https://apnews.com/article/grok-ai-south-africa-64ce5f240061ca0b88d5af4c424e1f3b
Today’s Summary
Today’s AI industry dynamics resemble a complex tapestry interwoven with the passion of innovation and the challenges of reality, featuring both strong technological breakthroughs and reflections on growing pains.
On one hand, the pace of innovation remains steady: Xiaomi makes a high-profile entry with its new self-developed SoC chip, “Xuanjie O1,” vowing to take the initiative in core technology; OpenAI, with its powerful programming agent Codex, is leading software development towards deeper intelligent transformation; while Meta faces setbacks with its general-purpose large model, it is forging a new path by open-sourcing significant tools in AI for molecular science, demonstrating AI’s strong momentum towards vertical industry deep dives.
On the other hand, the challenges and risks accompanying AI development are becoming increasingly prominent: serious safety concerns surrounding the Xiaomi SU7 question the responsibility and capability limits of tech giants venturing into manufacturing; the stalled development of Meta’s flagship large model “Behemoth” and the loss of core talent reveal the immense uncertainty of cutting-edge AI exploration; and the repeated “out-of-control” incidents involving xAI’s Grok once again bring core issues like AI content governance, internal control, and value alignment into the spotlight.