
Under 10% of an earthquake’s energy makes the ground shake

Earthquakes are driven by energy stored up in rocks over millennia—energy that, once released, we perceive mainly in the form of the ground’s shaking. But a quake also generates a flash of heat, and it fractures and damages underground rocks. And exactly how much energy goes into each of these three processes is exceedingly difficult to

Under 10% of an earthquake’s energy makes the ground shake Read More »

Secrets of the sleep-deprived brain

Nearly everyone has experienced it—after a night of poor sleep, your brain might seem foggy, and your mind drifts off when you should be paying attention. A new MIT study reveals what happens biologically as these momentary lapses occur: Your brain is performing essential maintenance that it usually takes care of while you sleep. During

Secrets of the sleep-deprived brain Read More »

AI Governance in practice: How synthetic data prepares you for what’s next

This blog was co-written with Sundaresh Sankaran. The Artificial Intelligence (AI) era is here. To prevent harm, ensure proper governance, and secure data, we need to trust our AI output. We must demonstrate that it operates fairly and responsibly with a high level of efficiency. As builders of […]

AI Governance in practice: How synthetic data prepares you for what’s next Read More »

DeepSeek mHC: Stabilizing Large Language Model Training

Large AI models are scaling rapidly, with bigger architectures and longer training runs becoming the norm. As models grow, however, a fundamental training stability issue has remained unresolved. DeepSeek mHC directly addresses this problem by rethinking how residual connections behave at scale. This article explains DeepSeek mHC (Manifold-Constrained Hyper-Connections) and shows how it improves large language model training stability

DeepSeek mHC: Stabilizing Large Language Model Training Read More »

Korea’s ‘BlockCube’ Team Wins at the SAS Hackathon 2025!

The global SAS Hackathon—an arena for innovation that goes beyond mere competition to tackle social problems and improve people’s lives—has wrapped up an exciting month-long journey that ran from mid-September through October. Now in its fifth year, SAS Hackathon 2025 drew 2,058 registrants from 708 companies, universities, and SAS partners across 66 countries, proof of the intense interest it attracts. Of these […]

Korea’s ‘BlockCube’ Team Wins at the SAS Hackathon 2025! Read More »

Liquid AI’s LFM2-2.6B-Exp Uses Pure Reinforcement Learning (RL) And Dynamic Hybrid Reasoning To Tighten Small Model Behavior

Liquid AI has introduced LFM2-2.6B-Exp, an experimental checkpoint of its LFM2-2.6B language model that is trained with pure reinforcement learning on top of the existing LFM2 stack. The goal is simple: improve instruction following, knowledge tasks, and math for a small, 3B-class model that still targets on-device and edge deployment. Where LFM2-2.6B-Exp Fits

Liquid AI’s LFM2-2.6B-Exp Uses Pure Reinforcement Learning (RL) And Dynamic Hybrid Reasoning To Tighten Small Model Behavior Read More »


MiniMax Releases M2.1: An Enhanced M2 Version with Features like Multi-Coding Language Support, API Integration, and Improved Tools for Structured Coding

Just months after releasing M2—a fast, low-cost model designed for agents and code—MiniMax has introduced an enhanced version: MiniMax M2.1. M2 already stood out for its efficiency, running at roughly 8% of the cost of Claude Sonnet while delivering significantly higher speed. More importantly, it introduced a different computational and reasoning pattern, particularly in how

MiniMax Releases M2.1: An Enhanced M2 Version with Features like Multi-Coding Language Support, API Integration, and Improved Tools for Structured Coding Read More »

There is yet another AI productivity gap

When I first started as a data scientist, there was a gap. I met with dozens of organizations who would invest time and resources into building accurate and tuned models and then ask, “What now?” They had a fantastic model in hand but couldn’t get it into a place and […]

There is yet another AI productivity gap Read More »