I’m not sure I agree, but it’s an intriguing take. I’ve been saying that the main question for the near- and medium-term future of AI is where we are on the s-curve: are we still in the steep part? Reaching the plateau will change a lot of things, including, I think, reducing the importance and influence of frontier models.
Many people, myself included, didn’t try to build a product around a language model because, in the time it would take to assemble a business-specific dataset, a larger generalist model would be released that would be as good at your business tasks as your smaller specialized model.
The disappointing releases of both GPT-4.5 and Llama 4 have shown that if you don’t train a model to reason with reinforcement learning, increasing its size no longer provides benefits.
Reinforcement learning is limited to domains where a reward can be assigned to the generated output. Until recently, those domains were math, logic, and code. More recently, they have expanded to factual question answering, where, to find an answer, the model must learn to run several searches. This is likely how the “deep search” models have been trained.
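As a rough illustration of what “assigning a reward to the generated output” looks like in these verifiable domains, here is a minimal sketch in Python. The function names, the `Answer:` extraction convention, and the test-running setup are my own assumptions for illustration, not any particular lab’s training code.

```python
# Hypothetical sketch of "verifiable reward" functions for RL on model outputs.
# The conventions below (an "Answer: ..." line, a test script run as-is) are
# illustrative assumptions, not a real training pipeline.
import re
import subprocess
import tempfile


def math_reward(model_output: str, reference_answer: str) -> float:
    """Reward 1.0 if the model's final 'Answer:' line matches the reference."""
    match = re.search(r"Answer:\s*(.+)", model_output)
    if not match:
        return 0.0
    return 1.0 if match.group(1).strip() == reference_answer.strip() else 0.0


def code_reward(generated_code: str, test_code: str) -> float:
    """Reward 1.0 if the generated code passes the provided tests."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=30)
    except subprocess.TimeoutExpired:
        return 0.0
    return 1.0 if result.returncode == 0 else 0.0


def search_qa_reward(model_answer: str, accepted_answers: list[str]) -> float:
    """Reward 1.0 if a factual answer (found via searches) matches an accepted form."""
    normalized = model_answer.strip().lower()
    return 1.0 if any(normalized == a.strip().lower() for a in accepted_answers) else 0.0


if __name__ == "__main__":
    print(math_reward("Some reasoning...\nAnswer: 42", "42"))     # 1.0
    print(search_qa_reward("Paris", ["paris", "Paris, France"]))  # 1.0
```

The common thread is that the reward is computed by a simple check rather than by human judgment, which is exactly why this recipe only applies where such a check exists.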
If your business idea isn’t in these domains, now is the time to start building your business-specific dataset. The potential increase in generalist models’ skills will no longer be a threat.
Via Simon Willison.