Additionally, ICE sales have started recovering over the past few years now that EV subsidies have begun to be phased out in most markets [0]. Even in China, most consumers, "despite buying more EVs, are less interested in how their cars are powered, and more in their digital lifestyle integration" [0]. I like EVs, but I think that in most markets they're at the same point today that hybrid cars were in the 2010s - proven, but still a difficult financial sell in the short term due to high upfront costs for consumers.

Edit: can't reply.

> Which seems odd in an article claiming the global tide has turned against EVs

Because China is not the world. Most other markets have seen either a slowdown or a reversal in EV sales, especially following the reduction of EV purchase subsidies. It also highlights that a large portion of customers are indifferent to ecological sentiment, and that EVs will only outsell ICE where their upfront cost or net-new features (in China's case, EVs tended to have better features than domestically sold ICE cars) beat those of ICE vehicles. Even Chinese automotive players (primarily SOEs that couldn't compete with the private-sector BYD) have been taking advantage of this market shift, becoming major ICE car exporters [1].

[0] - https://www.reuters.com/business/energy/combustion-engine-cars-regain-popularity-worldwide-ey-says-2025-12-09/

[1] - https://www.reuters.com/investigations/china-floods-world-with-gasoline-cars-it-cant-sell-home-2025-12-02/
LLMs have a strong bias towards generating code, because writing code is the default behavior from pre-training. Removing code, renaming files, condensing, and other edits are mostly post-training stuff: supervised-learning behavior. You have armies of developers across the world making 17 to 35 dollars an hour solving tasks step by step, which are then used to generate prompt/response pairs of desired behavior for a lot of common development situations, adding desired output for things like tool calling, which is needed for things like deleting code.

A typical human post-training dataset-generation task would involve a scenario like: given this Dockerfile for a Python application, running pytest fails with an exception "foo not found". The human will notice that package foo is not installed, change the requirements.txt file and write this down, then try pip install and notice that the foo package requires a certain native library to be installed. The final output of this will be a response with the appropriate tool calls in a structured format (see the sketch at the end of this comment).

Given that the amount of unsupervised learning is way bigger than the amount spent on fine-tuning for most models, it is no surprise that, in any ambiguous situation, the model defaults to what it knows best. More post-training will usually improve this, but the quality of the human-generated dataset will probably be the upper bound on output quality, not to mention the risk of overfitting if the foundation-model labs embrace SFT too enthusiastically.
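To make the Dockerfile scenario above concrete, here's a minimal sketch of what one such prompt/response pair might look like. The tool names (edit_file, run_command), the schema, and the package/library names are all hypothetical, loosely modeled on common function-calling formats; each lab uses its own proprietary schema:

    # One hypothetical SFT example: the desired response is a sequence
    # of structured tool calls rather than free-form code generation.
    # Tool names, schema, and "foo"/"libfoo-dev" are placeholders.
    example = {
        "prompt": "Given this Dockerfile for a Python application, "
                  "running pytest fails with: "
                  "ModuleNotFoundError: No module named 'foo'",
        "response": {
            "tool_calls": [
                # Step 1: record the missing dependency.
                {"name": "edit_file",
                 "arguments": {"path": "requirements.txt",
                               "append": "foo"}},
                # Step 2: install it; this is where the annotator
                # discovers that foo also needs a native library.
                {"name": "run_command",
                 "arguments": {"cmd": "pip install -r requirements.txt"}},
                # Step 3: add the native library to the image.
                {"name": "edit_file",
                 "arguments": {"path": "Dockerfile",
                               "insert_after": "FROM python:3.12-slim",
                               "text": "RUN apt-get update && "
                                       "apt-get install -y libfoo-dev"}},
            ]
        },
    }

Thousands of pairs like this are what teach the model that editing and deleting are valid moves at all, rather than just writing more code.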