Levels of Agentic Engineering (bassimeledath.com)
> Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments.

I, on the contrary, believe that the hunt for better data is an attempt to climb a local hill and get stuck there without reaching the global maximum. Interactive environments are good, they can help, but they are just one of the possible ways to learn about causality. Are they the best way? I don't think so; they are the easier way: just throw money at the problem and eventually you'll get something that you'll claim is the goal you chased all this time. And yes, it will have something in it you'll be able to call "causal inference" in your marketing.

But current models are notoriously difficult to teach. They eat an enormous amount of training data; a human needs much less. They eat an enormous amount of energy to train; a human needs much less. That means the very approach is deficient. It should be possible to do the same with a tiny fraction of the data and money.

> The fact is we are also not smart from the brain alone, we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model.

Well, I learned English almost all the way to B2 by reading books. I was too lazy to use a dictionary most of the time, so it was not interactive: I didn't even interact with a dictionary, I was just reading books. How many books did I read to get to B2? ~10 or so. Well, I also read a lot of English on the Internet and watched some movies, but let's multiply those 10 books by 10. Strictly speaking it was not B2: I was almost completely unable to produce English, and my pronunciation was not just bad, it was worse. Even now I sometimes stumble on words I cannot pronounce: I know the word and have mentally constructed a sentence with it, but I cannot say it, because I don't know how. So to pass B2 I spent some time practicing speaking, listening, and writing, and learning some silly topic like "travel" so I'd have the vocabulary to talk about it at length.

How many books does an LLM need to consume to get to B2 in a language unknown to it? How many audio recordings does it need to consume? My whole life wouldn't be enough to read and/or listen that much. If there were a human who needed to consume as much information as an LLM does to learn, they would be the stupidest person in all the history of humanity.
Open Weights Isn't Open Training (workshoplabs.ai)
A related dirty secret that's going to become clear from all this is that a very large proportion of code in the wild (yes, even in 2026; maybe not in FAANG and friends, IDK, but across all code that is written for pay in the entire economy) has limited or no automated test coverage, and is often written against only a limited recorded spec, one that's usually fleshed out only to the (very partial) degree needed as a given feature is being worked on.

What do the relatively hands-off "it can do whole features at a time" coding systems need to function without taking up a shitload of time in reviews? Great automated test coverage and extensive specs. I think we're going to find there's very little time-savings to be had for most real-world software projects from heavy application of LLMs, because the time will just go into tests that wouldn't otherwise have been written, and much more detailed specs that otherwise never would have been generated.

I guess the bright-side take on this is that we may end up with better-tested and better-specified software? Though so very much of the industry is used to skipping those parts (especially the less-capable, so far as software goes, orgs that really need the help, and the relative amateurs and non-software-professionals that some hope will become extremely productive with these tools) that I'm not sure we'll manage to drag processes & practices to where they need to be to get the most out of LLM coding tools anyway. Especially if the benefit to companies is "you will have better tests for... about the same amount of software as you'd have written without LLMs". We may end up stuck at "it's very-aggressive autocomplete" as far as LLMs' useful role goes, for most projects, indefinitely.
On the plus side for "AI" companies, low-code solutions are still big business even though they usually fail to deliver the benefits the buyer hopes for, so there's likely a good deal of money to be made selling companies LLM solutions that end up not really being all that great.
My two cents: I've been coding practically my entire life, but a few years back I sustained a pretty significant and lasting injury to my wrists. As such, I have very little tolerance for typing; it's been quite a problem and made full-time work impossible. With the advent of LLMs, AI autocomplete, and agent-based development workflows, my ability to deliver reliable, high-quality code is restored and (arguably) better. Personally, I love the "hallucinations", as they help me fine-tune my prompts and base instructions and reinforce intentionality; e.g. is that >really< the right solution/suggestion to accept? It's like pair programming without a battle of egos.

When analyzing problems, I think you have to look at both upsides and downsides. Folks have done well to debate the many, many downsides of AI, and this tends to dominate the conversation. Probably that's a good thing. But on the flip side, I personally advocate hard for AI from the point of view of accessibility. I know (more or less) exactly what output I'm aiming for and control that obsessively, but it's AI and my voice at the helm instead of my fingertips.

I also think it's incorrect to look at it from the perspective of "does the good outweigh the bad?". Relevant, yes, but utilitarian arguments often lead to counter-intuitive results and end up amplifying the problems they seek to solve. I'd MUCH rather see a holistic embrace and integration of these tools into our ecosystems. Telling people "no AI!" (even with a very well-defined notion of what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.
I don't know, it's a pretty big leap for me to consider AI hard to distinguish from human contributions. AI is predictive at the token level. I think the usefulness and power of this has been nothing short of astonishing, but token prediction is fundamentally limiting. The difference between human _driven_ and AI-generated code usually shows in the design. Overly verbose and leaky abstractions, too many small abstractions that don't provide clear value, broad sweeping refactors when smaller, more surgical changes would have met the immediate goals, etc. are the hallmarks of AI-generated code in my experience. I don't think those will go away until there is another generational leap beyond mere token prediction.

That said, I used human "driven" instead of human "written" somewhat intentionally. I think AI, even in its current state, will become a revolutionary productivity-boosting developer aid (it already is to some degree). Not dissimilar to other development tools like debuggers and linters, but with much broader usefulness and impact. If a human uses AI in creating a PR, is that something to worry about? If a contribution can pass review and related process checks, does it matter how much or how little AI was used in its creation? Personally, my answer is no. But there is a vast difference between a human using AI and an AI-generated contribution being able to pass as human. I think there will be increasing degrees of the former, but the latter is improbable to impossible without another generational leap in AI research/technology (at least IMO).

---

As a side note, overuse of AI to generate code _is_ a problem I am currently wrangling with. Contributors who over-rely on vibecoding are creating material overhead in code review and maintenance in my current role. It's making maintenance, which was already a long-tail cost generally, an acute pain.
>It's kind of funny because Telegram is used by Russian military to coordinate a lot of things, so they complain a lot about the block.

If that's true, then it was really stupid of them to let things get to that point. Look at the US: they had no tolerance for a major social media app (TikTok) being outside their own control, and they weren't even in a major war at the time. It seems obvious that if you ARE in a major war, you wouldn't want your main social media and messaging app to be under the control of somebody (Pavel Durov) who was recently arrested by a member (France) of the military alliance you're fighting against (NATO), when it is unclear what deal he may have made with that government to be released from prison. It seems obvious to suspect that the price of his freedom may have been a backdoor that allows the opposing military to read all the messages your own people are sending.

Russia's real failure is that, unlike the US, it has been systematically unable to keep its own top tech talent supportive of its government. The top US tech companies have been only too eager to do almost anything their government asks of them, with only some rare and tepid pushback (such as that by Anthropic recently), which seems to get severely punished when it does happen. So there has been no need for the US government to go to the lengths that Russia is going to now, simply because it was able to co-opt its top talent into working for and with the state (with some rare exceptions like Snowden, and I'd say the "damage" from that has been pretty successfully contained). The Chinese government may have had some issues with that as well, considering what happened with Jack Ma (though I don't know much about it).
14.4 is a maintenance release. If you're installing FreeBSD today, use 15.0.

Why FreeBSD?

- Well-manicured OS, excellent docs. More performant than OpenBSD in every way, and approaches Linux performance in some areas (e.g. networking).
- FreeBSD tends to have fewer features in almost all areas compared to Linux, which makes it more approachable and harder to mess up.
- Though it has fewer features, it still has a lot of them; many big companies (Netflix most famously) still use it today for critical functions.
- The FreeBSD kernel and userland are developed together, so it has that hard-to-define "cohesive" feel.
- Has fewer layers of abstraction than Linux and gets the job done. Because there are fewer layers, it's easier to understand what is going on and potentially easier to fix.
- FreeBSD is great if you want to learn pf, zfs, ...
- Worth your while if you are bored of the Linux monoculture and just want to try something a bit different (but not tooo different).
- Changes slowly, so it's good for a server that you want to just leave running without too much maintenance.
- Will increase your Linux skills, because diversity always helps the human brain.
- Very simple daemon configuration via /etc/rc.conf.
- FreeBSD's `bectl`-controlled zfs boot environments are just so life-changing and amazing. (This is possible via snapper on Linux + btrfs, but it needs a complex installation and is not as integrated.)
- FreeBSD will accept (smallish) PRs via GitHub if you find a minor bug. Otherwise it uses the decent Phabricator interface at https://reviews.freebsd.org . This is much better IMHO than the mailing-list workflow of Linux. The barriers to contribution are lower than Linux's!
- FreeBSD still has that warm, fuzzy, small-"community" feel, which I like.
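To make the rc.conf and `bectl` points concrete, here's a minimal sketch of what both look like on FreeBSD. The specific services and boot-environment name are illustrative, not from the comment above:

```shell
# /etc/rc.conf -- daemons are enabled with plain variable assignments
sshd_enable="YES"
ntpd_enable="YES"
nginx_enable="YES"   # assumes nginx was installed via pkg/ports

# Start/inspect a daemon with service(8):
#   service nginx start
#   service nginx status

# ZFS boot environments with bectl(8): snapshot the running system
# before a risky upgrade, and boot back into it if things go wrong.
#   bectl create pre-upgrade       # new BE cloned from the active one
#   bectl list                     # show BEs and active/next-boot flags
#   bectl activate pre-upgrade     # use that BE on the next reboot
```

The appeal the commenter describes is that all of this ships integrated with the base system: no unit files, no extra snapshot tooling to install.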