RSA and Python (xnacly.me)
I'm seeing people who are technically savvy defend mediocre code and consumption-based output (think technical briefs and reports). When the flaws in the output are highlighted, in many cases it's brushed off as "good enough" or "nobody will care / notice".

I think LLMs, and more aptly SLMs, have use cases. I enjoy using these tools to make quick work of simplifying and iterating faster on relatively frequent but time-consuming tasks. But I'm always correcting and checking, and very rarely, other than with simple and focused scripts, does any LLM truly get it right every time. Has it gotten better? For sure. Will it keep getting better? Probably. But right now we seem to be topping the "peak of inflated expectations". And LLMs aren't getting much more efficient with respect to the frontier providers. In fact, if you listen to Altman, it seems as though the only reason he would be asking for so much capital and so many finite resources is that he knows that if he controls those tangible things, he will lock out competition. But I'm hopeful that it spurs real innovation into SLMs that are truly useful, dependable, and can be relied on in more of the traditional, deterministic sense of software operations.

AI for art is dead. It has some mediocre use cases, but true art will not be generated by LLMs in our time. It's ultimately an amalgamation of existing art. I know the argument over what is or isn't novel keeps being rehashed, but we're not seeing truly new styles of art out of Nano Banana and the like. Coding is the same, only there we're seeing a resurgence of obviously flawed software being pushed into production on a weekly basis. And as for conversational AI... well, that reeks of the worst version of social media we could ever have dreamt of.
Nobody should trust any provider with personal conversations, and we'll keep seeing these models show how truly dystopian they can be over the coming years, as leaks and breaches expose how these conversations are being bought and sold to the highest bidders to extract more money and control from their users. They all have a common thread: deep-rooted flaws that cannot be contained within the traditional fences of software. And their guardrails are just that: small barriers that can easily be broken, intentionally or unintentionally.
> And the reality is that confirmation is part of life.

Sycophantic agreement certainly is, as is lying, manipulation, abuse, gaslighting. Those aren't the good parts of life. Those aren't the parts I want the machine to do to people on a mass scale.

> You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

Sorry, what? The important part is validating the way someone feels, not "confirming perspectives". A feeling or a perspective can be valid ("I see where you're coming from, and it's entirely reasonable to feel that way"), even when the conclusion is incorrect ("however, here are the facts: ___. You might think ___ because ____, and that's reasonable. Still, this is how it is.")

You're doing nobody a favor by affirming that they are correct in believing things that are verifiably, factually false. There's a word for that: lying. When you're deliberately lying to keep someone in a relationship, that's manipulation. When you're lying to affirm someone's false views, distorting their perception of reality - particularly when they have doubts, and you are affirming a falsehood with intent to control their behavior (e.g. make them stay in a relationship when they'd otherwise leave) - that, my friend, is gaslighting.

This is exactly what the machine was doing to the colleague who asked "which of us is right, me or the colleague that disagrees with me?". It doesn't provide any useful information; it reaffirms a falsehood; it distorts someone's reality and destroys trust in others; it destroys relationships; and it encourages addiction, because it maximizes "engagement" - i.e., it prevents someone from leaving. That's abuse. That, too, is a part of life.

> I agree with your conclusion, but that's by design

All I did was name the phenomena we're talking about (lying, gaslighting, manipulation, abuse). Anyone can verify the correctness of the labeling in this context.
I agree with your assertion, as well as that of the parent comment. Putting them together, we have this: LLM chatbots today are abusive by design. This shit needs to be regulated, that's all. The FDA and CPSC should get involved.
OK, I'll bite the artillery shell: I don't mean to dismiss you or what you are saying; in fact I strongly relate - wouldn't it be nice to be able to hash things out with people and mutually benefit from both the shared and the diverging perspectives implied in such interaction? Isn't that the most natural thing in the world?

Unfortunately, these days this sounds halfway between a very privileged perspective and pie in the sky. When was the last time a person took responsibility for the bad outcome you got as a direct consequence of following their advice? And, relatedly, where the hell do you even find humans who believe in discursive truth-seeking in 2026 CE? Because for the last 15 years or so I've only ever run into (a) the kind of people who will keep arguing regardless of whether what they're saying has been proven wrong; and (b) their complementaries, those who will never think about what you are saying, lest they commit to saying anything definite themselves, which might hypothetically be proven wrong.

Thing is, both types of people have plenty to lose; the magic wordball doesn't. (The previous sentence is my answer to the question you posited; why I feel the present parenthesized disclaimer to be necessary is a whole next can of worms...)

Signs of the existence of other kinds of people, perhaps such that have nothing to prove, are not unheard of. But those people reside in some other layer of the social superstructure, where facts matter much less than adherence to "humane", "rational" not-even-dogmas (I'd rather liken it to complex conditioning). And those folks (because reasons) are in a position of power over your well-being - and (because unfathomables) it's a definite faux pas to insist in their presence that there are such things as facts, which relate by the principles of verbal reasoning.
The best you could get out of them is "you do you", "if you know you know", that sort of bubble-bobble - and don't you dare get even mildly miffed at such treatment of your natural desire to keep other humans in the loop. AI is a symptom.