"Cognitive inbreeding" is an interesting (though maybe not entirely accurate) term for something I dislike a lot about LLMs. It really is a thing: you're recycling the same biases over and over, and it can be very difficult to tell unless you review and distill the contents of your discourse with LLMs. Especially true if you're only using one.

I do think there's a solution to this, kind of, which dramatically reduces the probability of it happening and allows for broader inductive biases: ask questions with narrower scopes, and make sure you're the one driving the conversation.

It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases are more likely to take over. When people ask LLMs to build the world, they will do it in extremely biased ways, which makes sense. When you ask specifics about narrow topics, this is still a problem, but a greatly mitigated one. I suppose what's happening is an inversion of cognitive load: the human takes on more of it and does the bias selection, so the LLM is less free to do so.

This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I generally agree that these are cognitive muscles that need exercising, and letting an LLM do it all for you is potentially harmful. But I don't think we're trapped with the outcome. We do have agency, and with care this is a technology that can be quite beneficial.
How does Wake-On-LAN work (blog.xaner.dev)
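For readers who skip the linked post: the core of Wake-on-LAN is the "magic packet", 6 bytes of 0xFF followed by the target machine's MAC address repeated 16 times, usually sent as a UDP broadcast to port 9 or 7. A minimal Python sketch (the broadcast address and port here are common defaults, not something from the linked post):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    # Magic packet layout: 6 bytes of 0xFF, then the target MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # WoL packets are typically sent as a UDP broadcast; port 9 (discard)
    # and port 7 (echo) are the conventional choices.
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
```

The NIC of the sleeping machine scans incoming frames for this 102-byte pattern and powers the host on when it sees its own MAC, which is why no higher-level handshake is needed.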
Maybe, just maybe, the previous generation (I include myself here) has lost the plot raising and educating children, and is breeding absolutely disrespectful, egotistical, attention-seeking assholes as the younger generations. Teachers in classrooms have been sounding the alarm globally for decades about the loss of discipline, the loss of basic manners, the loss of respect for authority, the loss of empathy, the attention issues (both attention seeking and attention impairment), the increase in entitlement, the inability to cope with negatives, the increase in illiteracy, etc. No one has listened. Then things go wrong and we blame the teachers.

It is ok for kids to be mischievous. It is ok for young adults to take the piss out of each other in a healthy way. But this looks to me like an education problem: the kind of values-based education where parents used to teach kids to be compassionate to each other, to respect each other, and to f**ng understand that if a moment of fun with some friends could ruin someone else's life, maybe it is not worth being the cool dude for 5 minutes. We have lost that. Most kids do not have these values these days.

But what did we expect? We have been systematically ditching those values. We now have an older generation that is selfish, egotistical, careless and dismissive of anything that is not them and their belief framework, whatever that is. We have polarised to the point of hatred. And this is showing in society. It is showing in our kids. I don't think tech is to blame. I don't think kids are to blame. This might be our fault.
There are a few threads here where folks would benefit from reading Judge Rakoff's memo. There's a copy here, or it's on PACER/RECAP: https://www.akingump.com/a/web/ssTGsd5NHbtZ1onzXQMTye/1_25-cr-503-27-memorandum.pdf

1) His reasoning shows how intelligent, well-read humans view AI, which is quite different from the attitudes seen on HN. Rakoff calls the chats "Claude searches", which, while it may sound ridiculous (what is this, Perplexity?), is just how some people must view this crazy new thing: another Google. You type stuff in and get results out.

2) Rakoff goes through the three elements of attorney-client privilege in US law (communications between attorney and client, intended to be and kept confidential, and for the purpose of legal advice). It's obvious the Claude chats fail two of them, and he goes over why.

3) A lot of people bring up the question: if you use Google Docs to transcribe privileged information, is that the same, since you're sending your data to Google? The model AI companies adopt when they cater to legal clients is akin to a locked filing cabinet in a storage facility: sure, you're sending them the data, but with a ZDR (zero data retention) agreement they aren't looking at it or training on it.

Another CRITICAL point not mentioned in the article is Warner v Gilbarco; Gilbarco directly contradicts Heppner and indicates that the work-product doctrine covers AI-generated chats! https://perkinscoie.com/insights/update/heppner-and-gilbarco-courts-apply-privilege-and-work-product-protection-generative The law is not settled. I looked into on-premises AI for legal as a business idea but decided it's not a great one right now.