Summary using Kagi Summarizer. Disclaimer: this summary uses LLMs, so the summary may, in fact, be bullshit.

Title: LLMs are bullshitters. But that doesn't mean they're not useful | Kagi Blog

The article "LLMs are bullshitters. But that doesn't mean they're not useful" by Matt Ranger argues that Large Language Models (LLMs) are fundamentally "bullshitters" because they prioritize generating statistically probable text over factual accuracy. Drawing a parallel to Harry Frankfurt's definition of bullshitting, Ranger explains that LLMs predict the next word without regard for truth. This characteristic is inherent in their training process, which involves predicting text sequences and then fine-tuning their behavior. While LLMs can produce impressive outputs, they are prone to errors and can even "gaslight" users when confidently wrong, as demonstrated by examples like Gemini 2.5 Pro and ChatGPT. Ranger likens LLMs to historical sophists, useful for solving specific problems but not for seeking wisdom or truth. He emphasizes that LLMs are valuable tools for tasks where output can be verified, speed is crucial, and the stakes are low, provided users remain mindful of their limitations. The article also touches upon how LLMs can reflect the biases and interests of their creators, citing examples from Deepseek and Grok. Ranger cautions against blindly trusting LLMs, especially in sensitive areas like emotional support, where their lack of genuine emotion can be detrimental. He highlights the potential for sycophantic behavior in LLMs, which, while potentially increasing user retention, can negatively impact mental health. Ultimately, the article advises users to engage with LLMs critically, understand their underlying mechanisms, and ensure the technology serves their best interests rather than those of its developers.

Link: https://kagi.com/summarizer/?target_language=&summary=summary&url=https%3A%2F%2Fblog.kagi.com%2Fllms
If all your relationships fail in the same manner, it is likely that the problem is you.

> One friend became "convinced" that every major news story was manufactured consent. Another started treating political disagreement as evidence of moral corruption. A third began using the word "liberal" as if it was a personality disorder rather than loose coalitions of sometimes contradictory beliefs.

Manufactured consent is a real thing, with mounting evidence that it's becoming increasingly prevalent. The ownership structures around major news outlets are worrisome, and outlets many considered 'reliable' for years are now showing seriously problematic habits (like genocide erasure - lookin' at you, NYT). Liberalism has come under completely valid scrutiny as we've watched fiscal policies implemented by Clinton and Obama blow up in our faces. No, we don't think Reaganomics is anything but a grift, but many of us see the grift in NAFTA and the ACA and Gramm-Leach-Bliley, and we have begun to question the honesty of centrist liberal economic policies because we are seeing them fail catastrophically.

> The incentive gradient was clear: sanity was expensive, and extremism paid dividends.

The author is doing something subtle here - without defending or interrogating the statement, they are saying "not being liberal/centrist is extremism, and thus invalid." I call bullshit. I have not profited or benefited from my "extreme" leftist views. If anything, I take a risk every time I talk about them out in the open. My comment history is going to be visible to all future employers; should the government continue its rightward slide, I'll have a target painted on my back that I put there myself. I don't believe the things I believe because it's convenient; I believe them because, in my estimation, we are operating on a set of failed systems, and it's important that we fix them because they present a real and present danger.

We have Trump because Biden was utterly incapable of facing the actual problems people are having with the economic prosperity gap. If you don't address the actual hardship in people's lives, you leave the door open for a huckster to make those promises for you. Most people will take the unreliable promise of a better tomorrow over being lied to about whether they even have a problem. You don't need a PhD in economics to know that, whatever the GDP might be, you're still broke and can't afford to feed your kids.
The duopoly comes about from a few things forcing it.

First, there's the "it took a while to build it" problem. iOS and Android had decades of time to get to where they are now, with centuries of developer-hours put into writing them. That makes it challenging for others to get in. Not impossible, but really challenging, and for the company it's likely a loss for a long while before there's even a possibility of it not being a loss. Windows Phone was being worked on for 3 years before the iPhone was released, wasn't released for another 3 years after that... and wasn't exactly a success.

Next is the licensing of the modems for the phone spectrum. That takes FCC approval in the US and isn't something that random companies do without good reason. Part of that licensing is the requirement that the device be locked down sufficiently that the user can't do malicious things on the radio spectrum with it... and that tends to go against many of the open source ideals. It's a preemptive Tivoization of the device.

Assuming those two parts are solved, the next challenge is making it a tool you'd use in place of an iPhone or an Android phone. Things like holding PCI data. That again makes it difficult: persuading a bank that the device can act as a payment card, and that the authorization is sufficient to avoid fraud from either the apps on the device or the user injecting payment cards they don't own. Likewise, allowing the device's digital wallet to act as an identification card - https://www.tsa.gov/digital-id/participating-states and https://www.apple.com/newsroom/2025/11/apple-introduces-digital-id-a-new-way-to-create-and-present-an-id-in-apple-wallet/ - requires trust between the government and the company that is likely absent with an open source device.

I'd love to see an iPod touch-like device (non-phone) that lets me run apps or develop my own, build up an ecosystem, and demonstrate that trusting it is feasible... but so far I haven't seen many that have lasted beyond the Kickstarter money running out. I've got a reMarkable... which isn't exactly small (or cheap). I'd like to see more things like that in other form factors that let me do things with it akin to https://developer.remarkable.com
I've set up PXE booting at two previous companies for very different use cases.

The first was to automate server deployment. We ran bare metal servers, and even though we had managed hosting in our data centre, the installation, configuration, and deployment of a server could potentially take days, since it was just me doing it and I had other things to do. So one day I set to work. I installed an Ubuntu server the same way I always did and then captured the debconf configuration to turn into a preseed file. I set up the disk partitioning, etc., and configured the OS to boot from DHCP. Then I configured the DHCP server with the MAC address of every server we got and an associated IP address, so that a given physical server would always get the same IP. Then I set up an internal apt repository; that's where I put custom packages, backports I had to recompile, third-party packages (e.g., Percona), and so on. Lastly, I set up Salt (the config management/orchestration tool, like Puppet or Chef or Ansible) with a nice simple (read: detailed) configuration.

The machines would be configured to boot via PXE. They'd load the kernel and initrd, which contained the preseed file that answered all of the installation/configuration questions. Then the post-install shell script would start Salt and run the initial configuration, much of which was keyed off the hostname. This would turn the current DHCP-provided IP address into a static networking configuration so the server wasn't reliant on DHCP anymore; it would ensure SSH keys were installed and the right services were enabled or disabled, and install packages based on the hostname (which represented the role; e.g., db02.blah.blah got Percona installed). I also had some custom data sources (grains, I believe, in Salt's terminology) so that I could install the right RAID controller software based on which PCI devices were present. After all that, it would reboot. Once it rebooted from the local disk, Salt would pick back up again and do the rest of the configuration (now that it wasn't running from a chroot and had all the required systemd services running). What used to take me several days for two servers turned into something one of our co-ops could do in an hour.

The second was at another company that wanted to standardize the version of Linux its developers were running. Again, I set up an Ubuntu installer, configured it to boot iPXE, and had it fetch the kernel and the root image via HTTPS. The Ubuntu installer at that point was a Snap, and the default 'source' was a squashfs file that it unpacked to the new root filesystem before proceeding with package installation. I set up some scripts and configuration to take the default squashfs filesystem, unpack it, install new packages via apt in a chroot, and then repack it again. This let me do things like ensure Firefox, Thunderbird, and Chrome were installed and configured not from snaps, update to the latest packages, make sure GNOME was installed, etc. A lot of that was stuff the installer would do anyway, of course, but given we were on gigabit ethernet it was significantly faster to download a 2 GB squashfs file than to download a 512 MB squashfs file and then download new or updated packages on top. Once again, what used to start with "Here's a USB, I think it has the latest Ubuntu on it" and take most of a day turned into "Do a one-off boot from the network via UEFI, choose a hostname, username, and password, and then just wait twenty minutes while you get a coffee or meet your coworkers".
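For anyone curious what those pieces actually look like, here's a minimal sketch of the first setup's DHCP reservations and preseed file. All hostnames, MACs, addresses, and package names here are made up for illustration; the real configs were obviously more involved.

    # /etc/dhcp/dhcpd.conf (excerpt) -- ISC dhcpd
    # PXE clients get pointed at the TFTP server; each physical server's
    # MAC is pinned to a fixed IP so it always comes up at the same address.
    subnet 10.0.0.0 netmask 255.255.255.0 {
        option routers 10.0.0.1;
        next-server 10.0.0.5;                  # TFTP server with kernel/initrd
        filename "pxelinux.0";
    }
    host db02 {
        hardware ethernet 52:54:00:ab:cd:02;   # hypothetical MAC
        fixed-address 10.0.0.102;
    }

    # preseed.cfg (excerpt) -- answers the installer's debconf questions
    d-i debian-installer/locale string en_US.UTF-8
    d-i partman-auto/method string lvm
    d-i pkgsel/include string salt-minion
    # hand off to config management on first boot
    d-i preseed/late_command string in-target systemctl enable salt-minion

And the squashfs customization loop from the second setup was roughly this, assuming suitable apt sources are already configured inside the chroot (e.g. for a non-snap Firefox):

    #!/bin/sh
    # Unpack the stock installer squashfs, customize it in a chroot,
    # then repack it so the installer lays the fat image down directly.
    set -e
    unsquashfs -d rootfs ubuntu-desktop.squashfs
    mount --bind /proc rootfs/proc
    mount --bind /dev rootfs/dev
    chroot rootfs apt-get update
    chroot rootfs apt-get dist-upgrade -y
    chroot rootfs apt-get install -y firefox thunderbird   # debs, not snaps
    umount rootfs/proc rootfs/dev
    mksquashfs rootfs ubuntu-desktop-custom.squashfs -comp xz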
I even found a "bug" (misbehaviour) in the installer: it would mount the squashfs and then rsync the files across, which was significantly slower because the kernel only used one thread for decompression; `unsquashfs` could use all cores and was dramatically faster, so I got to patch that (though I'm not sure the patch ever made it into the installer).

The one thing I couldn't make work was the OEM installation, where you lay everything down onto the system unattended and then put the user through the Ubuntu OOBE process. That would have made it far easier to pre-provision systems for users ahead of time. I did replace the default Plymouth splash screen logo with our company logo, though, which was pretty cool.

I also set up network booting of macOS at another job, but that's a very different process because it has all its own tooling for managing it, and Apple ended up moving from custom deployment images to static images plus MDM for post-install configuration.

TL;DR: network booting is pretty great, actually; it's a very niche use case, but if you're clever you can get a lot done. There are also lots of options for booting into a bootloader that can then present other choices, letting you netboot Ubuntu Desktop, Ubuntu Server, Windows, RHEL, Gentoo, a rescue image, or anything else you want.
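To make that squashfs bug concrete, the two approaches boil down to this (file names are placeholders):

    # what the installer did: loop-mount and copy; the kernel decompresses
    # the squashfs in a single thread, so this crawls
    mount -o loop,ro filesystem.squashfs /mnt/src
    rsync -a /mnt/src/ /target/

    # the faster path: unsquashfs decompresses in parallel on all cores
    unsquashfs -f -d /target filesystem.squashfs

And the "bootloader that presents other options" idea from the TL;DR can be as small as an iPXE menu script; the URLs here are hypothetical:

    #!ipxe
    menu Pick something to netboot
    item ubuntu   Ubuntu Desktop
    item rescue   Rescue image
    choose target && goto ${target}

    :ubuntu
    kernel http://boot.example.internal/ubuntu/vmlinuz ip=dhcp
    initrd http://boot.example.internal/ubuntu/initrd
    boot

    :rescue
    kernel http://boot.example.internal/rescue/vmlinuz
    initrd http://boot.example.internal/rescue/initrd.img
    boot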