I've been noticing lately that the discussion around LLMs and using them for programming has begun to expose how little some people understand programming, or what software developers do in general. I think I generally agree with the author as a result. A year ago I might have thought this was naive, but today... I think software development has more of a moat than I thought, for more reasons than I originally perceived.

There are a lot of senior developers who discuss how they use LLMs and why, for example, and it exposes that even with a decade or so of experience, people can have extremely thin and weak understandings of what they're doing, and why. That isn't to cast shade at all; I've been (and will be) the experienced yet clueless person at times. I could be right now. A reductive description is that it's turning a lot of people into expert beginners, and the coworkers they collaborate with most have no way of compensating for it.

LLMs are useful and powerful tools, but they can't make up for these kinds of deficiencies yet, and it doesn't seem like they will very soon either. I can't imagine the messes people are creating with LLMs when they have no experience at all. They might feel empowered (and to a degree they certainly are), but when it comes to complex, large, mission-critical, and/or distributed systems... these tools are nowhere near where they need to be.

I've also found that important software can now become more ambitious. We seem to model the risk developers face based on the software of today, but what I'm seeing is that I'm able to build and maintain more ambitious projects than ever. I'll be pushing the limits of what's possible for myself for a while yet, and I suspect it will continue to produce value for the people I work with. I could have these tools do the work I used to do (or help me do it faster) and leave it at that, but the reality is that I don't just stop there.
I keep going, I continue refining, I discover more ways to make it more valuable, I iterate faster and maintain a tighter feedback loop with the people who use the things I create. So why would I be eliminated from that process? Do people really believe that my position in that loop will be eliminated by AI? This seems to disregard a myriad of qualities that allow software developers to be effective and valuable team members. If that were to happen, frankly, I believe far more roles than software development would be eliminated at that point. The implications would go far beyond software.
The cost of ownership for an OpenClaw, and how many credits you'll use, is really hard to estimate since it depends so wildly on what you do. I can give you an OpenClaw instruction that will burn over $20k worth of credits in a matter of hours. You could also not talk to your claw at all for the entire month, set up no crons / recurring activities / webhooks / etc., and get a bill of under $1 for token usage.

My usage of OpenClaw ends up costing on the order of $200/mo in tokens with the Claude Code Max plan (which you're technically not allowed to use with OpenClaw anymore), or over $2000 if I were using API credits, I think (which Klause is, I believe, based on their FAQ mentioning OpenRouter). So yeah, what I consider fairly light and normal usage of OpenClaw can quite easily hit $2000/mo, but it's also very possible to hit only $5/mo.

Most of my tokens are eaten up by having it write small pieces of code and by a good amount of web browser orchestration. I've had two-sentence prompts that result in it spinning up subagents to browse and summarize thousands of webpages, which really eats a lot of tokens. I've also given my OpenClaw access to its own AWS account, and it's capable of spinning up Lambdas, EC2 instances, writing to S3, etc., so it also currently has an AWS bill of around $100/mo (which I only expect to go up). I haven't given it access to my credit card directly yet, so it hasn't managed to buy gift cards for any of the friendly Nigerian princes that email it to chat, but I assume that's only a matter of time.
There's something real in the impedance mismatch argument that I think the replies here are too quick to dismiss. The browser's programming model is fundamentally about a graph of objects with identity, managed by a GC, mutated through a rich API surface. Linear memory is genuinely a poor match for that, and the history of FFI across mismatched memory models (JNI, ctypes, etc.) tells us this kind of boundary is where bugs and performance problems tend to concentrate. You're right to point at that.

Where I think the argument goes wrong is in treating "most websites don't use WASM" as evidence that WASM is a bad fit for the web. Most websites also don't use WebGL, WebAudio, or SharedArrayBuffer. The web isn't one thing. There's a huge population of sites that are essentially documents with some interactivity, and JS is obviously correct for those. Then there's a smaller but economically significant set of applications (Figma, Google Earth, Photoshop, game engines) where WASM is already the only viable path because JS can't get close on compute performance.

The component model proposal isn't trying to replace JS for the document-web. It's trying to lower the cost of the glue layer for that second category of application, where today you end up maintaining a parallel JS shim that does nothing but shuttle data across the boundary. Whether the component model is the right design for that is a fair question. But "JS is the right abstraction" and "WASM is the wrong abstraction" aren't really in tension, because they're serving different parts of the same platform.

The analogy I'd reach for is GPU compute. Nobody argues that shaders should replace CPU code for most application logic, but that doesn't make the GPU a "dud" or a second-class citizen. It means the platform has two execution models optimized for different workloads, and the interesting engineering problem is making the boundary between them less painful.
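To make the glue-layer cost concrete, here's a minimal sketch of what crossing a linear-memory boundary looks like. The names (`guest_alloc`, `guest_count_bytes`) and the simulated linear memory are hypothetical stand-ins, not any real toolchain's ABI, but real WASM glue does the same copy-and-pass-offsets dance: everything that crosses is an integer, so even "pass a string" needs an exported allocator plus a (pointer, length) pair.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Module side of a hypothetical WASM boundary. The "guest" never sees host
// objects; it only sees offsets into its own linear memory.
static std::vector<uint8_t> linear_memory(64 * 1024);
static uint32_t bump = 0;

extern "C" uint32_t guest_alloc(uint32_t n) {
    uint32_t p = bump;
    bump += n;
    return p;  // an offset into linear memory, not a host pointer
}

extern "C" uint32_t guest_count_bytes(uint32_t ptr, uint32_t len) {
    // The module never receives a host string; just bytes at an offset
    // that it must decode/process itself. Here it trivially reports length.
    (void)ptr;
    return len;
}

// Host-side shim: the "parallel JS shim" the comment above describes,
// rendered in C++ for illustration. It copies the host string into linear
// memory before every call; only integers cross the boundary.
uint32_t host_call_count_bytes(const std::string& s) {
    uint32_t p = guest_alloc(static_cast<uint32_t>(s.size()));
    std::memcpy(linear_memory.data() + p, s.data(), s.size());
    return guest_count_bytes(p, static_cast<uint32_t>(s.size()));
}
```

The copy in `host_call_count_bytes` is the overhead the component model proposal is trying to standardize away: today every application invents this marshaling layer itself.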
> WebAssembly has a sandbox and was designed for untrusted code.

So does JavaScript.

> It's almost impossible to statically reason about JS code, and so browsers need a ton of error prone dynamic security infrastructure to protect themselves from guest JS code.

They have that infrastructure because JS has access to the browser's API. If you tried to redesign all of the web APIs in a way that exposes them to WebAssembly, you'd have an even harder time than exposing those APIs to JS, because:

- You'd still have all of the security troubles. The security troubles come from having to expose an API that can be called adversarially and can be passed adversarial data.

- You'd also have the impedance mismatch that the browser is reasoning in terms of objects in a DOM, and WebAssembly is a bunch of integers.

> There are dynamic languages, like JS/Python that can compile to wasm.

If you compile them to linear-memory wasm instead of just running them directly in JS, then you lose the ability to do coordinated garbage collection with the DOM. If you compile them to GC wasm instead of running them directly in JS, then you're just adding unnecessary overhead for no upside.

> Also I don't see how dynamic typing is required to have API evolution and compat.

Because, for example, if a browser changes the type of something that happens to be unused, or removes something that happens to be unused, it only breaks actual users at time of use, not potential users at time of load.

> Plenty of platforms have statically typed languages and evolve their APIs in backwards compatible ways.

We're talking about the browser, which is a particular platform. Not all platforms are the same. The largest comparable platform is OSes based on the C ABI, which rely on a "kind" of dynamic typing (stringly typed, basically: function names in a global namespace, plus argument-passing ABIs that let you mismatch a function signature and get away with it).
> The first major language for WebAssembly was C++, which is object oriented.

But the object orientation is lost once you compile to wasm. Wasm's object model when you compile C++ to it is an array of bytes.

> To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.

Then what's your excuse for why wasm, despite years of investment, is a dud on the web?
I'd expect it to come down to data-oriented design: SoA (structure of arrays) rather than AoS (array of structures). I skimmed the author's source code, and this is where I'd start: https://github.com/define-private-public/PSRayTracing/blob/8dea5113f6b00e1ef6bb5c0c117562e971280271/render_library/Objects/HittableList.cpp#L39

Instead of an `_objects`, I might try for a `_spheres`, `_boxes`, etc. (Or just `_lists`, still using the virtual dispatch but for each list, rather than each object.)

The `asin` seems to be used just for spheres. Within my `Spheres::closest_hit` (note plural), I'd work to SIMDify it. (I'd try to SIMDify the others too, of course, but apparently not with `asin`.) I think it's doable: https://github.com/define-private-public/PSRayTracing/blob/8dea5113f6b00e1ef6bb5c0c117562e971280271/render_library/Objects/Sphere.cpp#L34

I don't know much about ray tracers either (having only written a super-naive one back in college), but this is the general technique used to speed up games, I believe. Besides enabling SIMD, it's more cache-efficient and minimizes dispatch overhead.

edit: there's also stuff that you can hoist in this impl. Restructuring as SoA isn't strictly necessary to do that, but it might make it more obvious and natural. As an example, this `ray_dir.length_squared()` is the same for the whole list. You'd notice that when iterating over the spheres. https://github.com/define-private-public/PSRayTracing/blob/8dea5113f6b00e1ef6bb5c0c117562e971280271/render_library/Objects/Sphere.cpp#L43
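To illustrate, here's a minimal sketch of what an SoA `Spheres` container with a plural `closest_hit` could look like. The names and layout are my invention, not the author's code; the loop body is the standard half-b form of the ray-sphere quadratic, with the `dot(dir, dir)` term hoisted out as noted above:

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical SoA layout: one array per component, instead of a list of
// polymorphic objects behind virtual dispatch.
struct Spheres {
    std::vector<float> cx, cy, cz;  // centers
    std::vector<float> r2;          // radius squared, precomputed

    void add(float x, float y, float z, float r) {
        cx.push_back(x); cy.push_back(y); cz.push_back(z);
        r2.push_back(r * r);
    }

    // Closest hit over ALL spheres in one tight loop (note the plural).
    // Returns the sphere index, or -1 on miss; t_out gets the hit distance.
    int closest_hit(float ox, float oy, float oz,
                    float dx, float dy, float dz,
                    float t_min, float& t_out) const {
        // Hoisted: dot(dir, dir) is the same for every sphere in the list.
        const float a = dx * dx + dy * dy + dz * dz;
        int best = -1;
        float best_t = std::numeric_limits<float>::max();
        for (std::size_t i = 0; i < cx.size(); ++i) {
            // Half-b ray-sphere quadratic; branch-light, contiguous loads,
            // so the compiler has a decent shot at auto-vectorizing it.
            const float lx = cx[i] - ox, ly = cy[i] - oy, lz = cz[i] - oz;
            const float half_b = lx * dx + ly * dy + lz * dz;
            const float c = lx * lx + ly * ly + lz * lz - r2[i];
            const float disc = half_b * half_b - a * c;
            if (disc < 0.0f) continue;
            const float t = (half_b - std::sqrt(disc)) / a;
            if (t > t_min && t < best_t) { best_t = t; best = (int)i; }
        }
        t_out = best_t;
        return best;
    }
};
```

From here, explicit SIMD (processing 4 or 8 spheres per iteration over the parallel arrays) is a mechanical next step, which is exactly what the AoS-with-virtual-dispatch layout prevents.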
When Intel specced the rsqrt[ps]s and rcp[ps]s instructions ~30 years ago, they didn't fully specify their behavior. They just said their relative error is "smaller than 1.5 * 2⁻¹²," which someone thought was very clever because it gave them leeway to use tables or piecewise linear approximations or digit-by-digit computation or whatever was best suited to future processors. Since these are not IEEE 754 correctly-rounded operations, and there was (by definition) no software that currently used them, this was "fine".

And mostly it has been OK, except for some cases like games or simulations that want to get bitwise identical results across HW, which (if they're lucky) just don't use these operations or (if they're unlucky) use them and have to handle mismatches somehow. Compilers never generate these operations implicitly unless you're compiling with some sort of fast-math flag, so you mostly only get to them by explicitly using an intrinsic, and in theory you know what you're signing up for if you do that.

However, this did make them unusable for some scenarios where you would otherwise like to use them, so a bunch of graphics and scientific computing and math library developers said "please fully specify these operations next time," and now NEON/SVE and AVX512 have fully-specified reciprocal estimates,¹ which solves the problem unless you have to interoperate between x86 and ARM.

¹ e.g. Intel "specifies" theirs here: https://www.intel.com/content/www/us/en/developer/articles/code-sample/reference-implementations-for-ia-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.html ARM's is a little more readable: https://developer.arm.com/documentation/ddi0596/2021-03/Shared-Pseudocode/Shared-Functions?lang=en#impl-shared.RecipSqrtEstimate.2
Posts predicting this were apparently flagged as "political". For example, Bruce Schneier's warning [0]. For a site called Hacker News, DOGE unfortunately attracted a different kind of notoriety than, say, the numerous merger, acquisition, and VC maneuvers reaching the front page. If hacker punks nominally subvert the established order by flouting laws and authorities, then DOGE was very much hacking.

Tina Peters is an unsophisticated hacker punk. She doesn't live up to the social engineering chops of Kevin Mitnick, but her plan did involve a Geek Squad uniform. Legendary, but too "political". Attracts too much noise, not enough signal. That's why you didn't see an elevation of the developed thoughts you're talking about.

Since the beginning of DOGE, it has not been especially bold to predict:

- DOGE will cost more than it saves. The seminal errors, mistaking $ millions for $ billions, world-write permissions on their Drupal site, etc., convinced us that we couldn't expect deliberate professionalism.

- The very first whistleblower, out of NTSB, convinced us that exfiltration was the goal. This is within the top 5 whistleblower stories here. The critical detail was their instruction that access logs be scrubbed.

- And the general public smelled it, too. No one doubts that the threats against Tesla dealerships came from civil libertarian radicals, not recently-fired USAID bean counters.

- When Peter Thiel's FBI handler, Johnathan Buma, went whistleblower a few months into DOGE, it wasn't about Thiel. He saw a Russian active measure influencing Musk's inner circle. One of Kash Patel's first acts as FBI director was to order Buma arrested.

So, the commentary worrying about "big tech" was commentary within Y Combinator's sphere.

[0]: https://news.ycombinator.com/item?id=43035977
There is a vast difference between a student reading from a textbook and a researcher/scientist reading studies and papers.

As a student, you are to be directed* in your reading by an expert in the field of study that you are learning from. In many higher-level courses a professor will assign multiple textbooks, and assign reading from only particular chapters of those textbooks, specifically because they have vetted those chapters for accuracy and alignment with their curriculum.

As a researcher or scientist, a very large portion of your job is *verifying and then integrating* the research of others into your domain knowledge. The whole purpose of replicating studies is to look critically at the methodology of another scientist and try as hard as you can to prove them wrong. If you fail to prove them wrong and can reproduce the same results, they have done Good Science.

A textbook is the product of scientists and researchers Doing Science and publishing their results, other scientists and researchers *verifying via replication*, and then one of those experts in the field doing their best to compile their knowledge of the domain into a factually accurate and (relatively) easy-to-understand summary of the collective research performed in that domain.

The fact is that people make mistakes, and the job of a professor (who is an expert in a given field) is to identify which errors have made it through the various checks mentioned above and into circulation, oftentimes making subjective judgment calls about what is 'factual enough' for the level of the class they are teaching, and to leverage that to build a curriculum that is sound and helps elevate other individuals to the level of knowledge required to contribute to the ongoing scientific journey.
* In short, it's not a bad thing if you're learning a subject by yourself for your own purposes and are not contributing to scientific advancement or working as an educator in higher education. You can self-study, but to become an expert while doing so requires extremely keen discernment to be able to root out the common misconceptions that proliferate in any given field. In a blue-collar field this would be akin to picking up 'bad technique' by watching YouTube videos published by another self-taught tradesman; it's not always obvious when it happens.
Stories like this probably scare some people off from electronic voting, but I don't think this is that big of a deal. When we finish voting operations in my area, we load the ballots onto someone's personal vehicle and they take them, securely, to where they need to go. That vehicle could get blown up and those ballots could be gone, though I think we could still get a record of the results.

That being said, for the United States I am in favor of in-person voting requiring proof of citizenship, and of making "voting day" a paid national holiday. Not so much for technical or efficiency reasons but for social ones. I'd argue it should be mandatory, but I don't think we should force people to do anything we don't have to force them to do, and I'm not sure we want disinterested people voting anyway. In exercising democracy, requiring people to put in a minimal amount of thought and effort goes a long way. It should be a celebratory day with cookies and apple pie and free beer for all. Not some cold, AI-riddled, stay-in-your-house-and-never-meet-your-neighbors process of clicking a few buttons to accept the Terms of Democracy.

I know there's a lot of discussion around "efficiency" or "cost" or "accessibility" or how difficult it supposedly is to have an ID (which is weird when you look at how other countries run elections), and there are certainly things to discuss there, but by and large I think the continued digitalization and alienation of Americans is a much worse problem, and one that can be addressed with more in-person activities and participation in society. We're losing too many touchpoints with reality.
> What percentage of the population has an ID in a place where it's difficult to get one vs somewhere it is easier?

Not the OP, but except for passports (and passport cards)... there isn't really any federal-level ID in the US (and passport booklets/cards are expensive, just a bit over $100 IIRC). The nearest equivalent at the state level is the driver's license, which is also on the expensive side considering the ancillary costs (because it's a driver's license, not just an identification card). This is also the reason why US-centric companies like PayPal accept a driver's license as proof of identification (obviously where not otherwise prohibited by local laws). Some states (New York, for example) do have an ID card (called a non-DL ID; that's how embedded the driver's license is in the US), but most states do not have an ID per se.

> What constitutes an ID being expensive?

Developing countries, rather ironically, issue their IDs for free. Okay, indirectly paid by taxes, but there's no upfront cost. The above-mentioned US identity documents have a clear cost attached to them.

> How is the rest of the world dealing with this problem? Do you think that their democratic processes might be compromised because of it?

I can't speak about other countries (where there is an ID system, it's not a controversial affair to them), so instead I'll answer with a reflection on the US system. Unfortunately, American ID politics are hard, mainly due to concerns about surveillance, but I think (only my opinion) also because some want those historically disenfranchised (even if fully native-born US citizens) to remain de facto disenfranchised. This means that there is no uniform and freely-issued identification system in the US (or even a requirement at the state level to create one). Unfortunately, this... is a tough nut to crack, politically speaking.