I've been saying this for maybe nine months, and my consulting work keeps proving it: Go is an excellent language for LLM code generation. There's a large, stable training corpus, one way to write it, one build system, one formatter, static typing, and CSP concurrency without C++ footguns. The language hasn't had a breaking version in over a decade, and there's minimal framework churn. When I advise teams adopting agentic coding workflows at my consultancy [0], Go delivers consistent results with Claude and Codex far more often than what I see with clients using TypeScript and/or Python. When LLMs have to navigate Python and TypeScript, there's a massive combinatorial space of frameworks, typing approaches, and utility libraries. Too much optionality in the training distribution. The output is high entropy and doesn't converge. Python only dominated early AI coding because ML researchers write Python and models were trained on Python first. That was path dependence, not merit.

The thing nobody wants to say is that the reason serious programmers historically hated Go is exactly why LLMs are great at it: there's a ceiling on abstraction. Go has many, many failings (e.g. it took over a decade to get generics), but LLMs don't care about expressiveness; they care about predictability. Go 1.26 just shipped a completely rewritten go fix, built on the analysis framework, that does AST-level refactoring automatically. That's huge for agentic coding because it keeps codebases modern without needing the latest language features in the training data or wasting tokens looking up new signatures.

I spent four years building production public key infrastructure in Golang before LLMs [1]. After working with coding agents like everyone else and domain-switching for clients, I've become more of a Go advocate, because the language finally delivers on its promise.
Engineers have a harder time complaining about verbose, boilerplate syntax when an LLM writes it correctly every single time.

[0]: https://sancho.studio
[1]: https://github.com/zoom/zoom-e2e-whitepaper
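To make the convergence point concrete, here is a sketch (my example, not from the comment above) of the kind of function where gofmt-formatted, explicit-error Go leaves an LLM essentially one idiomatic shape to emit:

```go
// Toy illustration of Go's "one way to write it" property: explicit error
// returns with early exits, the same shape in virtually all training data.
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort is the kind of function generated code converges on:
// parse, wrap the error with context, validate, return early.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	for _, s := range []string{"8080", "notaport", "70000"} {
		if p, err := parsePort(s); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("port:", p)
		}
	}
}
```

There is no framework choice, no typing dialect, and only one formatter-approved layout here, so sampled completions have far less room to diverge than in Python or TypeScript.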
Yeah, I don't care for Go, but I expect it to win here. Its performance is good enough for most use cases, it has a huge ecosystem of libraries and lots of training data, and it deploys as a single binary, so users don't need to install anything else.

I expect Rust to gain some market share since it's safe and fast with a better type system, but it's complex enough that many developers would struggle on their own. And IME, AI currently also struggles with manual memory management in large projects and can end up hacking together things that "work" but run even slower than GC'd code. So I think the ecosystem will grow, but even once AI masters it, the time and tokens required for planning, building, and testing will always exceed those of a GC'd language, so I don't see it ever usurping Go, at least not in the next decade.

I wish the winner were OCaml, since it has the type safety of Rust (or better) and the development speed of Go. But for whatever reason it never became mainstream, and the lack of libraries and training data will probably relegate it to the dustbin.

Basically, training data and libraries >>> operational characteristics >>> language semantics in the AI world. I have a hard time imagining any other language maintaining a solid advantage over those first two. There's less need for a managed runtime and definitely no need for an interpreted language, so I imagine Java and Python will slowly start to be replaced. I also have to imagine C/C++ will be horrible for AI, for obvious reasons. Of course JS will still be required for the web, Swift for iOS, etc., but for mainstream development I think it's going to be Rust and Go.
A decade ago this didn't require LLMs, cutting-edge hardware, and a trillion dollars of GPUs. This was a Facebook feature in like 2012.

> What I really want is my phone to transcribe all of my phone calls to a Notes document

This has been doable for decades. Why haven't you done it? My Pixel phones did this with voicemail before LLMs. Windows Vista shipped with full-featured dictation functionality, and it works better than you would expect: all local, all classical algorithms, all cheap to evaluate. If that wasn't accurate enough, Dragon's speech-to-text tools were the gold standard for most of modern computing history and greatly surpassed the accuracy of the built-in system.

BTW, you can, on any Windows machine right now, access that built-in voice recognition, and with a constrained vocabulary (say, if you only want a few specific voice commands) it gets near-perfect accuracy consistently. You have to search for old documentation now, because Microsoft wants to hide that you don't need an internet connection, an Azure account, and a monthly bill to ship accurate voice recognition with your app. It's trivial to use from both C++ and C#, and from anything else that can invoke native code, and the workflow is easy enough to understand. I built an app on it instead of buying one of those $10 "voice control your game" apps to add voice control to ARMA, and it was easier to implement the voice recognition than it was to copy and paste the native-code invocations of the Win32 API to inject keystrokes. And I don't even write C# in general.

https://learn.microsoft.com/en-us/previous-versions/windows/desktop/ms723627(v=vs.85)

There's tons of documentation about "grammars" and configuration, but the default configuration, IIRC, just turns speech input into text, and does so with at least 85% accuracy even without the user training the recognizer to their voice.
If you build context-specific grammars, or a hierarchical grammar to support a real UX that isn't just hoping some code knows how to interpret raw speech, you will get dramatically better recognition performance.

This is IMO a frequent pattern. Time and time again, the people who keep saying "I want LLMs to do X" don't seem to be aware that X was a robust and mature area of research decades ago! They don't seem to be aware that you could already do X, and even buy ready-to-go software for that purpose! Often enough, the LLM version is an outright regression in functionality, as things that were doable with a single microchip in 1960 now require an internet connection.

> Since it isn't recording an audio conversation,

So, to be clear, you want this functionality explicitly to bypass the law? Federally, and in 39-ish states, you only need your own consent anyway.
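As a toy illustration of why constraining the vocabulary helps (this is a sketch in Go, not the actual SAPI API): with only a handful of legal phrases, a recognizer can snap a noisy hypothesis onto the nearest in-grammar command, or reject it outright when nothing is plausibly close.

```go
// Sketch: a tiny fixed "grammar" of commands plus fuzzy matching.
// The command set and threshold are invented for illustration.
package main

import "fmt"

var grammar = []string{"open map", "reload", "switch weapon", "cease fire"}

// editDistance is plain Levenshtein distance between two strings.
func editDistance(a, b string) int {
	prev := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(a); i++ {
		cur := make([]int, len(b)+1)
		cur[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			cur[j] = min(prev[j]+1, cur[j-1]+1, prev[j-1]+cost)
		}
		prev = cur
	}
	return prev[len(b)]
}

// recognize snaps a noisy hypothesis to the closest grammar entry,
// rejecting it when nothing in the grammar is close enough.
func recognize(heard string) (string, bool) {
	best, bestD := "", 1<<30
	for _, cmd := range grammar {
		if d := editDistance(heard, cmd); d < bestD {
			best, bestD = cmd, d
		}
	}
	// Arbitrary threshold for the sketch: allow ~1/3 of the phrase garbled.
	if bestD*3 > len(best) {
		return "", false
	}
	return best, true
}

func main() {
	fmt.Println(recognize("opem map"))    // snaps to "open map"
	fmt.Println(recognize("lorem ipsum")) // rejected: nothing close in grammar
}
```

With a four-phrase grammar, almost any acoustic hypothesis lands unambiguously on one command or is thrown out, which is the intuition behind SAPI's constrained-grammar accuracy.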
Bubblehouse | Fully REMOTE | Full-Time | $200–250k | Principal Engineer

Bubblehouse is a fast-growing custom loyalty platform, tripling revenue each year. Headquartered in NYC, the entire team is fully remote and spread across the globe. We power loyalty programs for brands like American Girl and Old Spice.

We’re expanding our lean team of extremely experienced developers. Companies are switching from other platforms thanks to the customization and flexibility we offer, enabled by our pace and technical excellence, which we intend to keep for years to come.

We run on Golang and use custom data storage on top of local key-value stores, colocating storage and compute on dedicated hardware servers and reading data directly from mmap’ed pages of the database. Ever come across the HN claim that one can run Twitter on a single machine these days? We’re doing that in production. Zero lines of React, almost zero third-party dependencies (carefully vetted), every line of JavaScript written by hand with respect for and understanding of the web platform. We render HTML server-side like it’s 2005.

Looking for:

1. Top-to-bottom understanding of the software stack, from the modern-ish web platform to CPU caches.
2. Thinking and problem-solving outside the box. (We don’t _always_ go for unconventional solutions, but we do it often enough to require a person who can do justice to the entire problem space at every step.)
3. Demonstrated ability and hunger to learn new things quickly. (Every month we’re doing things we have never done before.)
4. Broad experience across multiple programming paradigms, platforms, and software stacks.
5. Demonstrated care for software craftsmanship (which can take many forms).
6. Great spoken English, and availability to communicate 9am to noon in the New York time zone.
We give you a literally fast-paced environment (features delivered in days) where you get to solve very challenging problems with practical, advanced technology, take on entrenched market leaders, and help entrepreneurs across small and large businesses delight their fans.

Send a plain-text cover letter to andrey+hiring@bubblehouse.com. Help us see how you stand out. Summarize your experience. Link to 1–5 impressive things you’ve built and are proud of, link to where we can see some of your code, include your portfolio/CV, and describe the platforms and stacks you’re an expert in. How did you start programming? What are you most passionate about in technology? What are the most interesting or weird things you’ve done? What are your strongest-held professional opinions? Please make your email easy to read; we’ll appreciate it.

(If you have applied before, no need to re-apply; we’ll reach out.)
Location: Zürich, Switzerland (Europe/EU)
Remote: Yes; worldwide okay, depending on the arrangement.
Willing to relocate: Poland/Switzerland, optionally China, depending on the offer.
Technologies: Strong with distributed systems, networking (SD-WAN, VPN, ZTNA), authorization systems, web/network security, and more. Backend: Rust, TypeScript, Python, C#, Go, and a variety of other interpreted and compiled languages. Frontend: React, Angular, Tailwind, and so on. Relational and document databases. Data analytics. AWS and Azure. China specialist and interpreter; can also speak German.
Résumé/CV: https://sowinski.blue/files/cv.pdf
Email: igor at sowinski.blue

I have 9+ years of experience as a software engineer and technology-oriented China specialist, based in Switzerland. I designed and implemented a large part of the internal production systems at a well-known international printing company, and currently develop zero-trust solutions at a Swiss networking company.

I've been engaged with China since 2015. In 2020, I started a degree in Chinese Studies at the University of Zürich. In 2024, I finished a scholarship at BFSU, the best language university in China. I have interpreted between Polish, English, and Mandarin Chinese for the Polish Trade & Investment Agency at the Polish-Chinese Economic Forum in Shanghai (see pics on LinkedIn). I also keep tabs on technology and Chinese cybersecurity law through Chinese sources, and keep in touch with open-source enthusiasts based in Mainland China and in Taiwan.
Wait till you hear about Steven Donziger. Steven is a lawyer who helped Ecuador sue Chevron, which was polluting massively. The Ecuadorians won and secured a historic $9.5 billion judgment because the pollution was so egregious.

Did that end the matter? No. Chevron ran to American courts and argued that Donziger had helped secure this judgment by committing fraud. I believe the evidence for this was a video showing a minister and Donziger at a social gathering. The court ruled in Chevron's favor, which made the judgment unenforceable in the US.

As part of all this, Chevron wanted Donziger to hand over all communications and electronic devices associated with the Ecuador case. That is, of course, covered by attorney-client privilege. But the court agreed, and Donziger refused.

It didn't end there. Chevron (through its law firm) lobbied the Department of Justice to criminally prosecute Donziger over this. The DoJ declined. It didn't end there either. Chevron asked the court, and it agreed, to appoint Chevron's own law firm to conduct a private criminal prosecution. You might be asking "what is that?" and you'd be right to be confused. It rarely happens, but a civil court can pursue a private criminal prosecution. Donziger was convicted, disbarred, and spent years in home detention over this whole thing. The Appeals Court affirmed all of it, and the Supreme Court declined to intervene.

So does it surprise me that Greenpeace can get hit with a $345M judgment for hurting the feelings of an oil company? No, no it does not.
I still don't get what they're for. Most people I know end up in the same situation as me: buying one thinking you'll use it mostly as a writing device, but then it either ends up in a closet or becomes just a web browser you use while sitting on the couch watching TV. In that case, what do any of the improvements matter?

With first-party native apps it's not great for writing, editing PDFs, or drawing. I mean, the Notes app doesn't even have simple things like letting you zoom in. You'd think a common use case would be using it as a drawing tablet for your computer. Maybe not a common use case, but something a lot of people would end up using a few times a year (countless times I'd have loved to have a whiteboard on a Zoom call, but setting that up is annoying). There are great third-party apps that do this, but I think that just shows that Apple is either disconnected or just trying to extract money from developers.

It's also not great as a computer. In another thread I've mentioned that my laptop (a MacBook Air) is a glorified SSH machine, and frankly, an iPad should be the perfect device for that because of its size. But it seems they don't want me to use it like a computer, and idk why iOS locks down third-party terminals so much. It also sucks as a second monitor (why is everything monitor-related so bad with Apple?): it keeps disconnecting, I need to restart Bluetooth/AirDrop constantly to get it detected, and the angle it sits at on my desk... really?

I really want to know what you guys use it for, because mine just feels like expensive e-waste.
Mikado is really only powerful when dealing with badly coupled code. Outside of that context you’re kinda cosplaying (like people peppering Patterns into code without an actual plan). Refactoring is generally useful for annealing code enough that you can reshape it into separate concerns. But when the work hardening has gone on far too long, it usually seems like there’s no way to get from A to D without just picking a day when you feel invincible, getting high on caffeine, putting on your uptempo playlist, and telling people not to even look at you until you file your +1012 -872 commit.

I used to be able to do those before lunch. I also found myself the new maintainer of that code afterward. That doesn’t work when you’re the lead and people need you to brainstorm getting unblocked or to figure out weird bugs (especially when they’re calling your code). All the plates fall at that point.

It was less than six months after I figured out the workaround that I learned the term Mikado, possibly while trying to google whether anyone else had figured out what I had. I still like my elevator pitch better than theirs: work on your “top down” refactor until you realize you’ve found yet another whole call tree you need to fix, and feel overwhelmed and want to smash your keyboard. This is the Last Straw. Walk away from your keyboard until you calm down. Then come back, stash all your existing changes, and just fix the Last Straw.

I find that I’m always that meme of the guy giving up just before he finds diamonds in the mine. The Last Straw is always 1-4 changes from the bottom of the pile of suck, and when you start to propagate that change back up the call stack, you find that 75% of the other code you wrote isn’t needed; you just need to add an argument or a little conditional block here and there. So you can use your IDE’s local history to cherry-pick the couple of bits you already wrote on the way down that are still relevant, and dump the rest.
But you have to put that code aside to fight the Sunk Cost Fallacy that’s going to make you want to submit that +1012 instead of the +274 that is all you really needed (and which, by the way, is easier to add more features to in the next sprint).
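For what it’s worth, the stash-then-fix-the-Last-Straw loop described above can be sketched as a self-contained git demo (file names and commit messages are invented for the sketch; it runs in a throwaway temp repo):

```shell
# Throwaway repo simulating the workflow; requires git.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

# Baseline: a leaf module, its caller, and some other file.
printf 'leaf v1\n' > leaf.txt
printf 'caller v1\n' > caller.txt
printf 'extra v1\n' > extra.txt
git add . && git commit -qm "baseline"

# You are deep in a sprawling top-down refactor touching many files...
printf 'caller rewritten\n' > caller.txt
printf 'extra rewritten\n' > extra.txt

# ...when you hit the Last Straw. Park everything you have written so far:
git stash -q

# Fix only the leaf-level change that actually blocked you, and commit it.
printf 'leaf v2\n' > leaf.txt
git add leaf.txt && git commit -qm "fix the Last Straw"

# Bring the parked work back, then keep only what is still needed and
# drop the majority the leaf fix made unnecessary.
git stash pop -q
git checkout -- extra.txt

git status --porcelain   # only caller.txt remains modified
```

The stash is doing the “put that code aside” step: the speculative +1012 stays recoverable while the small, committed leaf fix becomes the baseline you propagate upward from.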