Gotta say we admire both teams, and have a lot of respect for anyone trying to make progress in this space.

As far as differences: both are an additional service you need to bolt on in addition to signing up for Stripe. We're aiming to consolidate onboarding into a single provider that handles both. There's a lot of work still to do on our side, but that's where we want to end up: you get your dream devex without needing to sign up for two products.

Both are essentially billing-only services where you bring your API key. We have a billing engine that we built from scratch, and we're actually processing the payments, currently using Stripe Connect under the hood.

Lago seems to still require you to deal with webhooks - if not theirs, then Stripe's - and is focused on "billing as a write operation" (their first-class concern is producing a correct, well-formed charge or invoice object). We want to solve both the "read" side (what features can my customer access? what balance does their usage meter have?) and the "write" side - the more conventional billing operations like charges, prorations, converting free trials to paid, etc.

Autumn is tackling a similar problem, but they currently still require you to use Stripe Billing + your API key. So you'll be paying for Stripe Billing + Autumn (unless you self-host). Over time, as we get deeper into the money movement side of things, our paths will diverge, as more of our devex will include smoother ways to handle funds flows, tax compliance, etc.

And compared to both - at least from what I can tell from the outside - we're putting a relatively larger share of our brain cycles toward making our SDK and docs deeply intuitive for coding agents. We want to design our default integration path around the assumption that a coding agent will be doing most of the actual work. As a result we have features like an MCP-first integration path that makes it easy for your coding agent to ask our docs the pointed questions that come up as it integrates Flowglad, and a dynamically generated integration guide md file that takes your codebase context into account. A lot of that is the result of our own trial and error trying to integrate payments with coding agents, and we're going to be investing a lot more time and care into that experience going forward.
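To make the read/write split concrete, here's a purely illustrative sketch in Python - every function, field, and plan name below is invented for this example and is not any vendor's actual API:

```python
# Hypothetical sketch of "read" vs. "write" billing operations.
# All names here are made up for illustration, not a real billing API.
from dataclasses import dataclass, field

# Illustrative plan limits: feature name -> allowed units per period.
PLAN_LIMITS = {"free": {"api_calls": 1000}, "pro": {"api_calls": 100_000}}

@dataclass
class Customer:
    plan: str
    meters: dict = field(default_factory=dict)  # e.g. {"api_calls": 850}

# --- "Read" side: what can this customer do right now? ---
def can_use(customer: Customer, feature: str) -> bool:
    limit = PLAN_LIMITS[customer.plan].get(feature, 0)
    return customer.meters.get(feature, 0) < limit

def remaining_balance(customer: Customer, feature: str) -> int:
    limit = PLAN_LIMITS[customer.plan].get(feature, 0)
    return limit - customer.meters.get(feature, 0)

# --- "Write" side: record usage and produce billing artifacts ---
def record_usage(customer: Customer, feature: str, amount: int) -> None:
    customer.meters[feature] = customer.meters.get(feature, 0) + amount

def upgrade(customer: Customer, new_plan: str) -> dict:
    # A real system would compute prorations here; this emits a stub line item.
    old, customer.plan = customer.plan, new_plan
    return {"type": "plan_change", "from": old, "to": new_plan}

c = Customer(plan="free", meters={"api_calls": 850})
print(can_use(c, "api_calls"), remaining_balance(c, "api_calls"))  # True 150
```

A "billing as write" product only covers the last two functions; the point above is that apps also need the first two answered at request time, without hand-rolling webhook plumbing.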
> don't ISPs detect these and ban

No. An ISP desperately trying to grow market share at all costs and lock customers into year-long contracts will not intentionally ban users. I'm not even sure where this misconception comes from; it's not like ISPs led a massive PR campaign warning people of the dangers of running a server.

The only way you will get banned is if you cause disproportionate strain on their network, which means you'd need to exceed the usage of the typical gamer (regularly downloading games worth hundreds of gigs), streamer (streaming 4K video for hours at a time), cloud backup customer (uploading gigabytes regularly), Windows user (in its default configuration Windows can use P2P to share updates), torrenter (sustained full-duplex bandwidth usage), or unlucky idiot with a compromised device spewing DoS traffic at line rate.

Saturate the pipe consistently for several days by hosting video? Sure, you could get a warning and eventually be disconnected - assuming they don't already have traffic shaping in place to silently throttle you to an acceptable level and leave it to you to move your homebrew YouTube clone elsewhere when you realize it's too slow. Hosting a website with a few Mbps of traffic and the occasional spike? That's a rounding error compared to your normal legitimate usage, so it's totally fine.

The reason most consumer ISPs have a clause against running servers (without even defining what counts as a server) is to preempt a would-be business building a data center out of a collection of consumer connections and then complaining or demanding compensation when it goes down or gets cut off. Nobody cares about a technical user playing around and hosting a blog at home.
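The "rounding error" claim is easy to sanity-check with back-of-envelope arithmetic. The traffic figures below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: monthly transfer of a small self-hosted blog
# vs. ordinary consumer usage. All inputs are illustrative assumptions.

# A fairly popular personal blog: ~1 MB average page weight, 1000 visits/day.
page_size_mb = 1.0
visitors_per_day = 1000
blog_gb_per_month = page_size_mb * visitors_per_day * 30 / 1000

# 4K streaming at a typical ~25 Mbps bitrate, 2 hours per day.
stream_mbps = 25
hours_per_day = 2
streaming_gb_per_month = stream_mbps * hours_per_day * 3600 / 8 / 1000 * 30

# One modern AAA game download.
game_download_gb = 150

print(blog_gb_per_month, streaming_gb_per_month, game_download_gb)
# The blog moves ~30 GB/month; one household streaming habit moves ~675 GB.
```

Under these assumptions the blog is a small fraction of what a single streaming habit consumes, which is the sense in which self-hosting disappears into normal household traffic.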
Inflatable Space Stations (worksinprogress.co)
I started taking prescription Zepbound (tirzepatide) right when it was approved, stayed on it for about 6 months, and lost 30 pounds; later I switched to a low dose of much cheaper grey-market semaglutide for maintenance.

The anti-drinking side effect was unexpected and somewhat shocking to experience. I had been drinking heavily in the evenings, to varying degrees, for almost a decade, and then pretty much stopped overnight once I hit the 5mg dose of Zepbound in the second month. After ending the Zepbound I had a few months where I wasn't taking anything before resuming the maintenance semaglutide, and although food cravings slowly started returning, I still had/have zero interest in drinking whatsoever, except in a social setting where I may have 1-2 drinks (but I usually avoid it altogether without any conscious effort).

There is definitely massive variance in the individual psychology/biology that leads to habitual alcohol overuse, so I'm sure others might not have the same experience. But for me, I'm pretty confident that breaking the deeply ingrained habit of starting the first of 6-10 drinks at 6-7pm every day was what did it. That was pretty much impossible for me to even envision back when drinking was such a normal part of my day-to-day coping strategy for stress/depression/etc. Although I always knew my drinking was excessive and terrible for my health, past my early 20s I was super high functioning and it wasn't interfering with my job or life (other than holding me back and probably slowly killing me), so being an "alcoholic" was never part of my identity (rightly or wrongly). Ironically, I think that made it easier to just take the win and move on with my life without nagging self-doubt or fixation on whether my "addiction is cured".

But it's been about 2 years now and I hardly ever think about alcohol, even when super stressed, so something, somewhere in my brain changed thanks to tirzepatide. Whatever the mechanism, I'm grateful for that happy accident of a side effect!
> Since 2018, at least two dozen people in the United States have been arrested and accused of abducting or abusing victims they met on Roblox, according to a 2024 investigation by Bloomberg.

So about three per year, out of 112 million users? That's a far better track record than the Boy Scouts of America or the Roman Catholic Church.

Roblox has a strange demographic problem. Their average user age is around 14. They keep trying to push that up, at least to high school age, where there's more spending power. Or so said one of their annual reports. But they just can't retain the early teens into the high school years.

This is the same problem as Chat Control. If you let people talk, sometimes they're going to talk about things they're Not Supposed To Talk About. The amount of censorship needed to prevent this goes far beyond anything Orwell ever dreamed of. Roblox claims a goal of cutting off wrongspeak within 100ms. They're trying pretty hard. That's a concern: an AI listening to everything you say and evaluating it for political correctness.

Kids have been able to access Pornhub, etc. for more than a decade, and not much seems to have happened. Teen sex is down, not up. The graphics in Roblox are so bad that sex there is silly, not obscene, anyway. This belongs to a long series of non-problems, along with the Hays Code, the 1950s Congressional hearings on comic books, the Meese Report, and such. Amusingly, we aren't hearing much from the religious right any more; they aligned with MAGA, and now they're stuck defending Trump's sex life.

If anything, the Roblox problem is a subset of the too-much-screen-time problem.
I just finished my Flux 2 testing (focusing on the Pro variant here: https://replicate.com/black-forest-labs/flux-2-pro ). Overall, it's a tough sell to use Flux 2 over Nano Banana for the same use cases, and even if Nano Banana didn't exist, it's only an iterative improvement over Flux 1.1 Pro. Some notes:

- Running my nuanced Nano Banana prompts through Flux 2, Flux 2 definitely has better prompt adherence than Flux 1.1, but in all cases the image quality was worse/more obviously AI-generated.

- The prompting guide for Flux 2 ( https://docs.bfl.ai/guides/prompting_guide_flux2 ) encourages JSON prompting by default, which is new for an image generation model whose text encoder can support it. It also encourages hex color prompting, which I've verified works.

- Prompt upsampling is an option, and one that's pushed in the documentation ( https://github.com/black-forest-labs/flux2/blob/main/docs/flux2_with_prompt_upsampling.md ). This does allow the model to deductively reason: e.g., if asked to generate an image of a Fibonacci implementation in Python, it will fail hilariously if prompt upsampling is disabled, but get somewhere if it's enabled: https://x.com/minimaxir/status/1993361220595044793

- The Flux 2 API will flag anything tangentially related to IP as sensitive, even at its lowest sensitivity level, which is a change from the Flux 1.1 API. If you enable prompt upsampling, the prompt won't get flagged, but the results are...unexpected. https://x.com/minimaxir/status/1993365968605864010

- Cost-wise and generation-speed-wise, Flux 2 Pro is on par with Nano Banana, and adding an image as an input pushes the cost of Flux 2 Pro higher than Nano Banana's. The cost discrepancy increases if you try to utilize the advertised multi-image reference feature.

- Comparing Flux 1.1 vs. Flux 2 generations does not produce objective winners, particularly around more abstract generations.
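For readers who haven't seen JSON prompting: the idea is to pass the prompt as structured fields rather than freeform prose. Here's a minimal sketch of building one; the field names (`scene`, `style`, `camera`, `color_palette`) are assumptions for illustration, not the prompting guide's exact schema:

```python
import json

# Illustrative JSON-style prompt in the spirit of structured prompting.
# Field names are assumptions for illustration, not an official schema.
prompt = {
    "scene": "a ceramic mug on a walnut desk, morning light",
    "style": "product photography, shallow depth of field",
    "camera": {"angle": "45-degree", "lens": "85mm"},
    # Hex color prompting, which the parent comment verified works:
    "color_palette": ["#2E4057", "#F4A261"],
}

# The serialized JSON string is what gets sent as the prompt text.
prompt_string = json.dumps(prompt)
print(prompt_string)
```

The appeal is that each attribute (composition, style, palette) becomes a separately editable knob instead of being tangled into one sentence.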
Image models are more fundamentally important at this stage than video models. Almost all of the control in image-to-video comes through an image, and image models still need a lot of work and innovation.

On a real physical movie set, think about all of the work that goes into setting the stage: the set dec, the makeup, the lighting, the framing, the blocking. All the work before calling "action". That's what image models do and must do in the starting frame. We can get far more influence out of manipulating images than video.

There are lots of great video models, and it's highly competitive. We still have so much need on the image side. When you do image-to-video, yes, you control evolution over time. But that direction actually has fewer degrees of freedom: you expect your actors or explosions to do certain reasonable things. The 1024x1024xRGB pixels (or higher) of the starting frame have way more degrees of freedom. Image models have more control surface area; you exercise control over more parameters. In video, staying on rails or on certain evolutionary paths is fine. Mistakes can not just be okay, they can be welcome.

It also makes sense that most of the work and iteration goes into generating images. It's a faster workflow with more immediate feedback and productivity. Video is expensive and takes much longer. Images are where the designer or director can influence more of the outcome with rapidity.

Image models still need far more stylistic control, pose control (not just ControlNets for limbs, but facial expressions, eyebrows, hair - everything), sets, props, consistent characters and locations and outfits. Text layout, fonts, kerning, logos, design elements, ...

We still don't have models that look as good as Midjourney. Midjourney is 100x more beautiful than anything else - it's like a magazine photoshoot or a dreamy Instagram feed. But it has the most lackluster and awful control of any model. It's a 2021-era model with 2030-level aesthetics. You can't place anything where you want it, you can't reuse elements, you can't have consistent sets... but it looks amazing. Flux looks like plastic, Imagen looks cartoony, and OpenAI GPT Image looks sepia and stuck in the '90s.

These models need to compete on aesthetics, control, and reproducibility. That's a lot of work. Video is a distraction from it.
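The degrees-of-freedom point is easy to quantify: a single RGB frame at the stated resolution is already millions of independent values, dwarfing the handful of high-level directions you can give a video model about how a scene should evolve.

```python
# Independent values in one 1024x1024 RGB starting frame.
width, height, channels = 1024, 1024, 3
pixel_dof = width * height * channels

print(f"{pixel_dof:,} values per frame")  # 3,145,728 values per frame
```

So the image is where nearly all of the controllable state lives; the video model then mostly constrains how that state is allowed to change.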
> We also ultimately derive pretty much everything we most value in life from our interactions with other lives

This implies that almost everything you value is something transient that can, and one day will, be taken away - if not willingly, then by death. Doesn't it make more sense to have a few core values that don't depend on others, and then build relationships and all the rest upon that foundation?

To steal from Alan Watts, let's use an example. Imagine a whirlpool in a clear stream. It has great beauty and takes intricate forms as it dances and whirls. You sit beside it and enjoy watching it for hours. Now ask yourself: is it the particular group of H2O molecules making up the whirlpool that you love? If so, it will be gone in an instant, and each moment will become another in a series of great losses as the molecules are swept away by new ones. Is it the pattern the water makes that you love? No, the pattern itself changes every moment as well; the change is part of what mesmerizes you. What you love about the whirlpool is something deeper and more fundamental, something that change can't take from you. That's the thing you have to build your appreciation of life from. Other people are just the molecules and ripples.

> Some people can go build a cabin in the woods and live off the land and spend all their free time meditating and be perfectly happy.

I would argue that a man who can't stand to be alone with himself is either a bad man who is a good judge of character, or an incomplete person. I don't mean that everyone should go live alone, just that everyone should be able to. You're probably right that most people can't do it, but the majority is often wrong.
Slightly related but unpopular opinion I hold: I think software, broadly, is the highest quality it's ever been. People love to hate on specific issues - how the Windows file explorer takes 900ms to open instead of 150ms, or how an iOS 26 liquid glass animation is sometimes a bit janky - but we're complaining about minutiae instead of seeing the whole forest. I trust my phone to work so much that it is now the single, non-redundant source for the keys to my apartment, the keys to my car, and my payment method. Phones could only even hope to do all of these things as of ~4 years ago, and only as of ~this year do I feel confident enough not to carry redundancies. My phone has never breached that trust so critically that I feel I need to.

Of course, this article talks about new software projects, and I think the truth of the matter lies in this asymmetry: Android/iOS are not new. Giving an engineering team agency and a well-defined mandate that spans a long period of time often produces fantastic software. If that mandate changes frequently, or is unclear in the first place, or if there are middlemen stakeholders involved, you run the risk of things turning sideways. The failure of large software systems is rarely an engineering problem. But, of course, it sometimes is.

It took us ~30-40 years of abstraction/foundation building to get to the pretty darn good software we have today. It'll take another 30-40 years to add one or two more nines of reliability. And that's ok; I think we're trending in the right direction, and we're learning. Unless we start getting AI involved; then it might take 50-60 years :)
> "Why worry about something that isn't going to happen?"

There's a lot to break down in this article beyond this initial quotation, but I see a lot of parallels between failing software projects, this attitude, and my recent hyper-fixation (it seems to spark up again every few years): the sinking of the Titanic.

It was a combination of failures like this. Why was the captain going full speed ahead into a known ice field? Well, the boat can't sink, and there (may have been) organizational pressure to arrive in New York at a certain time (i.e., an imaginary deadline had to be met). Why weren't there enough life jackets and boats for crew and passengers? Well, the boat can't sink anyway; why worry about something that isn't going to happen? Why train the crew to properly deploy the lifeboats and follow emergency procedures? Same reason. Why didn't the SS Californian rescue the ship? Well, the third-party Titanic telegraph operators were under immense pressure to send passengers' telegrams to New York, and the chatter about the ice field got on their nerves, so they mostly ignored it (misaligned priorities). If even a little caution and forward thinking had been applied, the death toll would have been drastically lower, if not nearly nonexistent: the ship took over 2 hours to sink, which is plenty of time to evacuate a vessel of that size.

Same with software projects: they often fail over a period of years, and if you go back and look at how they went wrong, there are usually numerous points and decisions that could have reversed course. Yet often the opposite happens: management digs in even more. Project timelines are optimistic to the point of delusion and don't build failures/setbacks into schedules or roadmaps at all. I had to rescue one of these projects several years ago, and it took a toll on me that I'm pretty sure I carry to this day; I'm wildly cynical of "project management" as it relates to IT/devops.