I understand the sentiment. I grew up lower middle class, but with financially illiterate and neglectful parents, and dealt with a great deal of food scarcity and other things that caused me to leave home at 17. It was really difficult. The first place I managed to get was a room for $750 a month, and I took home $900. I had no car and had to take the bus everywhere. It's true - everything just piles up when you are stretched thin.

What I ended up doing was finding a cheap place to live in a crappy area with a buttload of roommates, then pushing for a promotion at my job. I got one, which gave me more financial leeway and time (a more flexible schedule) to pursue a degree at a community college, which was free because of my income. From there I went to a good state school, which was also free due to my income, did well, got a degree in CS, and was hired by a professor's startup. This whole process took something like 15 years of brutally difficult grinding.

A lot of people in my spot who have "made it" (although I still bear the scars all over the place, and I am handicapped in habitual ways, especially financially, that I may never get over without hundreds of thousands of dollars of therapy) will look down on people like this author for "not trying hard enough." I think it's bullshit. I got extraordinarily lucky and had a streak of nothing too "bad" happening (didn't get a crippling illness, car mostly stayed good, grades stayed stable, didn't get laid off), plus innate talents not everyone has. I think it's a myth a lot of people tell themselves that they "made it" because they just worked hard enough. The truth, a huge amount of the time, is that you got lucky. Hard work + luck yields opportunity, but not all opportunities pan out. My career may dead-end because of AI and I may end up in the same spot again for all I know. All I can do is keep trying.
This article assumes that concepts are somehow precise coordinates within a single language; that's not the case. At best, speakers of a language mutually approximate a relatively consistent representation, but look at a word like "yeet" or whatever: we decided on its meaning as a society while it was being developed, as it were. Furthermore, the article never rigorously defines what it means by translation.

It claims 上京 is a single basis meaning "moving to Tokyo," for example, but that isn't even an accurate translation: the individual components represent superior/greater/above and the capital, and as an idiomatic phrase it represents the concept of moving to the capital for a better life. Something like "moving on up" or the like in some vernaculars of English, and idioms translating to idioms is a form of translation. It's disingenuous to represent the first concept as a single basis but not the second. Similarly, it claims mono no aware (物の哀れ) cannot be translated, but, again, a more literal character-by-character "translation" is "the sorrow within things," and only as an idiom does it carry the full contextual understanding. It's not really a single point, even if Japanese speakers locate it rather accurately in a hypothetical embedding space. Imo, an English translation of the concept is "everything is dust in the wind": only two more individual conceptual units than the original Japanese phrase, three of them mainly just connecting words, but it's understood as a similar idiom/concept here.

Concepts are only usefully distinguished by context and use. By the author's own argumentation, nothing is translatable (or, generally, even communicable) unless it has a fixed relative configuration to all other concepts that is precisely equivalent. In practice, we handle the fuzziness as part of communication, and it's useless to try to define a concept as untranslatable unless you're also of the camp that nothing is ever communicated (in which case this response to the author's post is completely useless, as nobody could possibly understand it well enough internally for it to be useful. If you've read this far, congrats on squaring the circle somehow).
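If anyone wants to poke at the "hypothetical embedding space" bit concretely, here's a minimal sketch. The sentence-transformers library, the multilingual MiniLM model, and the candidate phrases are all just my illustrative assumptions, not anything from the article:

    # Toy illustration: where different glosses of 物の哀れ land relative to
    # each other in one (arbitrary) embedding space. Model and phrases are
    # illustrative assumptions only.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    anchor = "物の哀れ"
    candidates = [
        "the sorrow within things",        # character-by-character gloss
        "everything is dust in the wind",  # idiom-to-idiom attempt
        "a quarterly earnings report",     # unrelated control phrase
    ]

    anchor_vec = model.encode(anchor, convert_to_tensor=True)
    cand_vecs = model.encode(candidates, convert_to_tensor=True)

    # Cosine similarity of each candidate phrase to the anchor phrase.
    for phrase, score in zip(candidates, util.cos_sim(anchor_vec, cand_vecs)[0]):
        print(f"{score.item():.3f}  {phrase}")

The point isn't the exact numbers (a different model gives different ones); it's that the candidates fall into fuzzy bands of relative closeness rather than snapping to a single coordinate, which is roughly the argument above.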
This is the fictional hallucination that somehow makes it into every comment section about Mozilla. So let's go through the facts one more time:

(1) They spend more on browser development now than they ever have in their history, even after adjusting for inflation.

(2) The majority of things claimed to be "money sinks" don't actually cost that much or siphon resources away from core browser development, with some exceptions (we'll get to those).

(3) The market share losses happened from 2010-2015; the side-bets era is approximately 2020-2025. The side bets didn't retroactively cause the market share losses.

(4) The narrative that a failure to keep up or push new features drove the market share losses paints a picture that's entirely zoomed in on Mozilla and ignores Google leveraging its search and mobile monopolies to muscle its browser onto the map, which likely would have happened regardless of how good Firefox was.

(5) The narrative that the browser is broken and behind is somewhat outdated. It was true in the market-share-loss era, but then they did the dang thing: they launched a major engineering effort and fundamentally rebuilt major parts of the browser via Project Quantum, a monumental engineering transformation that delivered speed and stability, the thing everyone asked for. It's obviously not perfect, but in terms of performance and stability it's certainly good enough to be a daily driver in most cases, and not in a state of tragic disrepair.

(6) Despite it being supposedly so obvious, no one can explain what missing browser feature they could add that would restore all their market share overnight.

That said, yes, there are bad things: the dabbling in adtech is bad imo ("privacy-preserving ads" seems to be a category error), the dabbling with AI doesn't seem to have an obvious point in its current iteration, Pocket was understandable as a revenue grower but seems to have been a wash that annoyed users and they didn't bother to maintain it, the Mozilla nonprofit's broader advocacy for privacy seems to be confusing some people, and Firefox OS genuinely did seem to cost engineering resources at a time when they were losing market share. Then again, I would love it if there were a 10-year-old Firefox OS project right now, given Google's push for developer certification.

So, yes, there's stuff I don't love. I don't feel like this iteration of Mozilla has the innovative spirit of, say, Opera back in its heyday, and it's not as polished as Chrome. But the comment-section rhetoric has spilled over into fever-dream territory, is not even pretending to map onto any coherent historical timeline, factual record, or story of cause and effect, and is often contradictory in its demands.
> The market share losses happened from 2010-2015

According to this site,[0] Firefox was at 16% of the desktop market share in January 2016 and is at 3.8% now. Its peak was at 31% in 2009, according to the same site. If we include mobile, it was at 8.9% in 2016 and is at 2.2% now. So they have been continually losing market share.

> the side bets era is approximately 2020-2025. The side bets didn't retroactively cause the market share losses.

I see your point, but Mozilla bought Pocket in 2017.[2] In any case, you are right that they would have lost basically the same market share even if they had never invested in any side projects, and that they lost market share to Chrome even before (to my knowledge) investing in random side projects, mainly due to Chrome's superior performance and interface at the time. I think what people are upset about is how they add random features to their app in a way almost no other open source project does. I don't expect FFmpeg or Blender to load an ad for their new cloud service of the month or the latest context-menu clutter. People want them to behave more like a normal open source project rather than like a company.

> Despite it being supposedly so obvious, no one can explain what missing browser feature they can add that will restore all their market share overnight

I personally don't see their market share ever returning unless some new regulation changes something with Chrome development. I want them to focus only on browser features not because it will increase market share but because that is what I expect as a user from open source projects. I think Ladybird (and I assume Servo as well) will fill this niche much better than Firefox does, and many power users and developers will move to those browsers once either of them releases officially, even if they are slower, simply because they fit the mold of what people expect from projects now: a GitHub repo that is easy to contribute to, GitHub issues, no telemetry or ads on startup that you need to disable on first install, etc. Personally, I plan to move to Ladybird for everyday sites like HN, Google, GitHub, etc. as long as they mostly work, and then I will use Chrome (or Firefox) for bank sites and broken or slow pages. Basically, I don't want to have an antagonistic relationship with the functionality of the software I am using. That, to me, is a prerequisite to caring about performance; otherwise why not just use Chromium or WebKit?

> often contradictory in its declaration of demands

Unless you mean within the same comment, people have different opinions. Many people are upset at Mozilla, but not all for the same exact reason. Or maybe they don't have the facts or timeline fully correct, but their main criticism can still be accurate.

Lastly, I don't see "adding AI" to the browser as inherently bad. When I saw it in the update that added it, I just thought, "Oh, another Mozilla feature to ignore." Same as when they added translations. They keep trying to get me to use their translation, but it just isn't as good as Google Translate, so I never use it. It's just another annoying pop-up that appears from time to time. I also don't have an issue with proprietary software, or even software that tracks you, as long as it has some value-add I can't get anywhere else. But when something is open source, I expect it to be a certain way and will try to use it instead of the proprietary option where possible.
[0] https://gs.statcounter.com/browser-market-share#monthly-200901-202510

[1] https://gs.statcounter.com/browser-market-share/desktop/worldwide#monthly-200901-202510

[2] https://blog.mozilla.org/en/mozilla/news/mozilla-acquires-pocket/
I get the skepticism about the dramedy of burning future AGI in effigy. But given that humans are a dramedy themselves, I don’t judge odd or hyperbolic behaviors too harshly from a distance. It’s too easy to dismiss others’ idiosyncrasies and miss the signal. And the story involves a successful and capable person communicating poetically about an area in which they have a track record that the author of this article, and probably most of us, can’t compete with.

I am struck by any technical person who still thinks AGI is any kind of barrier, and I wonder what they expect the business plan of a leader pushing AI power forward significantly, against a global list of competitors, is supposed to look like. AGI is talked about like a bright line, but it’s more a line of significance to us than any kind of technical barrier.

This isn’t writing. Although that changed everything. This isn’t the printing press. Although that changed everything. This isn’t the telegraph. Although that changed everything. This isn’t the phonograph, radio communication, the Internet, the web, or mobile phones. Although those changed everything. This is intelligence: the meta-technology of all technology. And intelligence is the part of the value chain that we currently earn a living at.

The artificial kind is moving forward very fast, despite every delay seeming to impress people. “We haven’t achieved X yet” isn’t an argument at any time, but certainly not in the context of today’s accelerated progress. It is moving forward faster than any single human, growing up from birth, ever has or ever will, if it helps to think of it that way. Nor is “they haven’t replaced us yet” an argument. We were always going to be replaced. We didn’t repeal the laws of competition and adaptation “this time”. Our species was never going to maintain supremacy after we unleashed technology’s ability to accumulate capabilities faster than we or any biological machine could ever hope to evolve. It isn’t even a race, is it? How fast is the human biological intelligence enhancements department going? Or the human intelligence breeding club? Not very fast, I think.

Even five years ago, circa 2020, very few AI die-hards imagined we would be anywhere near this close to AGI today, in 2025. Once we have AGI, in a few years, we will pass it. Or, more accurately, it will pass us. Don’t spend much time imagining a stable world of parity, other than as a historically nice trope for some fun science fiction where our continued supremacy made for a good story. That’s not what compounding progress looks like. Chaotically compounding progress has been the story of life, and then of tech. It isn’t going to suddenly stop for us. What an odd thought.
> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

Me too. But I worry this “want” may not be realistic or scalable.

Yesterday, I was trying to get some Bluetooth/BLE working on a Raspberry Pi CM4. I had dabbled with this 9 months ago, and things were progressing just fine back then. Suddenly, with a new Trixie build and who knows what else having changed, I just could not get my little client to open the HCI socket. In about 10 minutes of prompt dueling between GPT and Claude, I was able to learn all about rfkill and get to the bottom of things. I’ve worked with Linux for 20+ years and had somehow missed learning about rfkill in the mix. I was happy and saddened.

I would not have known where else to turn. SO doesn’t get anywhere near the traffic it used to, and is so bifurcated and policed that I don’t even try anymore. I never know whether to look for a mailing list, a forum, a Discord, a channel; the newsgroups have all long since died away. There is no solidly written chapter in a canonically accepted manual, written by tech writers, on all things Bluetooth for the Linux kernel packaged with Raspbian. And to pile on, my attention span, driven by a constant diet of engagement, makes it harder to have the patience. It’s as if we’ve made technology so complex that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.
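In case it saves someone else the detour: rfkill state is exposed under /sys/class/rfkill, so you can see (and clear) a soft block without any extra tooling. Here’s a minimal sketch of that kind of check, assuming the standard sysfs rfkill layout (and root for the write); the rfkill CLI (“rfkill list”, “rfkill unblock bluetooth”) is the one-liner version of the same thing:

    # List rfkill entries and clear a soft block on the Bluetooth radio so the
    # HCI socket can come up. Assumes the standard /sys/class/rfkill layout;
    # writing "soft" needs root. A hard block (hardware switch) cannot be
    # cleared from software.
    from pathlib import Path

    RFKILL = Path("/sys/class/rfkill")

    for dev in sorted(RFKILL.glob("rfkill*")):
        name = (dev / "name").read_text().strip()
        rtype = (dev / "type").read_text().strip()   # e.g. "bluetooth", "wlan"
        soft = (dev / "soft").read_text().strip()    # "1" = soft blocked
        hard = (dev / "hard").read_text().strip()    # "1" = hard blocked
        print(f"{dev.name}: {name} ({rtype}) soft={soft} hard={hard}")

        if rtype == "bluetooth" and soft == "1":
            (dev / "soft").write_text("0")           # clear the soft block
            print(f"  cleared soft block on {name}")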
Assuming you want to define the goal, "AGI", as something functionally equivalent to part (or all) of the human brain, there are two broad approaches to implementing it:

1) Try to build a neuron-level brain simulator. This is a far distant possibility, not because of compute, but because we don't have a clear enough idea of how the brain is wired, how neurons work, and what level of fidelity is needed to capture the aspects of neuron dynamics that are functionally relevant rather than just part of a wetware realization.

2) Analyze what the brain is doing, to the extent possible given our current incomplete knowledge, and/or reduce the definition of "AGI" to a functional level, then design a functional architecture/implementation, rather than a neuron-level one, to implement it.

The compute demands of these two approaches are massively different. It's like the difference between an electronic circuit simulator that works at the gate level and one that works at the functional level. For the time being we have no choice other than to follow the functional approach, since we just don't know enough to build an accurate brain simulator even if that were for some reason seen as the preferred approach.

The power-efficiency gap between a brain and a gigawatt systolic array is certainly dramatic, and it would be great for the planet to close it, but it seems we first need to build a working "AGI" or artificial brain (however you want to define the goal) before we optimize it. Research and iteration require a flexible platform like GPUs. Maybe when we figure it out we can use more of a dataflow, brain-like approach to reduce power usage.

OTOH, look at the difference between a single-user MoE LLM and one running in a datacenter simultaneously processing multiple inputs. In the single-user case we conceptualize the MoE as saving FLOPs/power by only having a few "experts" active per token, but in the multi-user case all experts are active all the time, handling tokens from different users. The potential of a dataflow approach to save power may be similar: with all parts of the model active at the same time under a datacenter load, a custom hardware realization may not be needed or relevant for power efficiency.
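To make that last point concrete, here's a toy sketch of top-k expert routing. The expert count, the top-k value, and the random "router" are made-up stand-ins, not numbers from any real model: with a single user's token only a couple of experts fire, but across a datacenter-sized batch essentially every expert is busy.

    # Toy MoE routing: count how many distinct experts are active for a single
    # token vs. a large multi-user batch. Sizes and the random router are
    # illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    NUM_EXPERTS = 64   # experts in one MoE layer (made up)
    TOP_K = 2          # experts activated per token (made up)

    def active_experts(num_tokens: int) -> int:
        """Route each token to its top-k experts and count distinct experts used."""
        router_logits = rng.normal(size=(num_tokens, NUM_EXPERTS))
        topk = np.argsort(router_logits, axis=1)[:, -TOP_K:]  # top-k per token
        return len(np.unique(topk))

    for tokens in (1, 8, 256, 4096):
        print(f"{tokens:5d} tokens in flight -> {active_experts(tokens):2d}/{NUM_EXPERTS} experts active")

A real learned router is less uniform than this, but load-balancing objectives push in the same direction: at datacenter batch sizes every expert stays hot, so per-token sparsity stops being a hardware-level power win.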
> On HN, we can do better! IMO the move is drop the politics, and discuss things on their technical merits.

Zero obligation to satisfy the HN audience; it's a tiny proportion of the populace. But for giggles...

Technical merits: there are none. Look at Karpathy's GPT on GitHub. Just some boring old statistics. These technologies are built on top of mathematical principles in textbooks printed 70-80 years ago. The sharding and distribution of work across numerous machines is also a well-trodden technical field. There is no net new discovery here.

This is 100% a political ploy on the part of tech CEOs who take advantage of the innumerate, non-technical political class that holds power. That class is bought into the idea that massive leverage over resource markets is a win for them, and they won't be alive to pay the price of the environmental destruction. It's not "energy and water" concerns, it's survival-of-the-species concerns, obfuscated by socio-political obligations to keep calm, carry on, and debate endlessly, as vain circumlocution is the hallmark of elders whose education was modeled on people being VHS cassettes of spoken tradition and of industrial and political roles.

IMO there is little technical merit to most software. Maps, communication. That's all that's really needed. ZIRP-era insanity juiced the field and created a bunch of self-aggrandizing coder bros whose technical achievement is copy-pasting old ideas into new syntax and semantics, to obfuscate their origins, get funded, sell books, and book speaking engagements. There is no removing any of this from politics, as political machinations gave rise to the dumbest era of human engineering effort ever.

The only AI that has merit is robotics: taking over the manual labor of people who are otherwise exploited by bougie first-worlders in their office jobs. People who have, again with the help of politicians, externalized their biology's real needs onto the bodies of poorer illiterates they don't have to see, as the first world successfully subjugated them and moved operations out of our own backyard.

Source: I was in the room 30 years ago, providing feedback to leadership on how to wind down local manufacturing and move it all over to China. Powerful political forces did not like the idea of Americans having the skills and knowledge to build computers. It ran afoul of their goals to subjugate and manipulate through financial engineering. Americans have been intentionally screwed out of learning the hands-on skills that would have given them political leverage over the status quo. There is no removing politics from this. The situation we are in now was 100% crafted by politics.
Because the stated goal of generative AI is not to make an individual more efficient; it's to replace that individual altogether and completely eliminate the bottom rungs of the professional career ladder.

Historically, software that made humans more efficient resulted in empowerment for the individual and also created a need for new skilled roles. Efficiency gains were reinvested into the labor market. More people could enter higher-paying work. With generative AI, if these companies achieve their stated goals, what happens to the wealth generated by the efficiency? If we automate agriculture and manufacturing, the gain is distributed as post-scarcity wealth to everyone. If we automate the last few remaining white-collar jobs that pay a living wage, the gain is captured entirely by the capital owners and investors via the elimination of payroll, while society loses one of its last high-paying ladders for upward mobility. Nobody lost their career because we built a faster operating system or a better compiler. With generative AI's stated goals, any efficiency gains go exclusively to those at the very top, while everyone else gets screwed.

Now, I'll concede that's not the AI companies' fault. I'm not saying we should magically stop developing this technology, but we absolutely need our governments to start thinking about the ramifications it can have and start seriously considering things like UBI, to be prepared for when the bottom falls out of the labor market.