One thing I wish someone would write is something like the browser's developer tools ("inspect element") for PDF — it would be great to be able to "view source" on a PDF's content streams (the BT … ET operators that enclose text, each Tj operator that sets down text in the currently chosen font, etc.), to see how every “pixel” of the PDF is being specified/generated. I know this goes against the current trend / state of the art of using vision models to basically “see” the PDF like a human and “read” the text, but it would be really nice to be able to actually understand what a PDF file contains.

There are a few tools that allow inspecting a PDF's contents ( https://news.ycombinator.com/item?id=41379101 ) but they stop at the level of the PDF's objects, so entire content streams are single objects. For example, to use one of the PDFs mentioned in this post, the file https://bfi.uchicago.edu/wp-content/uploads/2022/06/BFI_WP_2022-68-1.pdf has, corresponding to page number 6 (PDF page 8), a content stream that starts like this (some newlines added by me):

    0 g 0 G
    0 g 0 G
    BT
    /F19 10.9091 Tf 88.936 709.041 Td
    [(Subsequen)28(t)-374(to)-373(the)-373(p)-28(erio)-28(d)-373(analyzed)-373(in)-374(our)-373(study)83(,)-383(Bridge's)-373(paren)27(t)-373(compan)28(y)-373(Ne)-1(wGlob)-27(e)-374(reduced)]TJ
    -16.936 -21.922 Td
    [(the)-438(n)28(um)28(b)-28(er)-437(of)-438(priv)56(ate)-438(sc)28(ho)-28(ols)-438(op)-27(erated)-438(b)28(y)-438(Bridge)-437(from)-438(405)-437(to)-438(112,)-464(and)-437(launc)28(hed)-438(a)-437(new)-438(mo)-28(del)]TJ
    0 -21.923 Td

and it would be really cool to be able to see the above “source” and the rendered PDF side by side, hover over one to see the corresponding region of the other, etc., the way we can for an HTML page.
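Short of that tool existing, the raw instructions are at least scriptable today. Here is a minimal sketch using pikepdf (assuming it is installed, the paper above has been saved locally as BFI_WP_2022-68-1.pdf, and a recent-ish pikepdf where parse_content_stream yields (operands, operator) pairs):

    # Rough "view source" for one page's content stream, using pikepdf.
    # Assumes the file above was downloaded locally as BFI_WP_2022-68-1.pdf.
    import itertools
    import pikepdf

    with pikepdf.open("BFI_WP_2022-68-1.pdf") as pdf:
        page = pdf.pages[7]  # 0-based index: PDF page 8, i.e. page number 6 in the paper
        # parse_content_stream tokenizes the page's (possibly concatenated)
        # content streams into instructions, e.g. ([/F19, 10.9091], Tf)
        for operands, operator in itertools.islice(pikepdf.parse_content_stream(page), 20):
            print(operator, operands)

What this still doesn't give you is the hover-to-highlight mapping between an instruction and the region it paints on the rendered page, which is the part that would really need a dedicated tool.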
the tl;dr:

> The NSF’s investments have shaped some of the most transformative technologies of our time—from GPS to the internet—and supported vital research in the social and behavioral sciences that helps the nation understand itself and evaluate its progress toward its democratic ideals. So in 2024, I was honored to be appointed to the National Science Board, which is charged under 42 U.S. Code § 1863 with establishing the policies of the Foundation and providing oversight of its mission.

> But the meaning of oversight changed with the arrival of DOGE. That historical tension—between the promise of scientific freedom and the peril of political control—may now be resurfacing in troubling ways. Last month, when a National Science Board statement was released on occasion of the April 2025 resignation of Trump-appointed NSF Director Sethuraman Panchanathan, it was done so without the participation or notice of all members of the Board.

> Last week, as the Board held its 494th meeting, I listened to NSF staff say that DOGE had by fiat the authority to give thumbs up or down to grant applications which had been systematically vetted by layers of subject matter experts.

> Our closed-to-the-public deliberations were observed by Zachary Terrell from the DOGE team. Through his Zoom screen, Terrell showed more interest in his water bottle and his cuticles than in the discussion. According to Nature, Terrell, listed as a "consultant" in the NSF directory, had accessed the NSF awards system to block the dispersal of approved grants. The message I received was that the National Science Board had a role to play in name only.

I can't sum up everything that's wrong with this moment better than that. This is not some necessary pain that comes with shaking up the system. This is a hostile takeover of the federal government by embarrassingly ignorant goons who think they know everything, just because they can vibe code an almost functional app. This is what happens when you have VCs huffing their own farts in their Signal echo chamber: https://www.semafor.com/article/04/27/2025/the-group-chats-that-changed-america . Congratulations, you buffoons, you have demonstrated there are scaling laws for footguns.
Counterpoint via anecdotes… this week I am at the International Symposium for Green Chemistry. >600 chemists from all over the world. They are all psyched to advance safer and sustainable solutions to a wide variety of problems. You see all their funding sources, from the UN to EU to country to city to local, as well as private companies. You see their collaboration and enthusiasm. Of course the US comes up… but it seems that the rest of the world is just moving on without us (I am American). Our government is simply an unreliable partner. Some US PhD candidates here are looking for post-doc labs in the EU.

A speaker for Dow Chemical was talking about their Year 2050+ plan for net-zero CO2 and a circular economy. I was surprised to learn (the news was last month) that Dow cancelled their $9B net-zero ethylene processing facility in Canada because US tariffs will make it too expensive (both to build and, long term, as the source of ethylene). Imagine the jobs lost, contracts lost, US exports lost, and environmental damage.

This morning I had this conversation (before seeing OP): “If all the US university research funding disintegrates, how does that affect the primacy of US science education? How should somebody applying to college now think about this?” Perhaps focus on a teaching-focused college and then try to do the research abroad? Of course such choices are more easily available to the wealthy. Will US higher science education and industry just naturally decline?

Random: Only one talk I’ve seen so far included a GitHub repo.

Separately, I have multiple friends who lost their US lab funding and/or jobs. I also have a friend who was being poached via Dutch Visa fast-track. I think the science brain drain is real.
You're broadly right, but I would argue that the "anti-populist infrastructure" is specifically responsible for electoral fetishism.

The thing to remember is that in the 1950s and 60s the US government was basically running a censorship regime and had manufactured an anti-Communist consensus. They had to do this because democratic politics back then meant political parties actually listening to their constituents. In other words, America had populist infrastructure, which the state had to carefully commandeer to maintain the illusion of a unified society willing to fight a Cold War against a country which, at least on paper, was promising a better America than America.

This broke in the 70s, when the Vietnam War pitted young Boomers against old[0]. A lot of the civic institutions that were powering democracy in that era got torn apart along age lines and fell apart completely. Politics turned from something you made with your voice into something you purchased with your vote. This is how we got the Carter / Reagan neoliberal consensus of "free trade and open borders for me but not for thee". The state was free to dictate this new public policy to its citizens because the citizenry were too busy fighting each other to mount an effective opposition to it.

[0] Recall that "Baby Boomer" is actually two generations of people, both because the baby boom was so long and because America's access to birth control was on par with that of a third world country. There's a never-ending wellspring of parental abandonment in that generation.
> America's success as a scientific powerhouse

If you really think about it, America's "success as a scientific powerhouse" owes a significant debt to foreigners: scholars from abroad and workers on H1-B visas. It has been a well-known fact for decades that white native-born Americans are at a real disadvantage in academia, tech fields, and any sort of research positions, because for whatever reason, the foreigners coming in on H1-B can run circles around them in terms of productivity, innovation, and even sheer numbers. Perhaps this isn't the sort of "stealin' our jerbs" people first think of, rather than picking strawberries and selling oranges off the 405, but it's a real phenomenon.

This is a very prominent reason for skepticism about DEI from the Right, because the outsized influence of foreigners on academia, and by extension STEM and other sciences, has been steadily growing and growing and inflamed by DEI hiring practices in the industries. Essentially, thousands of foreign nationals have been immigrating to the USA to be educated at our best schools, and then exert two-way influence over culture and commerce. Is that a sustainable practice? Is America enough of a "melting pot" that we can withstand that sort of outsized foreign influence within our borders?

Furthermore, America was first in line to embrace Eastern-bloc Jews, from the outbreak of WWI through the end of WWII. Those German and Soviet Jews from the diaspora gained significant prestige, not merely from sympathy or pity, and they now have outsized influence in many Western industries, including Hollywood, finance, manufacturing, and psychology. These are markedly distinct career paths from the ones taken by homegrown Jews, such as the Hasidic ones and other Orthodox communities from New York.
> AI coding assistants bias towards assuming the code they're working with is correct, and that the person using them is also correct. But often neither is ideal!

That's why you should write tests before you write the code, so that you know what you expect the code under test to do. I.e., test-driven development.

> And you can absolutely have a model second-guess your own code and assumptions, but it takes a lot of persistent work because these damn things just want to be "helpful" all the time.

No. Please do not do this. These LLMs have zero understanding / reasoning about the code they are outputting. Recent example from [0]:

>> Yesterday I wanted to move 40GB of images from my QR menu site qrmenucreator . com from my VPS to R2

>> I asked gemini-2.5-pro-max to write a script to move the files

>> I even asked it to check everything was correct

>> Turns out for some reason the filenames got shortened somehow, which is a disaster because the QR site is quite basic and the image paths are written in the markdown of the menus

>> Of course the script already deleted 40GB of images from the VPS

>> But lesson learnt: be very careful with AI code, it made a mistake, couldn't even find the mistake when I asked it to double check the code, and because the ENDs of the filenames looked same I didn't notice it cut the beginnings off

>> And in this case AI can't even find its own mistakes

Just like the 2010s, when the proliferation of dynamically typed languages crept into the backend along with low-quality code, we now will have vibe-coded low-quality software causing destruction because its authors do not know what their code does and also have not bothered to test it, or even know what to test for.

[0] https://twitter.com/levelsio/status/1921974501257912563
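To make the test-first point concrete, here is a rough sketch of what it could look like for the migration in that anecdote. migrate, FakeStore and the filename are made up for illustration; the point is that the invariants (keys are never renamed, nothing is deleted before the copy is verified) get written down before any code, AI-generated or not, is trusted with 40GB of data:

    # Hypothetical test-first sketch for a VPS -> object-store migration.
    # FakeStore stands in for both sides; migrate is the function to be
    # written (or generated) only after this test is agreed on.
    class FakeStore(dict):
        pass

    def migrate(filenames, src, dst):
        for name in filenames:
            dst[name] = src[name]          # copy under the *same* key
        for name in filenames:
            if dst.get(name) == src[name]:
                del src[name]              # delete only after a verified copy

    def test_migrate_preserves_names_and_deletes_only_verified_copies():
        src = FakeStore({"menus/very-long-image-name-1234567890.jpg": b"img"})
        dst = FakeStore()
        migrate(list(src), src, dst)
        assert "menus/very-long-image-name-1234567890.jpg" in dst  # no shortened keys
        assert len(src) == 0                                       # removed only after copy

A model can still write the implementation; the test is what catches the "filenames got shortened" failure before it deletes anything.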
> What I realized is that lower costs, and therefore lower quality,

This implication is the big question mark. It's often true, but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.

I think the problem is that we focus on improving the quality of the end products directly, when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.

For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and let the language or its runtime choose an implementation? Why isn't direct relational programming more common? I'm not talking about programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, e.g. why can't I do:

    let usEmployees = from x in Employees where x.Country == "US";

    func byFemale(Query<Employees> q) => from x in q where x.Sex == "Female";

    let femaleUsEmployees = byFemale(usEmployees);

These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write fewer than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
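For what it's worth, the composition half of this is roughly expressible today with lazy iterators; a toy Python sketch (Employee, employees and the filter functions are invented names). The half that is still missing is the declarative part: stating "Set with O(1) insert" and letting the runtime pick the representation.

    # Toy approximation of composable queries using lazy generators.
    from dataclasses import dataclass
    from typing import Iterable, Iterator

    @dataclass
    class Employee:
        name: str
        country: str
        sex: str

    def us_employees(emps: Iterable[Employee]) -> Iterator[Employee]:
        return (e for e in emps if e.country == "US")

    def by_female(q: Iterable[Employee]) -> Iterator[Employee]:
        return (e for e in q if e.sex == "Female")

    employees = [Employee("A", "US", "Female"), Employee("B", "DE", "Male")]
    female_us_employees = by_female(us_employees(employees))  # still lazy, still composable
    print([e.name for e in female_us_employees])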
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.

I never said performance wasn't an important quality metric, just that it's not the only quality metric. If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.

> How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.

Like any other feature, whether or not performance is important depends on the user and context. Chrome being faster than IE8 at general browsing (rendering pages, opening tabs) was very noticeable. uv/ruff being faster than pip/poetry is important because of how the tools integrate into performance-sensitive development workflows. Does Slack taking 5-10 seconds to load on startup matter? To me, not really, because I have it come up on boot and forget about it until my next system-update-forced reboot. Do I use LibreOffice or Word and Excel, even though LibreOffice is faster? I use Word/Excel because I've run into annoying compatibility issues with LO enough times to not bother. LibreOffice could reduce their startup and file load times to 10 picoseconds and I would still use MS Office, because I just want my damn documents to keep the same formatting my colleagues using MS Office set on their Windows computers.

Now of course I would love the best of all worlds: programs that are fast and have all the functionality I want! In reality, though, companies can't afford to build every feature, performance included, and need to pick and choose what's important.
This permission has been a security issue since its introduction. Random apps have been caught iterating over users' media to extract geolocation history based on EXIF information and other such metadata (for no good reason beyond data collection for data traders), so Google did the right thing and made file access permission-first. Almost no apps need this permission, so being skeptical makes a lot of sense. File managers and other such apps are routinely permitted to use this permission, so it's not like Google is locking out utility apps or anything.

The current state of Google Play is the result of years of Google being too permissive by default and trying to patch things later while desperately trying to remain backwards compatible. Give advertisers a finger and they take the whole hand. Your average Android phone's internal storage used to be full of dotfiles, hidden directories, not-so-hidden directories, all full of identifiers and cross-identifiers to break the cross-app tracking boundary enforced by the normal API.

As far as I know, Google has made an API available for picking a directory to sync with. I'm not sure why NextCloud needs to see every file on my SD card when it can ask for folders to sync into and can use a normal file picker to upload new files without going through a file manager, but there's probably a feature hidden somewhere in their app that necessitates this permission.

The policy itself makes a lot of sense, and I'd argue it's beneficial for Google Play's user base. NextCloud's problem seems to be that Google isn't letting a human with common sense review their upload. Because of Google being Google, outcry is the only way to get attention from an actual human being when it comes to app stores (Apple has had very similar issues, though they claim their reviews are all done by humans).

EDIT: NextCloud states "SAF cannot be used, as it is for sharing/exposing our files to other apps, so the reviewer clearly misunderstood our app workflow." as a reason for not being able to use the better APIs, but I'm not sure that's true. SAF has a dedicated API for maintaining access to a folder ( https://developer.android.com/training/data-storage/shared/documents-files#perform-operations ). I think NextCloud misinterpreted Google here.
As long as programmers view a program as a mechanism that manipulates bytes in flat memory, we will be stuck in a world where this kind of topic seems like a success. In that world, an object puts some structure on top of those memory bytes, and obviously an allocator sounds like a great feature. But you'll always have those bytes in the back of your mind and will never be able to abstract things without the bytes in memory leaking through your abstractions. The author even gives an example of a pretty simple scenario in which this is painful, and that's SoA. As long as your data abstraction is fundamentally still a glorified blob of raw bytes in memory, you'll be stuck there.

Instead, data needs to be viewed more abstractly. Yes, it will eventually manifest in memory as bytes in some memory cell, but how it's laid out and moved around is not the concern of you, the programmer, as a user of data types. Looking at some object attributes foo.a or foo.b is just that: the abstract access of some data. Whether a and b are adjacent in memory should be immaterial, as should whether they are even on the same machine, or even backed by data cells in some physical memory bank. Yes, in some very specific (!) cases, optimizing for speed makes it necessary to care about locality, but for those cases the language or library needs to provide mechanisms to specify those requirements, and then it will lay things out accordingly.

But it's not helpful if we all keep writing in some kind of glorified assembly language. It's 2025, and "data type" needs to mean something more abstract than "those bytes in this order, laid out in memory like this", unless we are writing hand-optimized assembly code, which most of us never do.
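As a toy illustration of that separation (all names invented, with Python lists standing in for real storage): the call sites below read points[i].x and points[i].y, and only the container knows, and could change, whether the fields live as an array of structs or as a struct of arrays:

    # Toy sketch: attribute-style access whose memory layout is the
    # container's decision, not the call site's.
    class SoAView:
        """Proxy giving attribute access into a struct-of-arrays container."""
        __slots__ = ("_cols", "_i")
        def __init__(self, cols, i):
            self._cols, self._i = cols, i
        def __getattr__(self, name):
            return self._cols[name][self._i]

    class SoAContainer:
        """Stores each field in its own contiguous column (SoA layout)."""
        def __init__(self, **cols):
            self._cols = cols
        def __getitem__(self, i):
            return SoAView(self._cols, i)
        def __len__(self):
            return len(next(iter(self._cols.values())))

    points = SoAContainer(x=[1.0, 2.0, 3.0], y=[4.0, 5.0, 6.0])
    # This loop would look identical if `points` were a plain list of objects.
    total = sum(points[i].x + points[i].y for i in range(len(points)))
    print(total)  # 21.0

A real language or library would of course also need the escape hatch mentioned above for the cases where locality genuinely matters.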