>Ah yes, I too conflate bills written by organized lobbyists with a loosely affiliated group that says America shouldn't be run by Nazis.

Somebody doesn't understand analogies, so let me spell it out explicitly: approximately nobody is against "antifa" because they're fighting "fascists". Here's an excerpt from Wikipedia:

>Antifa activists' actions have since received support and criticism from various organizations and pundits. Some on the political left and some civil rights organizations criticize antifa's willingness to adopt violent tactics, which they describe as counterproductive and dangerous, arguing that these tactics embolden the political right and their allies.[13] Both Democratic and Republican politicians have condemned violence from antifa.[14][15][16][17] Many right-wing politicians and groups have characterized antifa as a domestic terrorist organization, or use antifa as a catch-all term,[18] which they adopt for any left-leaning or liberal protest actions.[19] According to some scholars, antifa is a legitimate response to the rise of the far right.[20][21] Scholars tend to reject an equivalence between antifa and right-wing extremism.[2][22][23] Some research suggests that most antifa action is nonviolent.[24][25][26]

Those allegations might not have merit, and it's fine to have a productive discussion about them, but it's wholly unjustified to round everyone who opposes antifa off to "they're against antifa because they're fascists, because why else would you be against a group that's anti-fascist?" Doing so is making the same mistake as with the PATRIOT Act above. It's fine to be against the PATRIOT Act, or even to support it. But it's poor reasoning to skip all of that and go straight to "you oppose the PATRIOT Act, so you must not be a patriot".
I'm interested in whether there's a well-known vulnerability researcher/exploit developer beating the drum that LLMs are overblown for this application. All I see is the opposite. A year or so ago I arrived at the conclusion that if I was going to stay in software security, I was going to have to bring myself up to speed with LLMs. At the time I thought that was a distinctive insight, but, no, if anything, I was 6-9 months behind everybody else in my field. There are a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?

From what I can see, vulnerability research combines many of the attributes that make problems especially amenable to LLM loop solutions: a huge corpus of operationalizable prior art, heavy pattern dependence, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems. Of course it works. Why would anybody think otherwise?

You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where the action is in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties attracted slop for so long before mainstream LLMs existed that they might well have been the inspiration for slop itself.

Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.
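For what it's worth, the "simple closed loop" framing above can be sketched in a few lines. This is purely illustrative: the proposer here is a random byte mutator standing in for a model, and `MAGIC`, `run_target`, etc. are made-up names, but the propose/execute/observe shape is the same one a vuln-hunting LLM loop uses.

```python
import random

MAGIC = b"FUZZ"  # hypothetical "bug" the loop is searching for

def run_target(data):
    """Toy stand-in for an instrumented target: reports how many bytes of
    the magic prefix matched. A real harness would report coverage/crashes."""
    score = 0
    for got, want in zip(data, MAGIC):
        if got != want:
            break
        score += 1
    return score

def propose(seed):
    """Dumb stimulus generator; in an LLM loop, the model proposes these."""
    out = bytearray(seed)
    if random.random() < 0.5 and len(out) < 8:
        out.append(random.randrange(256))   # grow the input
    else:
        out[random.randrange(len(out))] = random.randrange(256)  # flip a byte
    return bytes(out)

def closed_loop(budget=200_000):
    """Propose -> execute -> observe -> keep whatever makes forward progress."""
    best, best_score = b"\x00", run_target(b"\x00")
    for _ in range(budget):
        cand = propose(best)
        score = run_target(cand)
        if score > best_score:
            best, best_score = cand, score
        if best_score == len(MAGIC):
            return best  # "crash" reproduced
    return None
```

Even with a completely dumb proposer, the feedback signal turns the problem into a tractable search; swapping in a model that has seen a huge corpus of prior art just makes each proposal smarter.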
When electronic calculators were first introduced, there was a widespread belief that accounting as a career was finished. Instead, the opposite happened: accounting as a profession grew, becoming far more analytical and strategic than it had been previously.

You are correct that these models primarily address problems that have already been solved. However, that has always been the case for the majority of technical challenges. Before LLMs, we would often spend days searching Stack Overflow to find and adapt the right solution.

Another way to look at this is through the lens of problem decomposition. If a complex problem is a collection of sub-problems, receiving immediate solutions for those components accelerates the path to the final result. For example, I was recently struggling with a UI feature where I wanted cards to follow a fan-like arc. I couldn't quite get the implementation right until I gave it to Gemini. It didn't solve the entire problem for me, but it suggested an approach involving polar coordinates and sine/cosine values. I was able to take that foundational logic and turn it into the feature I wanted. Was it a 100x productivity gain? No. But it was easily a 2x gain, because it replaced hours of searching and waiting for a mental breakthrough with immediate direction.

There was also a relevant thread on Hacker News recently regarding "vibe coding": https://news.ycombinator.com/item?id=45205232 The developer created a unique game using scroll behavior as the primary input. While the technical aspects of scroll events are certainly "solved" problems, the creative application was novel.
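The polar-coordinate idea for the fan-arc maps out simply: pick an angle for each card, then convert to x/y offsets with sin/cos, and reuse the angle as the card's tilt. A minimal sketch (function name and parameters are my own illustration, not the actual Gemini output):

```python
import math

def fan_layout(n_cards, radius=600.0, spread_deg=30.0):
    """Position n cards along a fan-like arc.

    Each card sits on a circle of the given radius, spread symmetrically
    around the vertical axis. Returns a list of (x, y, rotation_deg)
    tuples, with (0, 0) at the circle's centre and negative y pointing up.
    """
    if n_cards == 1:
        return [(0.0, -radius, 0.0)]
    positions = []
    for i in range(n_cards):
        # Sweep the angle from -spread/2 to +spread/2; 0 deg is straight up.
        t = i / (n_cards - 1)
        angle_deg = -spread_deg / 2 + t * spread_deg
        a = math.radians(angle_deg)
        x = radius * math.sin(a)   # horizontal offset along the arc
        y = -radius * math.cos(a)  # vertical offset (cards at the top)
        positions.append((x, y, angle_deg))  # tilt each card to match the arc
    return positions
```

With a large radius and small spread the cards bow gently like a held hand of cards; in a UI you would translate each card by (x, y) relative to a pivot below the hand and rotate it by `rotation_deg`.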
By now I've reached a point where I don't believe big tech companies will do anything to improve outcomes for users if it hurts their bottom line, and I'm sure the opposite is true: they will do anything to improve their bottom line even if it hurts the user. So it's fair to say this relationship can't work in the long term.

I'm not really on the platforms mentioned, except for YouTube, and while it's considered the lesser offender here, I still can't avoid seeing how bad it has gotten. I remember 2007-2012, when the platform was mostly for entertainment: silly cat videos, pranks, a low-budget documentary here and there. 2012-2015 felt like the period where YouTube became a platform for more useful things, people showing how they fix cars, professors uploading their recorded classes, history channels, but on the sidelines people were starting to make money doing weird things, like unboxing stuff on camera, drop-testing phones, etc. If you had been told in the early 2000s that people would get extremely rich by unpackaging products on camera, you would have called it insane; no one would have considered wasting their free time watching things like that. It might have been harder to convince older folks to engage, but the younger generation was malleable and easy to hook, and slowly it became normal.

From 2015 to the present, it has become completely normal to make users watch ads disguised as content: people testing/showcasing/unboxing products, or even political propaganda presented as discussion in the form of a podcast. It's obvious that the quality of what is offered on YouTube has gotten worse, but they can counter it with autoplay, infinite scroll, and a landing page filled with eye-grabbing content. The only way to watch things on YouTube and not be affected by this nonsense is to use a different client (FreeTube, Jaybird, NewPipe; there are plenty more).
You can define how your homepage will look: whether you want to see Shorts or not, an infinite feed, suggestions, etc.
> Any self respecting engineer should recognize that these tools and models only serve to lower the value of your labor.

Depends on what the aim of your labor is. Is it typing on a keyboard, memorizing (or looking up) whether that function was verb_noun() or noun_verb(), etc.? Then, yeah, these tools will lower your value. If your aim is to get things done and generate value, then no, I don't think these tools will lower your value.

This isn't all that different from CNC machining. A CNC machinist can generate a whole lot more value than someone manually jogging X/Y/Z axes on an old manual mill. If you absolutely love spinning handwheels, then it sucks to be you. CNC definitely didn't lower the value of my brother's labor -- there's no way he'd be able to manually machine enough of his product ( https://www.trtvault.com/ ) to support himself and his family.

> Using these things will fry your brain's ability to think through hard solutions.

CNC hasn't made machinists forget basic principles, like when to use conventional vs climb milling, speeds and feeds, or whatever. Same thing with AI. Same thing with induction cooktops. Same thing with any tool. Lazy, incompetent people will do lazy, incompetent things with whatever they are given. Yes, an idiot with a power tool is dangerous, as that tool magnifies and accelerates the messes they were already destined to make. But that doesn't make power tools intrinsically bad.

> Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?

We are already dependent on electricity. If the power goes out, we work around it as best we can. If you can't run your power tool, but you absolutely need to make progress on whatever you're working on, then you pick up a hand tool. If you're using AI and it stops working for whatever reason, you simply continue without it.

I really dislike this anti-AI rhetoric.
Not because I want to advocate for AI, but because it distracts from the real issue: if your work is crap, that's on you. Blaming a category of tool as inherently bad (with guaranteed bad results) suggests that there are tools that are inherently good (with guaranteed good results). No. That's absolutely incorrect. It is people who fall on the spectrum of mediocrity-to-greatness, and the tools merely help or hinder them. If someone uses AI and generates a bunch of slop, the focus should be on that person's ineptitude and/or poor judgement. We'd all be a lot better off if we held each other to higher standards, rather than complaining about tools as a way to signal superiority.
> It's far from perfect, but using a simple application with no built-in ads, AI, bloat, crap, etc is wonderful.

I think there are three main reasons it's not perfect yet:

1. Building a decentralised open standard (Matrix) at the same time as a flagship implementation (Element) is playing on hard mode: everything has to be specified under an open governance process ( https://spec.matrix.org/proposals ) so that the broader ecosystem can benefit from it. While in the early years we could move fast and JFDI, the ecosystem grew much faster than we anticipated and very enthusiastically demanded a better spec process. And while Matrix is built extensibly, with protocol agility to let you experiment at basically every level of the stack (e.g. right now we're changing the format of user IDs in MSC4243, and the shape of room DAGs in MSC4242), in practice changes take at least ~10x longer to land than in a typical proprietary/centralised product. On the plus side, hopefully the end result is more durable than some proprietary thing, but it's certainly a fun challenge.

2. As Matrix project lead, I took the "Element" use case pretty much for granted from 2019-2022: it felt like Matrix had critical mass and usage was exploding; COVID was highlighting the need for secure comms; it almost felt like we'd done most of the hard bits and finishing the app was a given. As a result, I started looking at the N-year horizon instead - spending Element's time working on P2P Matrix ( arewep2pyet.com ) as a long-term solution to Matrix's metadata footprint and to futureproof Matrix against Chat Control style dystopias... or projects like Third Room ( https://thirdroom.io ) to try to ensure that spatial collaboration apps didn't get centralised and vendor-locked to Meta, or bluesky on Matrix ( https://matrix.org/blog/2020/12/18/introducing-cerulean/ , before Jay & Paul got the gig and did atproto).
I maintain that if things had continued on the 2019-2022 trajectory, we would have been able to ship a polished Element and do the various "scifi" long-term projects too. But in practice that didn't happen, and I kinda wish we'd spent the time polishing the core Element use case instead. Still, better late than never: in 2023 we did the necessary handbrake turn, focusing exclusively on the core Element apps (Element X, Web, Call) and Element Server Suite as an excellent helm-based distro. Hopefully the results speak for themselves now (although Element Web is still being upgraded to use the same engine as Element X).

3. Finally, the thing which went wrong in 2022/2023 was not just the impact of the end of ZIRP, but the horrible realisation that the more successful Matrix got, the more incentive there would be for 3rd parties to commercialise the Apache-licensed code that Element had built (e.g. Synapse) without routing any funds to us as the upstream project. We obviously knew this would happen to some extent - we'd deliberately picked Apache to get as much uptake as possible. However, I hadn't realised that the % of projects willing to fund the upstream would shrink as the project got more successful - and the larger the available funds (e.g. governments offering million-dollar deals to deploy Matrix for healthcare, education, etc.), the more certain it was that upstream funding would go to zero. So we addressed this in 2023 by switching Element's work to AGPL, massively shrinking the company, and then doing an open-core distribution in the form of ESS Pro ( https://element.io/server-suite/pro ), which puts scalability (but not performance), HA, and enterprise features like antivirus, onboarding/offboarding, audit, border gateways etc. behind the paywall. The rule of thumb is that if a feature empowers the end-user it goes FOSS; if it empowers the enterprise over the end-user it goes Pro.
Thankfully the model seems to be working - e.g. the EC is using ESS for this deployment. There's a lot more gory detail in last year's FOSDEM main-stage talk on this: https://www.youtube.com/watch?v=lkCKhP1jxdk

Either way, the good news is that we think we've figured out how to make this work, things are going cautiously well, and these days all of Element is laser-focused on making the Element apps & servers as good as we possibly can - while also continuing to improve Matrix, both because we believe the world needs Matrix more than ever, and because without Matrix, Element is just another boring silo'd chat app. The bad news is that it took us a while to figure it all out (and there are still some things to solve - e.g. abuse on the public Matrix network, finishing Hydra (see https://www.youtube.com/watch?v=-Keu8aE8t08 ), finishing the Element Web rework, and, cough, custom emoji). I'm hopeful we'll get there in the end :)
I know this isn't a popular opinion, and yeah, I will also miss it, but I've always thought the World Factbook was a strange thing for the CIA to be publishing in the first place. Not because the information is false, but because the act of choosing which facts to publish is itself an opinion. Once you accept that, you're no longer talking about neutral data; you're talking about the official position of the United States government, whether that was the intent or not. (Pro tip: I'm sure it was, esp. during the Cold War(tm).)

That creates problems, especially in diplomacy. Negotiation depends on what you don't say as much as what you do. Publicly cataloging a country's political structure, demographics, or internal conditions may feel benign, but it can complicate discussions that are already delicate, and sometimes existential. It also gives away more than anyone would like to admit: it signals what we know, what we think we know, and what we're willing to put our name behind. Even basic statistics like population or religious composition can become leverage or liabilities in the wrong context, and you can't realistically scrub or redact them every time you enter into a diplomatic negotiation.

The core issue is simple: this isn't a private research group or a tech company publishing an open dataset; it's literally the largest intelligence agency (if you exclude the NSA, I think) of the United States government publicly describing other nations. That isn't neutral. Also, once an agency like the CIA is ideologically skewed, even subconsciously, objective facts become directional: not by falsifying GDP or population, but by emphasizing governance scores, freedom indices, demographic categories, or economic structures in ways that subtly reinforce a worldview. That kind of torque is harder to detect and harder to challenge than obvious propaganda. During the Cold War, that might have made sense.
Actually, it probably makes sense all the time, but my guess is that the current administration thought (rightly or wrongly) that the editorial team was no longer objective, or decided there were better avenues to get its message out. However, the fact that archives haven't even been maintained since the Biden administration (2020) says something else, at least to me: it suggests the current administration was in agreement with the previous one, which means it might be a bipartisan view that the Factbook was either no longer needed or (really, it seems) no longer wanted, or at least no longer valued, by either administration.