> "I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off of it." I have to say this reads a bit hollow to me, and perhaps a little bit shallow. If the content this guy created could be scraped and usefully regurgitated by an LLM, that same hack, before LLMs, could have simply searched, found the content and still profited off of it nonetheless. And probably could have done so without much more thought than that required to use the LLM. The only real difference introduced by the LLM is that the purpose of the scraping is different than that done by a search engine. But let's get rid of the loaded term "hack" and be a little less emotional and the complaint. Really the author had published some works and presumably did so that people could consume that content: without first knowing who was going to consume it and for what purpose. It seems to me what the author is really complaining about is that the reward from the consuming party has been displaced from himself to whoever owns the LLM. The outcome of consumption and use hasn't changed... only who got credit for the original work has. Now I'm not suggesting that this is an invalid complaint, but trying to avoid saying, "I posted this for my benefit"... be that commercial (ads?) or even just for public recognition...is a bit disingenuous. If you poured you knowledge, experience, and creativity into some content for others to consume and someone else took that content as their own... just be forthright about what you really lost and don't disparage the consumer. Just because they aren't your "hacks" anymore, but that middlemen are now reaping your rewards.
Yeah, but humans still had to work to create those websites; it increased jobs, it didn't replace them (this is happening). This will devalue all labor that has anything to do with I/O on computers, if not outright replace a lot of it. Who cares if it can't write perfect code; the owners of the software companies never cared about good code, they care about making money. They make plenty of money off slop, and they'll make even more if they don't have to have humans create the slop.

The job market will get flooded with the unemployed (it already is) with fewer jobs to replace the ones that were automated, and those remaining jobs will get reduced to minimum wage whenever and wherever possible. 25% of new college grads cannot find employment. Soon young people will be so poor that they'll beg to fight in a war. Give it 5-10 years. This isn't a hard future to game out; it's not pretty if we maintain this fast track of progress in ML that minimally requires humans.

Notice how the ruling class has increased the salaries for certain types of ML engineers; they know what's at stake. These businessmen make decisions based on expected value calculated from complex models; they aren't giving billion-dollar pay packages to engineers because it's trendy. We should use our own mental models to predict where this is going, and prevent it from happening however possible.

From the essay linked below [0]:

"The word 'Luddite' continues to be applied with contempt to anyone with doubts about technology, especially the nuclear kind. Luddites today are no longer faced with human factory owners and vulnerable machines. As well-known President and unintentional Luddite D. D. Eisenhower prophesied when he left office, there is now a permanent power establishment of admirals, generals and corporate CEO's, up against whom us average poor bastards are completely outclassed, although Ike didn't put it quite that way. We are all supposed to keep tranquil and allow it to go on, even though, because of the data revolution, it becomes every day less possible to fool any of the people any of the time. If our world survives, the next great challenge to watch out for will come - you heard it here first - when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long. Meantime, as Americans, we can take comfort, however minimal and cold, from Lord Byron's mischievously improvised song, in which he, like other observers of the time, saw clear identification between the first Luddites and our own revolutionary origins. It begins:"

[0] https://archive.nytimes.com/www.nytimes.com/books/97/05/18/reviews/pynchon-luddite.html
This is, in my opinion, the greatest weakness of everything LLM-related. If I care about the application I'm writing, and I believe I should if I bother doing it at all, it seems to me that I should want to be precise and concise in describing it. In a way, the code itself serves as a verification mechanism for my thoughts and for whether I understand the domain sufficiently. English or any other natural language can of course be concise, but when being brief it leaves much to the imagination. Adding verbosity allows for greater precision, but that is exactly what formal languages are for, just as you said.

Still, I think it's worth contemplating whether modern programming languages/environments have been insufficient in other ways: whether they are too verbose at times, whether IDEs should be databases first and language parsers second, whether we could add recommendations using far simpler but stricter patterns, given a strongly typed language. My current gripes are auto imports STILL not working properly in the most popular IDEs, or an IDE not finding a referenced entity in a file that isn't currently open... LLMs sometimes help with that, but they are extremely slow compared to local cache resolution.

Long term, I think more value lies in directly improving the above, but we shall see. AI will stay around too, of course, but how much relevance it'll have in 10 years' time is anybody's guess. I think it'll become a commodity, the bubble will burst, and after a while we'll only use it when sensible. At least until the next generation of AI architectures arrives.
So many people are responding to you with snarky comments or questioning your programming ability. It makes me sad. You shared a personal take (in response to TFA, which was also a personal take). There is so much hostility and pessimism directed at engineers who simply say that AI makes them more productive and allows them to accomplish their goals faster.

To the skeptics: by all means, don't use AI if you don't want to; it's your choice, your career, your life. But I am not sure that hitching your identity to hating AI is altogether a good idea. It will make you increasingly bitter as these tools improve further and our industry and the wider world slowly shift to incorporate them.

Frankly, I consider the mourning of The Craft of Software to be just a little myopic. If there are things to worry about with AI, they are bigger things, like widespread shifts in the labor force and economic disruption 10 or 20 years from now, or even the consequences of the current investment bubble popping. And there are bigger potential gains in view as well. I want AI to help us advance the frontiers of science, get us to cures for more diseases, and ameliorate human suffering. If a particular way of working in a particular late-20th and early-21st century profession that I happen to be in goes away but we get to those things, so be it.

I enjoy coding. I still do it without AI sometimes. It's a pleasant activity to be good at. But I don't kid myself that my feelings about it are all that important in the grand scheme of things.
> will there be a day that AI will know what are "the right things to build" and have the "agency" (or illusion of) to do it better than an AI+human (assuming AI will get faster to the "build things right" phase, which is not there yet)

Of course, and if LLMs keep improving at current rates it will happen much faster than people think. Arguably you don't need junior software engineers anymore. When you also don't need senior software engineers anymore, it isn't that much of a jump to not needing project managers, managers in general, or even software companies at all. Most people, in order to protect their own egos, will assume *their* job is safe until the job one rung down from them disappears, and then the justified worrying will begin.

People on the "right things to build" track love to point out how bad people are at describing requirements, and so assume their jobs as subject matter experts and/or customer-facing liaisons are safe. But does it matter how bad people are at describing requirements if iteration is lightning fast with the human element removed? Yes, maybe someone who needs software and who isn't historically some sort of software designer will have to prompt the LLM 250 times to reach what they really want, but that'll eventually still be faster than involving any humans in a single meeting or phone call. And a lot of people just won't need software as we currently think about it at all; they'll just be passing one-off tasks to the AI.

The real question is what happens when the labor market for non-physical work completely implodes as AI eats it all. Based on current trends, I'm going to predict that in terms of economics and politics we handle it as poorly as possible, leading to violent revolution and possible societal collapse, but I'd love to be wrong.
> "build the right things" [vs] "build things right" I think this (frequent) comparison is incorrect. There are times when quality doesn't matter and times that it does. Without that context these discussions are meaningless . If I build my own table no one really gives a shit about the quality besides me and maybe my friends judging me. But if I sell it, well then people certainly care[0] and they have every right to. If I build my own deck at my house people do also care and there's a reason I need to get permits for this, because the danger it can cause to others. It's not a crazy thing to get your deck inspected and that's really all there is to it. So I don't get these conversations because people are just talking past one another. Look, no one gives a fuck if you poorly vibe code your personal website, or at least it is gonna be the same level as building your own table. But if Ikea starts shipping tables with missing legs (even if it is just 1%) then I sure give a fuck and all the customers have a right to be upset. I really think a major part of this concern with vibe coding is about something bigger. It is about slop in general. In the software industry we've been getting sloppier and sloppier and LLMs significantly amplify that. It really doesn't matter if you can vibe code something with no mistakes, what matters is what the businesses do. Let's be honest, they're rushing and don't care about quality because they have markets cornered and consumers are unable to accurately evaluate products prior to purchase . That's the textbook conditions for a lemon market. I mean the companies outsource tech support so you call and someone picks up who's accent makes you suspicious of their real name being "Steve". After all, it is the fourth "Steve" you've talked to as you get passed around from support person to support person. The same companies who contract out coders from poor countries and where you find random comments in another language. That's the way things have been going. More vaporware. More half baked products. So yeah, when you have no cake the half baked cake is probably better than nothing. At home it also doesn't matter if you're eating a half baked cake or one that competes with the best bakers in the world. But for everyday people who can't bake their own cakes, what do they do? All they see is a box with a cake in it, one is $1, another for $10, and another other is $100. They look the same but they can't know until they take a bite. You try enough of the $1 cakes and by the time you give up the $10 cakes are all gone. By the time you get so frustrated you'll buy the $100 cake they're gone too. I don't dislike vibe coding because it is "building things the wrong way" or any of that pretentious notion. I, and I believe most people with a similar opinion, care because "the right things" aren't being built . Most people don't care how things were built, but they sure do care about the result. Really people only start caring about how the sausage is made when they find out that something distasteful is being served and concealed from them. It's why everyone is saying "slop". So when people make this false dichotomy it just feels like people aren't listing to what's actually being said. [0] Mind you, it is much easier for an inexperienced person to judge the quality of a table than software. You don't need to be a carpenter to know a table's leg is missing or that it is wobbly but that doesn't always hold true for more sophisticated things like software or even cars. 
If you haven't guessed already, I'm referencing lemon markets: https://en.wikipedia.org/wiki/The_Market_for_Lemons
Plenty of people cycle on a fixie too. So what?

C, especially modern C, does provide metaprogramming and abstraction facilities. In practice, you can even get things like the "defer" construct from other languages: https://lwn.net/Articles/934679/

The question isn't "Can I write a game in C?". Yes, of course you can, and it's not even that painful. The question is "Why would you?", and then "Why would you brag about it?"

> C++ covers my needs, but fails my wants badly. It is desperately complicated. Despite decent tooling it's easy to create insidious bugs. It is also slow to compile compared to C. It is high performance, and it offers features that C doesn't have; but features I don't want, and at a great complexity cost.

C++ is, practically speaking, a superset of C. It being "complicated"? The "insidious bugs"? It being "slow to compile"? All self-inflicted problems. The author of this article can't even fall back on the "well, my team will use all the fancy features if I let them use C++ at all!" argument that pro-C-over-C++ people often lean on: he's the sole author of his projects! If he doesn't want to do template metaprogramming, he... just doesn't do it.

I don't read these sorts of articles as technical position papers. People say, out loud, "I use C and not C++" to say something about themselves. ISTM that in certain circles there's this perception that C is somehow more hardcore. Nah. Nobody's impressed by using it over a modern language. It really is like a fixie bicycle.
> Drivers in the UK must be able to read a number plate from 20 metres away, according to the Driver and Vehicle Licensing Agency (DVLA).

Unless I've botched the math, and/or what the internet tells me about the size of the characters on a UK number plate is wrong, this seems a bit overboard. The internet is telling me that the characters are 79mm tall and 50mm wide (except for '1' and 'I') with a 14mm stroke.

My eyes right now are about 250mm from my monitor. Something that is 79mm tall and 20m away would have the same angular size as something 250mm away that is 79mm / 20m x 250mm = 0.9875mm tall. If I set the zoom to 75% in Chrome, that is the size of the numbers on this page in the timestamps and the submission points and comment counts. It is about half the size of the numbers in the text box that I'm writing this comment in.

I've just taken a photo of that and will include a link to show how small it is [1]. In it I'm holding a ruler next to the left side of the text. The "180" up where it says "180 points" is what you have to be able to read to pass the test. (If you can't see the photo because Imgur blocks your country, just grab a ruler, hold it vertically 25cm in front of you; the apparent size of the space between the mm marks is the size of character you need to read.)

I have no idea what road signs and markings are like in the UK, but in the US, in ~50 years of driving, I don't think I've ever needed to read anything anywhere near that small.

[1] https://imgur.com/a/NGuPdfF
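For anyone who wants to double-check the similar-triangles arithmetic above, here's a minimal sketch (the 250mm viewing distance is just my own eye-to-monitor figure from above, not anything official):

    fn main() {
        // A character of height h at distance d subtends the same visual angle
        // as one of height h * (d_near / d) at the nearer distance d_near.
        let h = 79.0_f64;   // plate character height, mm
        let d = 20_000.0;   // DVLA test distance, mm (20 m)
        let d_near = 250.0; // eye-to-monitor distance, mm
        println!("{:.4} mm", h * d_near / d); // prints "0.9875 mm"
    }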
To pick a few (from the server crate, because that's where I looked):

- The StoreError type is stringly typed and generally badly thought out. Depending on what they actually want to do, they should either add more variants to StoreError for the different failure cases, replace the strings with sub-types (probably enums) to do the same, or write a type-erased error similar to (or wrapping) the ones provided by anyhow, eyre, etc., but with a status code attached. They definitely shouldn't be checking for substrings in their own error type for control flow (see the sketch after this list).

- So many calls to String::clone [0]. Several of the ones I saw were actually only necessary because the function took a parameter by reference even though it could have (and I would argue should have) taken it by value. (If I had to guess, I'd say the agent first tried to do it without the clone, got an error, and implemented a local fix without considering the broader context.)

- A lot of errors are just ignored with Result::unwrap_or_default or the like. Sometimes that's the right choice, but from what I can see they're allowing legitimate errors to pass silently. They also treat the values they get in the error case differently, rather than e.g. storing a Result or Option.

- Their HTTP handler has an 800-line closure which they immediately call, apparently as a substitute for the still-unstable try_blocks feature. I would strongly recommend moving that into its own full function instead.

- Several ifs which should have been matches.

- Calls to Result::unwrap and Option::unwrap. IMO in production code you should always at minimum use expect instead, forcing you to explain what went wrong / why the Err/None case is impossible.

It wouldn't catch all/most of these (and from what I've seen might even induce some, if agents continue to pursue the most local fix rather than removing the underlying cause), but I would strongly recommend turning on most of clippy's lints if you want to learn Rust.

[0] https://rust-unofficial.github.io/patterns/anti_patterns/borrow_clone.html
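To make the first point concrete, here's a minimal sketch of what a structured StoreError could look like. The variant names and the status-code mapping are my own invention for illustration, not the project's actual types:

    use std::fmt;

    // Hypothetical variants; a real version would name these after the
    // store's actual failure cases instead of matching on message substrings.
    #[derive(Debug)]
    enum StoreError {
        NotFound { key: String },
        Conflict { key: String },
        Io(std::io::Error),
    }

    impl fmt::Display for StoreError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                StoreError::NotFound { key } => write!(f, "no entry for key {key}"),
                StoreError::Conflict { key } => write!(f, "conflicting write for key {key}"),
                StoreError::Io(e) => write!(f, "storage I/O error: {e}"),
            }
        }
    }

    impl std::error::Error for StoreError {}

    impl StoreError {
        // Control flow can now match on variants instead of substrings,
        // e.g. when the HTTP handler picks a status code.
        fn status_code(&self) -> u16 {
            match self {
                StoreError::NotFound { .. } => 404,
                StoreError::Conflict { .. } => 409,
                StoreError::Io(_) => 500,
            }
        }
    }

    fn main() {
        let err = StoreError::NotFound { key: "user:42".into() };
        println!("{err} -> HTTP {}", err.status_code()); // no substring checks needed
    }

If the set of failure cases gets unwieldy, the other option mentioned above, wrapping a type-erased anyhow/eyre-style error with a status code attached, achieves the same end without enumerating everything.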
I'm just using Gemini in the browser. I'm not ready to let it touch my code. Here are my last two prompts; for context, the project is about golf course architecture.

Me, including the architecture_diff.py file: I would like to add another map to architecture_diff. I want the map to show the level of divergence of the angle of the two shots to the two different holes from each point. That is, when you are right in between the two holes, it should be a 180 degree difference, and should be very dark, but when you're on the tee, and the shot is almost identical, it should be very light. Does this make sense? I realize this might require more calculations, but I think it's important.

Gemini's output was some garbage about a simple naive angle to the two hole locations, rather than using the sophisticated expected-value formula I'm using to calculate strokes-to-hole... thus worthless.

Follow-up from me, including the course.py and player.py files: I don't just want the angle, I want the angle between the optimal shots, given the dispersion pattern. We may need to update get_smart_aim in the player to return the vector it uses, and we may need to cache that info. We may need to update generate_strokes_gained_map in course to also return the vectors used. I'm really not sure. Take as much time as you need. I'd like a good idea to consider before actually implementing this.

Gemini's output now has a helpful response about saving the vector field as the different maps I'm trying to create are generated. This is exactly the type of code I was looking for.