Here's the actual statement from the European Commission: https://ec.europa.eu/commission/presscorner/detail/en/ip_26_312

It's important to note they aren't creating laws against infinite scrolling, but are ruling against addictive design and pointing to infinite scrolling as an example of it. The wording here is fascinating, mainly because they're effectively acting as arbiters of "vibes". They point to certain features they'd like changed, but there is no specific ruling around what you can and can't do.

My initial reaction was that this was a terrible precedent, but after thinking on it more I asked myself, "well, what specific laws would I write to combat addictive design?" Everything I thought of had some workaround that could be found, and would equally have terrible consequences in situations where the pattern is actually quite valuable. E.g. if you disallow infinite scrolling, what page sizes are allowed? Can I just have a page of 10,000 elements that lazy load?

Regardless of your take on whether this is EU overreach, I'm glad they're not implementing strict laws around what you can and can't do - there are valuable situations for these UI patterns, even if in combination they can create addictive experiences. Still, I do think that overregulation here will lead to services being fractured. The regulated friction of major platforms (e.g. Discord with ID laws) is on a collision course with the ease of vibe coding up your own. When that happens, these commissions are going to need to think long and hard about whether having a few large companies to watch over is better than having millions of small micro-niche ones.
ahhhh, every time the same discussion:

1. GDPR consent dialogs are not cookie popups; most things you see are GDPR consent dialogs.

2. GDPR consent dialogs are only required if you share data, i.e. spy on the user.

3. GDPR had from the get-go a bunch of exceptions, e.g. you don't need permission to store a same-site cookie indicating that you opted out of tracking _iff_ you don't use it for tracking. The same goes for a lot of other things where the data is needed for operation, as long as the data is only used for that thing and not given away (e.g. DDoS protection, bot detection, etc.).

4. You still had to inform the user, but this doesn't require any user interaction or accepting anything, nor does it need to be a popup blocking the view. A small notice in the corner of the screen with a link to the data policy is good enough - but only if everything you do falls under 3. or involves non-personal information. Furthermore, I think they recently updated it to not even require that - just having a privacy policy in a well-known place is good enough - but I have to double check. (And to be clear, this is for data you don't need permission to collect; like any data you collect, it's strictly use-case bound, and you still have to list how it's used, how long it's stored, etc., even if you don't need permission.) Also to be clear: if you accept the base premise of GDPR, it's pretty intuitive to judge whether something is an exception or not.

5. In some countries there are highly misguided "cookie popup" laws predating GDPR (they are actually about cookies, not data collection in general). These are national laws, and as such the EU would prefer to have them removed. Work on this is in progress but takes way too long. I'm also not fully sure about the state of that.

So in that context, yes, they should and want to kill "cookie popups". That just doesn't mean what most people think it does (as it has nothing to do with GDPR).
Building a TUI is easy now (hatchet.run)
Do Metaprojects (taylor.town)
This article does not make it clear what a metaproject actually is. That said, I _believe_ the author is describing a concept similar to something I do.

Choose a very small number of open-ended, broad projects. Each project should basically be an endless well of work. Moving the metaproject forward involves finishing several sub-projects, each of which is a respectable weekend or holiday project all on its own. A good choice of metaproject allows for a choice between a wide variety of new and interesting things to work on. Get bored or stuck? Do something else. There are so many things to do. You're still working on the same metaproject. Find something cool online that you want to experiment with? Find a way to frame it as an experiment or project under the umbrella of the metaproject.

For example, my overarching project is to develop my own computer system, from the custom CPU up to the operating system and applications, as completely from scratch as possible. This has led me to learn more about Verilog, electronics, soldering, computer architecture, RISC-V, emulators, you name it. At one point, I decided I needed to design my own high-level language for this thing. The compiler has itself become a metaproject where there's always something to work on: parsing, lexing, optimization passes, experiments in syntax, garbage collectors, writing a debugger, etc. Someday soon, I hope to be able to start a project to build video hardware with a sprite engine, like in those old 8-bit and 16-bit game systems. I'll mentally bill this under the umbrella of "working on my computer project."

I've been thinking of "that computer project" as a kind of life project that I'll plug away on here and there until the day I die. I wonder if this is how those old men who build boats feel about their boat. Hey, there's my own catchy phrase right there: "Build your boat"
It's a cool idea, but personally I don't like the implementation. I usually don't use monolithic tools that cram a lot of different solutions into one thing. First, especially if they're compiled, it's very hard to just modify them to do one extra thing I need without getting into a long development cycle. Second, they are usually inflexible, restricting what I can do. Third, they often aren't very composable. Fourth, they often aren't easily pluggable/extensible. I much prefer independent, loosely coupled, highly cohesive, composable, extensible tools. It's not a very "programmery" solution, but it makes it easier as a user to fix things, extend things, combine things, etc.

The Docker template you have bundles a ton of apps into one container. This is problematic as it creates a big support burden, build burden, and compatibility burden. Docker works better when you make individual containers of a single app, run them separately, and connect them with TCP, sockets, or volumes. Then the user can swap them out, add new ones, remove unneeded ones, etc., and they can use an official upstream project. Docker-in-Docker with a custom Docker network works pretty well, and the host is still accessible if needed.

As a nit-pick: your auth code has browser-handling logic. This is low cohesion, a sign of problems to come. And in your rsync code:

    sshCmd := fmt.Sprintf("ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=%q", proxyCmd)

I was just commenting the other day on here about how nobody checks SSH host keys and how SSH is basically wide open due to this. Just leaving this here to show people what I mean. (It's not an easy problem to solve, but ignoring security isn't great either.)
SmartCheckr was founded/owned by Hoan Ton-That and Richard Schwartz. So they transferred the assets of a company they already founded/owned to another company that they founded/owned: "Clearview AI was founded in 2017 by Hoan Ton-That and Richard Schwartz after transferring the assets of another company, SmartCheckr, which the pair originally founded in 2017 alongside Charles C. Johnson" [0].

Other notable white supremacists with material ties in the article: Chuck Johnson [1] collaborated with Ton-That and was "in contact about scraping social media platforms for the facial recognition business"; he ran a white supremacist site (GotNews) and white supremacist crowdfunding sites. Douglass Mackey [2], a white supremacist who consulted for the company. Tyler Bass [3], an employee, member of multiple white supremacist groups, and Unite the Right attendee. Marko Jukic [4], an employee and syndicated author in a publication by white supremacist Richard Spencer.

The article also goes into the much larger ecosystem of AI and facial recognition tech and its ties to white supremacists and the far right. So there are not just direct ties to Clearview AI itself, but a network of surveillance companies that are ideologically and financially tied to the founders and associates.

[0] https://en.wikipedia.org/wiki/Clearview_AI
[1] https://en.wikipedia.org/wiki/Charles_C._Johnson
[2] https://en.wikipedia.org/wiki/Douglass_Mackey
[3] https://gizmodo.com/creepy-face-recognition-firm-clearview-ai-sure-has-a-lo-1842740604
[4] https://www.motherjones.com/politics/2025/04/clearview-ai-immigration-ice-fbi-surveillance-facial-recognition-hoan-ton-that-hal-lambert-trump/
Cops are legally forbidden from surveilling everyone at all times using machines. Explicitly so. Yet if a company starts up and surveils everyone at all times, and their only customer is cops, it's all okay somehow. The cops don't even need a warrant anymore.

What's worse is that third-party doctrine kills your rights worse than direct police surveillance. Imagine, if you will, back in the day of film cameras: the company developing your film will tell the police if you give them literal child porn, but otherwise they don't. But imagine if they kept a copy of every picture you ever took, just stuffed it into a room in the back, and your receipt included a TOS about you giving them a license to own a copy "for necessary processing". Now, a year after you stopped using film cameras, the cops ask the company for your photos. The company hands them over. You don't get to say no. The cops don't need a warrant, even though they 100% need a warrant to walk into your home and grab your stash of photos.

Why is this at all okay? How did the Supreme Court not recognize how outright stupid this is? We made an explicit rule for video rental stores to not be able to do this! Congress at one time recognized the stupidity and illegal nature of this! Except they only did that because a politician's video rental history was published during his attempt at confirmation. That law is direct and clear precedent that service providers should not be able to give your data to the cops without your consent, but this is America, so precedent is only allowed to help businesses and cops.
Sandwich Bill of Materials (nesbitt.io)
This is the attitude that made me keep my patches to myself. Hey, you, FOSS maintainer, whoever you are:

- If you make your project public, it means you want and expect people to use it. You could at least write some documentation, so I don't waste my time and then find out, days later, that it isn't capable of what I need or that I simply don't know how to use it.

- If you set up a bug tracker, then at least have the decency to answer bug reports. Bugs make it unusable. Someone took the time to write those bug reports. I'm not asking you to fix them (I lost that hope decades ago), but you could at least give a one-line answer or two lines of guidance for another person who might want to try a fix: "I don't have time to fix it, sorry, but it's probably because of <that thing> in <that file>." I mean, you wrote the stuff! One minute of thinking on your part is the same as six hours of digging for someone who has never seen the code before.

- If you open it up to pull requests, it means you want people to contribute. Have the decency to review them. Someone took time away from their job, family, or entertainment to write those PRs. Ignoring them because you don't need that feature, aren't affected by the bug, or simply because of code aesthetics is an insult to the one who wrote it.

PS: And no, don't expect someone else to write the documentation for your code. Same as with the bugs: one minute of your time is six hours of work for someone else.

If you can't do at least these things, just say it's abandoned on the front page and be done with it.
Counterpoints:

- You are entitled to human decency. Maintainers don't get to be rude just because they run a project. This is common in a lot of projects: maintainers have power, and this allows them to be rude without consequence. Not OK.

- As a maintainer, if you publish your work as open source, you are already acknowledging that you are engaging with an entire community, culture, and ethos. We all know how it works: you put a license on your work that (often, but not always) says people need to share their changes. So those people may share their changes back to you, assuming you might want to integrate them. You know this is going to happen, so you need to be prepared for it. That is a skill to learn.

- Since maintainers do owe basic human politeness, and they know people will be interacting with them, maintainers also owe this culture some form of communication of their intentions. If they don't want to take any changes, put that in CONTRIBUTING and turn off GH PRs. If they want to take changes, but no AI changes, put that in CONTRIBUTING. If they don't want to do support, turn off GH Issues. If they require a specific 10-point series of steps before they look at a PR or Issue, put that in CONTRIBUTING. It's on the user to read this document and follow it, but it's on you to create it, so they know how to interface with you. Be polite, and tell people what you will and won't accept in CONTRIBUTING (and/or SUPPORT), even if it's just "No contributing" or "No support".

(My personal issue: I spend hours working on preparing an Issue or PR to fix someone's project, and they ignore or close it without a word. Now I don't want to contribute to anything. This is bad for the open source community.)
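To make the CONTRIBUTING point concrete, a minimal file along these lines might look like the following (the specific policies and wording here are hypothetical, just to illustrate the idea):

```markdown
# Contributing

- Bug reports: open a GitHub Issue with steps to reproduce. I triage
  roughly monthly; there is no guaranteed response time.
- Pull requests: welcome for bug fixes only. Feature PRs will be
  closed without review - open an Issue first to discuss.
- AI-generated changes: not accepted.
- Support: none. This is a hobby project maintained in spare time.
```

Even a blunt version of this saves both sides hours: contributors know what to expect before investing work, and maintainers can close out-of-policy requests by pointing at the document instead of arguing case by case.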
Faster Than Dijkstra? (systemsapproach.org)
Suppose you have n elements to sort and you don't have duplicates. Each element is a string of L fixed-size symbols (bits, bytes, whatever). And suppose that there are at least n/10 unique elements. You may replace 10 with any other constant. This means that, as you add more elements, you are not just adding more duplicates of the same values.

In order to have n/10 unique elements, you need to be able to construct n/10 different strings, which means that L needs to be at least log_(base = how many distinct symbols you have)(n/10), which is O(log n). So you have L * n = O(n log n) symbols to write down, and even reading the input takes time O(n log n).

As a programmer, it's very, very easy to think "64-bit integers can encode numbers up to 2^64, and 2^64 is HUUUUGE, so I'll imagine that my variables can store any integer". But asymptotic complexity is all about what happens when inputs get arbitrarily large, and your favorite computer's registers and memory cells cannot store arbitrarily large values, so you end up with extra factors of log n that you need to deal with.

P.S. For fun, you can try to extend the above analysis to the case where the number of unique elements is sublinear in the number of elements. The argument almost carries straight through if there are at least n^c unique elements for 0 < c < 1 (as the c turns into a constant factor when you take the log), but there's room to quibble: if the number of unique elements is sublinear in n, one might argue that one could write down a complete representation of the input, and especially the sorted output, in space that is sublinear in L * n. So then the problem would need to be defined a bit more carefully, for example by specifying that the input format is literally just a delimited list of the element values in input order.
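The counting argument above can be sketched numerically (a minimal illustration in Go; the concrete n = 10^6 and the constant 10 are arbitrary choices, not from the original argument):

```go
package main

import (
	"fmt"
	"math"
)

// minSymbolLength returns the minimum string length L needed to
// represent `unique` distinct strings over an alphabet of `base`
// symbols: L >= log_base(unique), rounded up to a whole symbol.
func minSymbolLength(unique, base float64) int {
	return int(math.Ceil(math.Log(unique) / math.Log(base)))
}

func main() {
	n := 1_000_000.0

	// To have at least n/10 = 100,000 unique bitstrings (base 2),
	// each element needs L >= ceil(log2(100000)) = 17 bits.
	L := minSymbolLength(n/10, 2)
	fmt.Println("min bits per element:", L)

	// Total input size is then at least n*L symbols, i.e. the
	// "just reading the input" cost already grows like n log n.
	fmt.Println("min total symbols:", int(n)*L)
}
```

Doubling n to 2^21 and beyond keeps adding about one bit per element, which is exactly the hidden log n factor the comment is pointing at.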