Hopefully your question is sarcasm, as it should be obvious why this is a terrible idea on many fronts. In case it isn't, for starters: especially given the way the world seems to be changing these days, if you put all of your critical supplies in the hands of another nation, especially an adversary like China, you are basically at their beck and call when things get ugly. Even non-adversary states can undergo regime change, or simply decide not to deal with you, and all of a sudden everything is completely out of your control. Others basically own you at that point, which is obviously unacceptable from a defense or critical-logistics standpoint.

On a whole other level, it's incredibly immoral and stupid that we're OK with externalizing the problems that labor and environmental standards protect against. If you wouldn't accept having your kin or friends work in the sort of conditions you see in many exploitative "cheap labor" centers overseas (so much so that it's codified in law), why is it OK to just pawn it off on another nation's people? If you wouldn't accept the environmental damage that other countries seem willing to inflict, why is it suddenly OK when laundered as free trade, especially given how concerned we are with the global reach of environmental problems?

If there were ever an application for tariffs that made sense, it would be to ding the shit out of products and services that come from states that don't meet minimum levels of labor and environmental law. The only reason we don't do this is that we're addicted to cheap shit and can't think more than maybe a year ahead.
As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex: any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out. I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap, and lead meetings where I'd work to get people on board. Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic: in order to do this, we'd need many more resources than the initial plan called for, because of these devastating unexpected things I have discovered! Drat!" I would then propose options, usually three: continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing, because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons. Then the thing we were waiting on happens, and I forget to mention it. Leadership is excited about something else by that point anyway, so we never revisit the dumb project again.

Some specific thoughts for you:

1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that this will help with some issues in the future.

2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.

3. Put the problems in terms they'll understand. They see that the agent wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff: what new features were those developers not working on because they were spending time here?

4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org, to try against new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.
I always plug my laptop into one or two external displays. Even without configuring distinct DPIs per monitor, that was not a problem for me, because on the small screen of the laptop I kept only some less important application, like the e-mail program, while working on the bigger external displays, so I had no reason to move windows between the laptop's small screen and the bigger external displays. But like I said, setting a different DPI value for each monitor was added to X11 many years ago; I do not remember how many.

I do not see why one would want to move windows between the external displays and the laptop screen when external displays are connected, so I consider this, i.e. moving windows between small screens and big screens, a niche use case. I agree with you that having big screens and small screens simultaneously is not niche; I was not referring to that.

Without a per-screen DPI value you cannot control the ratio between the sizes of a window when it is moved between the big screen and the small screen. But even when you control the ratio, moving windows between screens of different sizes does not work well, because you must choose some compromise: if you keep the same physical size, some windows from the big screen will not fit on the small screen, and if you make the windows occupy the same fraction of the screen size, they will change size while moving and be more difficult to use on the small screen. But like I have said, this no longer matters, as the problem has been solved even for this niche use case. I do not even remember whether this problem still existed by the time Wayland became usable.
I couldn't agree more. Having spent a lot of time recently with a language that curries like this, it seems very obviously a misfeature.

1. Looking at a function call, you can't tell whether it returns data, or a function from some unknown number of arguments to data, without carefully examining both its declaration and its call site.

2. Writing a function call, you can accidentally get a function rather than data if you leave off an argument; coupled with pervasive type inference, this can lead to some really tiresome compiler errors.

3. Functions which return functions look just like functions which take more arguments and return data. (Card-carrying functional programmers might argue these are really the same thing, but semantically they aren't at all: in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?)

3a. Because of point 3, our codebase has a trivial wrapper to put around functions when your function actually returns a function (so make_string_comparator_for_locale has a type like Locale -> Function<string -> string -> order>), so now if you actually want to return a function, there's boilerplate at the return and call sites that wouldn't be there in a less 'concise' language!

I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase. I think academic and hobby languages, and so functional languages, are particularly prone to this. I think implicit currying is one of these features.
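Points 1 and 2 above can be seen in a few lines of OCaml. This is a minimal sketch with made-up names (`add3`, `make_adder`), not code from the codebase being discussed:

```ocaml
(* A three-argument function in a language with implicit currying. *)
let add3 a b c = a + b + c

(* Leaving off an argument is NOT an error at the call site:
   it silently produces a function (int -> int), not data. *)
let partial = add3 1 2
let result = partial 4        (* 1 + 2 + 4 = 7 *)

(* A function that deliberately returns a function looks identical
   from the outside; only the declaration reveals the intent. *)
let make_adder n = fun m -> n + m
let add5 = make_adder 5

let () = assert (result = 7)
let () = assert (add5 2 = 7)
```

The type error from a forgotten argument typically surfaces far from the mistake, once `partial` is used where an `int` was expected, which is exactly the "tiresome compiler errors" complaint above.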
Apologies, I was focused on the usual pairing in this space and not the more subtle one you're talking about. As others have pointed out, there isn't really a semantic difference between the two: both approaches to function parameters produce the same effect. The differences are purely in "implementation," either theoretically or in terms of systems-building.

From a theoretical perspective, a tuple expresses the idea of "many things," while a multi-argument parameter list expresses the idea of both "many things" and "function arguments." Thus, for cleanliness of your definitions, you may want to separate the two, i.e., require that functions have exactly one argument and pass a tuple when multiple arguments are required. This theoretical cleanliness does yield concrete gains: writing down a formalism for single-argument functions is decidedly cleaner (in my opinion) than for multi-argument functions, and implementing a basic interpreter from that formalism is, subsequently, easier.

From a systems perspective, there is a clear downside in this space. If tuples exist on the heap (as they do in most functional languages), you induce a heap allocation whenever you want to pass multiple arguments! This pitfall is evident in the semi-common beginner's mistake with OCaml algebraic datatype definitions, where the programmer inadvertently wraps the constructor's argument type in parentheses, thereby specifying a one-argument constructor that takes a tuple instead of a multi-argument constructor (see https://stackoverflow.com/questions/67079629/is-a-multiple-argument-constructor-ever-useful-over-a-single-tuple-argument-cons for more details).
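The OCaml constructor pitfall described above can be shown in a couple of lines. A minimal sketch with made-up type names (`boxed`/`unboxed`):

```ocaml
(* Parenthesized: a ONE-argument constructor whose single argument is a
   tuple. The payload is a first-class tuple value (separately boxed). *)
type boxed = Pair of (int * int)

(* Unparenthesized: a TWO-argument constructor with inline fields.
   No separate tuple is allocated for the arguments. *)
type unboxed = Multi of int * int

let t = (1, 2)
let p = Pair t                (* fine: Pair takes one tuple argument *)
(* let m = Multi t            -- type error: Multi wants two arguments,
                                 not a tuple *)
let m = Multi (1, 2)

let sum_pair (Pair (a, b)) = a + b
let sum_multi (Multi (a, b)) = a + b

let () = assert (sum_pair p = 3 && sum_multi m = 3)
```

Both declarations accept the surface syntax `Ctor (1, 2)`, which is why beginners rarely notice the difference until they try to pass an existing tuple, or until the extra allocation shows up in a profile.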
I am quite passionate about algorithms, do lots of katas on Codewars for fun, and have done plenty of LeetCode problems. Then I had a technical interview where I was asked to implement a simple algorithm for tris (aka tic-tac-toe), and my mind went completely blurry. I was tired; I'm in the EU, and this was for a San Francisco startup interviewing me at their lunch time, which is very late in Italy. And I generally don't like being interviewed/tasked. Of course the solution is beyond simple, but I struggled even at brute-forcing it. I can easily do these kinds of exercises (and much harder ones, obviously) for fun, but not when interviewed.

I struggled with the same thing in university. I graduated with 104/110 even though I was consistently among the most prepared, and I learned in order to learn, not to pass exams (plenty of stellar performers didn't remember anything a few weeks after exams). Once I asked a professor why he graded me 27/30 even though I spent an hour answering in detail on everything, including the hardest questions. "Because you never appear convinced when you answer." I get nervous; I don't like proving my knowledge this way. I constantly rethink what I'm saying, or even how I sound. I forget how to type braces or backticks. I did not have any issues when not being interviewed, or in written exams, or during my research period, when I published 3 papers that have been highly cited.

I am just not a fan of these types of interviews; they tell absolutely nothing about the candidate. Interview me with live coding or a whiteboard and you'll get a very wrong impression. Meanwhile, I've seen LeetCode black belts spend most of their time logged into Tekken 7 on Discord, consistently creating work and providing negative value while somehow always selling their high skills. I have found much more value in looking at personal projects and OSS contributions. I've never asked a single one of these BS questions and never failed at hiring anyone. Not once.
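For what it's worth, the brute-force check in question really is a few lines once the nerves are gone. A minimal OCaml sketch (my own naming and board representation, not what the interviewer asked for): the board is a row-major list of 9 cells, and we scan the eight winning lines.

```ocaml
(* The eight winning lines of a 3x3 board, as index triples. *)
let lines =
  [ (0, 1, 2); (3, 4, 5); (6, 7, 8);   (* rows *)
    (0, 3, 6); (1, 4, 7); (2, 5, 8);   (* columns *)
    (0, 4, 8); (2, 4, 6) ]             (* diagonals *)

(* Return Some 'X' / Some 'O' if a player has three in a line, else None.
   Cells are 'X', 'O', or ' ' for empty. *)
let winner board =
  let get i = List.nth board i in
  let check (a, b, c) =
    let x = get a in
    if x <> ' ' && x = get b && x = get c then Some x else None
  in
  List.find_map check lines

let () =
  let b = ['X'; 'X'; 'X'; 'O'; 'O'; ' '; ' '; ' '; ' '] in
  assert (winner b = Some 'X')
```

Which, of course, proves the point: trivial at home at my own pace, a blur at midnight in front of strangers.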