> The same people saying this today had hot takes on Kyiv falling in '21.

Please note that Kyiv not falling after a week in '22 (assuming you meant '22, not '21) was pure luck. The Russians had an extreme advantage in manpower and firepower. They made a big mistake by using their army against its own doctrine - not bombing/shelling targets before attacking, which is what the Russian army was designed for. But their losing the war (at least in the first week) came down to a few lucky dice rolls for us. "Us" meaning Europe, but also me personally as a Polish expat, knowing my brothers and friends are not dying right now fighting a Russian army with all the Ukrainians conscripted into it. The lucky dice rolls I can recall from memory:

1. Shooting down one of the two military transport planes carrying Russian special forces who were to take Kyiv's Hostomel airport and open an air bridge. The group from the plane that survived did take the airfield, but they couldn't decide on their own to move on and take the airport buildings - no distributed command in the Russian army at that point. Thanks to that, the local territorial defence managed to kill these elite forces with relative ease.

2. Fast and generous support from the UK in the form of Javelins, which limited the Russian heavy-equipment advantage (sorry if I don't credit the countries involved correctly).

3. Fast and generous aid with post-Soviet equipment from the old Warsaw Pact countries. These tanks could be used right away because they required no retraining.

4. The general incompetence and negligence of duty that was systemic in the Soviet Union and is still systemic in Russia. To that we owe vehicles running out of fuel, or having their tires burst because, against orders to move them regularly, they sat for years with the sun damaging one side of the tires, while those responsible for maintenance were drinking vodka and eating pierogi with kielbasa.
My pain points are mostly in the CPU debugger (I'm not using much of the actual "IDE features" of Xcode beyond the regular edit-compile-debug loop anyway). Starting a 'cold' debug session into a UI application may take 10-ish seconds until applicationDidFinishLaunching is reached, and most of that time seems to be spent loading symbols for the hundreds of framework dylibs that get loaded during application start (which I never even need, because I can't step into system frameworks anyway). And seriously, why are there even hundreds of system dylibs in a more or less hello-world-style Metal application with minimal UI? This problem seems to go back to ancient times, but it gets worse the more bloated macOS UI processes become (i.e. the more system frameworks they load at start).

The debugger's variable view panel is so bare-bones that it looks like it was ripped straight out of an '80s home computer monitor program. When debug-stepping, the debugger frontend quite often gets stuck for tens of seconds at completely unpredictable places, waiting for the debugger to respond (it feels like a timeout). Step-debugging in general feels sluggish even compared to VSCode with lldb. For comparison, VS2026 isn't exactly a lightweight IDE either, but debugging sessions start instantly, debug-stepping is immediate, and the CPU debugger is much more feature-rich than Xcode's. In Xcode, everything feels like it was added as a checklist item but never actually used by the Xcode team (I do wonder what they use to develop Xcode; I doubt they're dogfooding their own work). The one good and useful thing about Xcode is the Metal debugger, though.
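(For what it's worth, the symbol-loading part can sometimes be mitigated at the lldb level. A minimal sketch, assuming a reasonably recent lldb and that Xcode picks up ~/.lldbinit - these settings defer symbol loading for framework dylibs until they're actually touched; I can't promise Xcode's debugger frontend honors them in every configuration:

    # ~/.lldbinit - lldb reads this file at startup
    # Don't eagerly load symbols for every image the process maps in:
    settings set target.preload-symbols false
    # Newer lldb versions also support fully on-demand symbol loading:
    settings set symbols.load-on-demand true

The second setting only exists in newer lldb releases, so it may be a no-op on older toolchains.)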
"I don't have an issue with it" tells me you've never used anything else. Have you tried Slack? Zulip? Mattermost? Fucking... IRC from 1988? Teams isn't just mediocre, it's aggressively hostile to basic usability. The camera bar sits at the top of the window, directly blocking where you're supposed to position your camera for eye contact. Chat organisation is broken: you get duplicate groups because the order people were added matters somehow. Notifications phantom in and out. Reactions are buried in an activity feed. Search is useless. You can't reliably paste text without major formatting issues. The mobile app logs you out randomly and doesn’t tell you unless you manually check it . Desktop notifications don't sync with read state. Files uploaded to chat don't appear in the Files tab. The "new Teams" broke half the features that worked in classic Teams. Presence status is a coin flip. Audio settings reset themselves between calls. Screen sharing has a 50/50 chance of sharing the wrong window. The difference between a chat and a channel is arbitrary and confusing. You can't edit messages older than a few hours. Threading is bolted on and barely works. Performance is inexcusable. Multiple gigabytes of RAM to display text messages and lag constantly on modern hardware. How do you make a chat application lag ? It's rendering text, not computing fluid dynamics. Opening the application takes 30 seconds on an SSD. Switching between chats stutters. Typing has input delay. The real problem isn't that Teams is terrible. It's that "it technically functions" has become an acceptable standard. When you've never experienced better, "it works" seems fine. But Teams is what happens when a monopoly position means you don't have to care about quality. Microsoft has unlimited resources and still ships this. Even Skype for Business was more stable, and in Skype for Business you couldn't reliably select text. That's how low the bar is.
Your claim that "if it was consistently bad, it would have been replaced already" totally misunderstands how enterprise software decisions work, even in organisations where people value their time.

Switching costs are enormous. Your organisation has Teams integrated with its Office 365 licensing, which means you're already paying for it. Replacing it with Slack means paying $8-12 per user per month on top of your existing Office costs, because you still need Outlook, Word, Excel, and SharePoint. For a 500-person company, that's an additional $48,000-72,000 annually for a tool that overlaps with something you've already paid for. Finance departments kill these proposals before they reach decision-makers, regardless of how much time is wasted on Teams' inefficiencies.

The IT burden of moving is also substantial. Migrating chat history, file repositories, and integrations takes months. You need to retrain users, update documentation, reconfigure SSO, and migrate bots and webhooks. Most IT departments are already understaffed. Unless Teams is completely non-functional, that project never gets prioritised over security updates, infrastructure maintenance, or business-critical requests.

Organisations don't optimise for employee time the way you seem to think they do. The calculus isn't "is this tool good", it's "is this tool bad enough to justify the cost and disruption of replacing it". That threshold is extraordinarily high. People tolerate inefficient tools because the alternative is fighting procurement, convincing IT, and enduring months of migration pain. Lotus Notes persisted in enterprises for over a decade despite being universally despised, because the switching cost was too high. SAP is notorious for terrible UX but remains entrenched because migration is a multi-year project costing millions.

Your workflow actually proves the point. You use email as your source of truth because Teams' search and organisation aren't reliable enough. You manually distribute meeting minutes and transcripts because you don't trust Teams as a system of record. You've built workarounds to compensate for the tool's deficiencies and normalised them as standard practice. That's not Teams working well, that's your organisation adapting to work around its limitations.

Let me address the specific issues you haven't encountered:

- Teams' resource usage is measurable and documented. PC World's 2023 benchmarks showed Teams using 1.4GB of RAM at idle compared to 500MB for Slack and 350MB for Discord. ExtremeTech's testing found Teams taking 22 seconds to cold-start versus 4 seconds for Slack on identical hardware. r/sysadmin consistently reports Teams causing performance problems on machines with 8GB of RAM, forcing hardware upgrades. Microsoft implicitly acknowledged this by completely rebuilding Teams in 2023, promising 2x faster performance and 50% less memory. The fact that they had to rewrite the entire application is an admission that the performance problems were architectural. (It didn't help, though.)

- Microsoft's own documentation acknowledges search limitations. The search index doesn't include all message content beyond a certain threshold. Results ranking is poor enough that Microsoft published a support article explaining how to use advanced search operators to find messages, which rather proves the basic search doesn't work. The r/MicrosoftTeams subreddit has over 3,000 posts about search not returning results that users know exist.
IT administrators on Spiceworks report having to advise users to "use Ctrl+F in the browser if Teams search doesn't work", which is a workaround for a broken core feature.

- Files uploaded in chat messages don't appear in the Files tab automatically. They're stored in a hidden SharePoint folder that most users don't know how to access. Microsoft's official guidance for this is to manually move files to the Files tab or use SharePoint directly. Is that an edge case? Is it fuck: it's documented in Microsoft's own support articles as expected behaviour. If your organisation hasn't hit this, it's because you're not using Files tabs, or you've trained people to work around it.

- Microsoft's Tech Community forums have literally thousands of threads about notification badges showing unread messages that don't exist (5,000+ when I last checked), or notifications not appearing for actual messages. Microsoft's official response, posted repeatedly since 2020, is "we're aware of this issue and investigating". Six years later, it's still not fixed. The fact that you haven't noticed might mean your notification settings are configured differently, or that you've unconsciously learned to ignore the notification count as unreliable.

- Going back to r/MicrosoftTeams: the community continually documents persistent issues with the mobile app - notifications not syncing with desktop read state, automatic logouts requiring re-authentication, messages appearing in different orders on mobile versus desktop, and the app draining battery faster than comparable applications. GitHub's issue tracker for Teams mobile shows hundreds of unresolved bugs (then again, I suppose what popular app doesn't). You mentioned you don't use mobile, which explains why you haven't experienced this.

- Regarding the chat-versus-channel architecture: Microsoft's own UX research lead, cited in a 2022 Verge interview, acknowledged that the distinction between chats and channels confuses users but can't be changed due to early architectural decisions. The duplicate-groups issue I mentioned isn't a bug; it's a consequence of treating "Alice, Bob, Charlie" as a different entity from "Alice, Charlie, Bob". This is documented in Microsoft's developer documentation as intended behaviour. Your organisation either hasn't hit this scale yet or has developed unofficial naming conventions to work around it.

You've been using Teams for four months. These issues emerge over time, at scale, or in specific usage patterns. When you're managing multiple projects with overlapping team members across different time zones and need to reference decisions made months ago, the organisational problems compound. When you're working on older hardware or need reliable mobile access, the performance issues become blocking. When you need to find a specific technical discussion from six months ago buried in one of 40 channels, the search deficiencies become critical.

The question isn't whether Teams works for your specific, constrained use case after four months. The question is whether it's good software compared to the alternatives, and whether the problems people report are valid. The evidence says they are. The performance metrics are measurable. The bugs are documented in Microsoft's own forums. The UX problems are acknowledged by Microsoft's own researchers. The antitrust case is real. Your experience is one data point. It's not invalid, but it's also not representative.
Saying "I haven't personally experienced these problems in my limited usage" doesn't refute the documented experiences of millions of users, the measured performance benchmarks, or the systematic issues that Microsoft itself acknowledges. It just means you haven't hit them yet, or your use case is simple enough that they don't matter, or you've normalised workarounds as standard practice. And, I haven't even started talking about what happens if you dare to work across multiple organisations.
OK, that's the second article on this that doesn't mention how it works in France. I'll explain, because I see a lot of posts that could be better if their authors understood that the French system isn't the US system.

The French 'prosecutor' role is divided in two. The first is called the 'procureur': they represent the state, but are chosen from among the judges by the executive. The second is the 'juge d'instruction' (investigating judge), who represents the judiciary. They are nominated by the local court without any involvement from the executive. They lead the investigation, they order the raids, they order the arrests, etc., without involvement from the 'procureur'.

The 'procureur' asks a 'juge d'instruction' to lead an investigation into X, Y, or Z (this fucking company name makes everything worse, FFS). The judge then collects evidence, both for and against the procureur's case, and if necessary orders raids and hearings to finalise it. When that's done and all the new evidence is collected (it takes about 2 years on average, but for an international case, like the one involving our ex-president, it can take 10+), the 'juge d'instruction' presents all the gathered evidence to the procureur (who decides whether or not to pursue) _and_ to the accused.

This system exists to prevent, as much as possible, the executive (police and politicians) from using investigations as a scare tactic. Of course the magistrates know each other, and both corruption and influence are possible, and maybe that's the case here, but you ought to know that the raid can't have been ordered at the behest of the procureur/president. We take separation of powers seriously here.
We need a new word: not "local model" but "my own computer's model" - CapEx-based. This distinction is important because some "we support local models" tools have things like Ollama orchestration, or use the llama.cpp libraries to connect to models on the same physical machine. That's not my definition of local. Mine is "local network", so call it the "LAN model" until we come up with something better. "Self-hosted" exists, but that usually connotes "open-weights" rather than putting any cap on the hardware or performance class.

It should be defined as roughly sub-$10k, using Steve Jobs' megapenny unit (one million pennies = $10,000). Essentially, classify things by how many megapennies of spend a machine is that won't OOM on the model. That's what I mean when I say local: running inference for 'free' somewhere on hardware I control that costs at most single-digit thousands of dollars, and that, if I was feeling fancy, could potentially fine-tune on the scale of days. A modern 5090 build-out with a Threadripper, NVMe, and 256GB RAM will run you about $10k +/- $1k. The MLX route is about $6,000 out the door after tax (M3 Ultra, 60-core, with 256GB).

Lastly, it's not just "number of parameters". Not all 32B Q4_K_M models load at the same rate or use the same amount of memory. The internal architecture matters, and active parameter count + quantization is becoming a poorer approximation given the SOTA innovations. What might be needed is a standardized eval benchmark against standardized hardware classes, with basic real-world tasks like tool calling, code generation, and document processing. There are plenty of "good enough" models out there for a large category of everyday tasks; now I want to find out what runs best. Take a Gen 6 ThinkPad P14s/MacBook Pro and a 5090/Mac Studio, run the benchmark, and then we can say something like "time-to-first-token / tokens-per-second / memory-used / total-time-of-test" and rate this independently of how accurate the model was.
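To make that last paragraph concrete, here's a minimal sketch of the timing half of such a benchmark in Python. Assumptions (none of this is from the comment above): an OpenAI-compatible streaming endpoint at localhost:8080, which llama.cpp's server, Ollama, and LM Studio can all expose; the URL, model name, and prompts are placeholders; stream chunks are used as a rough proxy for tokens; and memory measurement is left out (you'd sample the server process separately, e.g. with psutil):

    import json, time, requests

    def bench(prompt, url="http://localhost:8080/v1/chat/completions", model="local"):
        """Measure time-to-first-token and throughput for one streamed completion."""
        payload = {"model": model, "stream": True,
                   "messages": [{"role": "user", "content": prompt}]}
        start = time.perf_counter()
        first = None   # time of first content chunk
        chunks = 0     # roughly one token per chunk on most servers
        with requests.post(url, json=payload, stream=True, timeout=600) as r:
            for line in r.iter_lines():
                if not line.startswith(b"data: "):
                    continue            # skip keep-alives and blank lines
                data = line[len(b"data: "):]
                if data == b"[DONE]":
                    break               # end-of-stream sentinel
                delta = json.loads(data)["choices"][0]["delta"]
                if delta.get("content"):
                    if first is None:
                        first = time.perf_counter()
                    chunks += 1
        end = time.perf_counter()
        return {"ttft_s": round(first - start, 3) if first else None,
                "tok_per_s": round(chunks / (end - first), 1)
                             if first and end > first else None,
                "total_s": round(end - start, 3)}

    if __name__ == "__main__":
        # Stand-ins for the tool-calling / codegen / document-processing tasks:
        for task in ["Write a Python function that parses a CSV line.",
                     "Summarize in one sentence: The quick brown fox jumps over the lazy dog."]:
            print(task[:40], "->", bench(task))

Run the same script against each hardware class (ThinkPad, MacBook Pro, 5090 box, Mac Studio) with the model held constant, and you get the time-to-first-token / tokens-per-second / total-time numbers independently of any accuracy scoring.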
> It's really easy to switch between models. The different models have some differences that you notice over time but the techniques you learn in one place aren't going to lock you into a provider anywhere.

We have two cell phone platform providers. Google is removing the ability to install binaries, and the other one never allowed that freedom in the first place. All computing is taxed, and defaults are set to the incumbent monopolies. Search, even for your own trademarks, is a forced bidding war. Businesses have to shed customer relationships, get poached on brand relationships, and jump through hoops week after week. The FTC/DOJ do nothing, and the EU hasn't done much either.

I can't even imagine what this will be like for engineering once these tools become necessary to do our jobs. We've been spoiled by not needing many tools - other industries, like medical or industrial research, tie employment to a physical location and a set of expensive industrial tools. You lose your job, you have to physically move, possibly to another state. What happens when Anthropic and OpenAI ban you? Or decide to only sell to industry?

This is just the start - we're going to become more dependent on these tools to the point that we're serfs. We might end up with two choices, and that's demonstrably (given the current incumbents) not a good world. Computing is quickly becoming a non-local phenomenon. Google and the platforms broke the dream of the open web. We're about to witness the death of the personal computer if we don't do something about it.