Honestly, the author is spot on about the normalisation problem. I've watched this play out at multiple organisations: you implement TLS inspection, spend ages getting certs deployed, and within six months `curl -k` is in half your runbooks because "it's just the corporate proxy again". He's also right about the architectural problems: single points of failure, performance bottlenecks, and the complexity it adds in cloud-native environments.

That said, it can be a genuinely valuable layer in your security arsenal when done properly. I've seen it catch real threats: malware C2 comms, credential phishing, data exfiltration attempts. These aren't theoretical; they happen daily. Combined with decent threat intelligence feeds and behavioural analytics, it provides visibility that's hard to replicate elsewhere.

But, and this is a massive but, you can't half-arse it. If you're going to do TLS inspection, you need to actually commit:

- Treat that internal CA like it's the crown jewels. HSMs, strict access controls, proper rotation schedules, a full chain of trust, and sensible certificate lifetimes. The point about concentrated risk is bang on: you've turned thousands of distributed CA keys into one single target, so act like it. Run it like a proper CA, with key-signing ceremonies and all the other safeguards.

- Actually invest in proper cert distribution. Configuration management (Ansible/Salt/whatever), golden container base images with the CA bundle baked in, MDM for endpoints, cloud-init for VMs. If you can't reliably push a cert bundle to your entire estate, you've got bigger problems than TLS inspection.

- Train people properly on which errors are expected vs "drop everything and call security". Document the exceptions. Make reporting easy. Actually investigate when someone raises a TLS error they don't recognise. For devs, it needs to just work without them even thinking about it; then they never need to work around it. If they do, the system is busted.

- Scope it ruthlessly. Not everything needs to go through the proxy. Developer workstations with proper EDR? Maybe exclude them. Production services with cert pinning? Route direct. Every blanket "intercept everything" policy I've seen has been a disaster, particularly for end users doing personal banking, medical stuff, or therapy sessions; do you really want IT/Sec seeing that?

- Use it alongside modern defences: EDR, Zero Trust, behavioural analytics, CASB. It should be one layer in defence-in-depth, not your entire security strategy.

- Build observability. You need metrics on what's being inspected, what's bypassing, failure rates, and performance impact. If you can't measure it, you can't manage it.

But yeah, the core criticism still stands: even done well, it's a massive operational burden and it actively undermines trust in TLS. The failure modes are particularly insidious because you're training people to ignore the very warnings that are meant to protect them.

The real question isn't "TLS inspection: yes or no?" It's "do we have the organisational maturity, resources, and commitment to do this properly?" If you're not in a regulated industry, or don't have dedicated security teams and mature infrastructure practices, just don't bother. But if you must do it, and plenty of organisations genuinely must, then do it properly or don't do it at all.
I always wonder why people bother providing source under a source-available license. It makes outside contributions a lot less likely: your active community of people working on the code base effectively becomes your employees. There's little to no benefit to outside users. Any work they do on the code is effectively free work they do for you that entitles them to nothing, including free usage and distribution of the work they did. It's not likely to be helpful.

My attitude to source-available products is the same as to proprietary products: I tend to limit my dependency on them. Companies have short life spans. Many OSS projects I use have a history of surviving the implosion of companies that once actively contributed to them. Developer communities are much more resilient than companies. Source available effectively becomes source unavailable when companies fail. VC-funded companies especially are kind of engineered (by VCs) to fail fast. So it's just not a great basis for a long-term commitment.

If something like Bun (recently acquired by Anthropic) becomes orphaned, we'd still have the git source code and a permissive license. Somebody could take over the project, fork it, or even create a new company around it, and some of the original developers would probably show up. A project like that is resilient against being orphaned. And projects like that have active contributors outside of the corporate context who provide lots of contributions, because of the license. You don't get that without a good OSS license.

I judge software projects by the quality of their development communities. A project needs diversity, a good mix of people who know what they are doing, and a broad enough user community that it is likely to be supported in perpetuity. Shared source provides only the illusion of that. Depending on it is risky, and that risk is rarely offset by quality.

Of course people use proprietary software for some things, and that's fine; I'm no different. But most of the stuff I care about is OSS.
You have made two major mistakes, which is common among Rust proponents, and which is disconcerting, because some of those mistakes can help lead to memory safety bugs in real-life Rust programs. Ironically, by spreading and propagating these misconceptions, Rust proponents cause Rust developers to mislearn aspects of the language, which in turn makes Rust less safe and secure than it otherwise would have been.

1: The code that has to be checked because of unsafe blocks is not limited to just those blocks. In some cases, the whole module that a tiny unsafe block resides in has to be checked, because safe code in the same module can break the invariants the unsafe code relies on. The Rustonomicon makes this clear. Did you read and understand the Rustonomicon?

2: Unsafe Rust is significantly harder than C and C++. This is extra unfortunate for the cases where Rust is used for the more difficult code, like some performance-oriented code, compounding the difficulty; unsafe Rust is sometimes used for the sake of performance as well. Large parts of the Rust community and several blog posts agree with this. Rust proponents, conversely, often try to argue the opposite, thus lulling Rust developers into believing that unsafe Rust is fine, and thus ironically making unsafe Rust less safe and secure.

Maybe Rust proponents should spend more time making unsafe Rust easier and more ergonomic, instead of effectively undermining safety and security. Unless the strategy is to trick as many other people as possible into using Rust and then hope those people fix the issues for you; a strategy commonly used for some open source projects, though not normally for languages.
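To make point 1 concrete, here is a minimal sketch in the spirit of the Rustonomicon's Vec discussion (my own invented example, not code taken from it). The only unsafe block lives in get(), but its soundness depends on an invariant that any safe function in the same module can silently break, so a review has to cover the whole module:

    mod tiny_vec {
        pub struct TinyVec {
            buf: Vec<u8>,
            len: usize, // invariant: len <= buf.len()
        }

        impl TinyVec {
            pub fn new(capacity: usize) -> Self {
                TinyVec { buf: vec![0; capacity], len: 0 }
            }

            /// The only unsafe block in the module.
            pub fn get(&self, i: usize) -> Option<u8> {
                if i < self.len {
                    // SAFETY: relies on len <= buf.len() holding everywhere.
                    Some(unsafe { *self.buf.get_unchecked(i) })
                } else {
                    None
                }
            }

            /// 100% "safe" Rust, yet it can break the invariant and make
            /// the unsafe block above read out of bounds.
            pub fn set_len_badly(&mut self, new_len: usize) {
                self.len = new_len; // no check against buf.len()
            }
        }
    }

Privacy is what bounds the audit: because `len` is a private field, only code inside this module can violate the invariant, which is exactly why the module, not the block, is the unit that has to be checked.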
Unless the rules have changed, Rust is only allowed a minor role in Chrome.

> Based on our research, we landed on two outcomes for Chromium.

> We will support interop in only a single direction, from C++ to Rust, for now. Chromium is written in C++, and the majority of stack frames are in C++ code, right from main() until exit(), which is why we chose this direction. By limiting interop to a single direction, we control the shape of the dependency tree. Rust can not depend on C++ so it cannot know about C++ types and functions, except through dependency injection. In this way, Rust can not land in arbitrary C++ code, only in functions passed through the API from C++.

> We will only support third-party libraries for now. Third-party libraries are written as standalone components, they don’t hold implicit knowledge about the implementation of Chromium. This means they have APIs that are simpler and focused on their single task. Or, put another way, they typically have a narrow interface, without complex pointer graphs and shared ownership. We will be reviewing libraries that we bring in for C++ use to ensure they fit this expectation.

-- https://security.googleblog.com/2023/01/supporting-use-of-rust-in-chromium.html

Also, even though Rust would be a safer alternative to C and C++ for the Android NDK, that isn't part of the roadmap, nor does the Android team provide any support to those who go down that route. They only see Rust for Android internals, not for app developers. If anything, they seem more likely to eventually support Kotlin Native for such cases than Rust.
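To illustrate what that single-direction, dependency-injection style of interop looks like, here is a rough sketch (my own invented example, not Chromium's actual API). The Rust side only sees C++ through an opaque context pointer and a function pointer handed across the C ABI, so Rust never names a C++ type or calls into arbitrary C++ code:

    // Callback type the C++ caller injects; Rust treats the C++ object
    // behind `ctx` as completely opaque.
    type LogFn = unsafe extern "C" fn(ctx: *mut core::ffi::c_void, msg: *const u8, len: usize);

    // Exposed to C++ over the C ABI (a real build would also give it an
    // unmangled symbol name via a no_mangle-style attribute).
    pub extern "C" fn validate_utf8_and_log(
        data: *const u8,
        len: usize,
        log: LogFn,
        log_ctx: *mut core::ffi::c_void,
    ) -> bool {
        // SAFETY: the C++ caller promises `data` points to `len` readable bytes.
        let bytes = unsafe { core::slice::from_raw_parts(data, len) };
        match core::str::from_utf8(bytes) {
            Ok(s) => {
                // The only way back into C++ is the injected function pointer,
                // so the dependency direction stays C++ -> Rust.
                unsafe { log(log_ctx, s.as_ptr(), s.len()) };
                true
            }
            Err(_) => false,
        }
    }

The narrow, callback-shaped surface is the point: the Rust library stays a standalone component, and C++ decides exactly where Rust code is allowed to run.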