> There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.

"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

It turns out that when you make a fancy, sophisticated system that users cannot possibly understand, then as long as it isn't obviously crap, people will begin to implicitly trust it, as if it has supernatural powers. You actively have to do work to make sure people don't over-trust these things. Even Tesla has gotten a lot of flak just for using the term "Autopilot" for its non-autonomous driving assistance, which I find particularly interesting because it's literally named after a system that doesn't provide fully autonomous control (an aircraft autopilot).

So when Clearview sells its "AI technology" to law enforcement, I think it is a problem when it markets it like this[1]:

> - Leading facial recognition technology, excelling even in challenging photographic conditions, tested by NIST.
> - Trained on the largest and most diverse dataset and relied on by law enforcement in high-stakes scenarios.
> Clearview AI’s highly accurate facial recognition platform is protecting our families,

Not to mention all of the success stories where Clearview instantly found the criminal! At no point do they even dig into how likely you are to hit false positives. The only implication that Clearview can even possibly give you an incorrect result is the idea that it is supposed to be used to generate "leads", but they don't bother going into any more detail.
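And false-positive rates are the whole ballgame here. A back-of-the-envelope sketch (with entirely made-up numbers, since Clearview publishes no error rates) shows why even a tiny per-comparison error rate is a big deal when you search a gallery of tens of millions of faces:

```python
# Base-rate sketch. All numbers below are hypothetical assumptions,
# NOT Clearview's actual figures (they don't publish any).

database_size = 30_000_000   # assumed size of the face gallery searched
false_positive_rate = 1e-6   # assumed per-comparison false-match rate

# Expected number of innocent people the search "matches" anyway:
expected_false_matches = database_size * false_positive_rate
print(expected_false_matches)  # 30.0

# Probability that at least one innocent person comes back as a "match":
p_some_false_hit = 1 - (1 - false_positive_rate) ** database_size
print(p_some_false_hit)  # ~1.0 (essentially certain)
```

So even under a charitable one-in-a-million assumption, a search of a big gallery is all but guaranteed to cough up innocent "matches", and whether a given "lead" is the real person depends entirely on base rates the tool never mentions.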
Sure, it's marketing material, so you expect them to embellish it. But embellishing your marketing material is fine for a programming tool used to work on CRUD apps, and very much not fine for a tool that has a very high probability of being used to put people in jail (even if it shouldn't be). Beyond the negative PR it brings, Clearview has almost no incentive to actually prevent law enforcement from "misusing" the system by jailing random leads, so long as they've covered their liability risks with a tiny 4pt-font disclaimer. But hey, why would they care? There's someone else to blame...

I'm genuinely offended by your reply. Don't get me wrong, it's not that I think the "human-in-the-loop" should have no accountability. I just think it's insane to expect that some guy who was probably just doing his job, who has no fucking clue how AI works or what its limitations are (because that's not part of his job), and who has been force-fed all of this crap about how great these AI systems are with clearly not enough counterbalancing to show their limitations, is supposed to be the person who determines that the AI system might be wrong. What the fuck is this? What about Clearview, who is selling this shit? What about the managers who bought these systems and almost certainly set the policies for their usage?

Just adding a short disclaimer about the limitations of AI systems is not going to cut it. If these systems are as intelligent as claimed, they should do a better job of conveying their own limitations. Instead, what we get is so ridiculous that it is literally impossible for it not to go wrong. Take the Peppermill Casino incident[2], where police arrested a man based on a private security guard merely claiming that their AI facial recognition software had a "100% match".
It is hard to describe the immense anger I am holding back while typing this, but under no circumstances should ANY AI facial recognition software EVER output a response of any kind that implies "100% certainty". I don't give a flying fuck what disclaimers are put around it or what confidence intervals it provides. If a highly sophisticated machine gives you back "100% facial match" and you are a layperson who has no idea how AI systems work and just thinks they're magical black boxes, this is highly likely to short-circuit every "this might just be a fluke" instinct you have.

What's truly astonishing is that false arrests due to AI facial recognition have only happened a handful of times so far, yet even multiple widely publicized cases haven't stopped more from happening. So clearly, we need more and louder articles calling out these cases and specifically highlighting the pivotal role that AI services and their marketing played in them, not downplaying them, making excuses on their behalf, and putting the responsibility elsewhere.

[1]: https://www.clearview.ai/

[2]: https://www.casino.org/news/video-peppermill-casino-facial-recognition-wrongful-arrest-bodycam-footage-released/