I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity; I just don't think the likelihood is as high as he thinks it is. The problem with trying to stop it is: how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow the technology's progress, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would simply bring development under their direct protection, Manhattan Project-style. Even if you killed literally every AI engineer on the planet, that would most likely delay the technology by a decade or so rather than prevent it. The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build it, they will be outcompeted by actors who do. No key actor believes it has the luxury of abstaining, even if it wanted to. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want, but at the end of the day, having nuclear weapons is vital to sovereignty. If you don't have them, you will always be in danger of becoming the prison bitch of countries that do. AI seems to be growing toward a similar position in the calculus of states' national security. I can think of no example in history of the entire world forsaking the development of a technology because it seemed too dangerous. The same psychological logic always applies.
Optimists would argue that the answer to bad actors using AI is good actors using AI. I think this adds marginal costs that eat into the efficiency benefits of digitisation, and instead we will simply see certain things de-digitise.

An example is interviewing and jobs. A fully digitised recruitment pipeline - Zoom calls, CVs, GitHub profiles - is too easy to defraud. I remember even before the pandemic, any kind of remote role would attract hundreds of applications of dubious quality. The most likely outcome, as I see it, is that companies will simply demand in-person interviews. Probably only at the final stage, but they will want that in-person verification. This was a domain we digitised to make efficient: lower friction, more accessible, more standardised. It worked until it invited fraud. Detecting fraud is complex and expensive, so it's easier just to reverse some of the digitisation.

Another example is education. Universities teach using AI-generated scripts and they grade AI-generated essays. Students are under too much financial pressure to pay for slop, and institutions that are subject to fraud will end up trashing their reputation. So the opportunity emerges for more competitive universities to differentiate themselves by assessing on blue-book exams. Students won't like it, but they'll respond to incentives when employers filter out anyone with a highly digitised degree.

Ultimately I suspect AI will, to some extent, corrode the value of digital information simply by producing distrust. In some ways "slop" is not actually a new problem: people have been generating spam, scams, and algorithm-bait for many years. But the volume of fraud, and the cost of suffering it, used to be acceptable. That may have changed.
Electricians. Top-skilled folks, for the most part, who can do industrial-level conduit work and operate switching gear and control systems. There is enough ongoing work at these huge facilities to effectively employ a half dozen full-time contractors or more. One of the facilities I work in most has electrical contractors on-site every single day, with at least a few trucks in the parking lot. These are local union guys. Always something breaking, needing maintenance, or a new area of the facility being refreshed. The facility is over a decade old and the work has never slowed down.

Plumbers. Cooling these facilities takes vast amounts of plumbing work, and it's typically some of the highest-skilled plumbing needed outside of refinery and other manufacturing plant work. When you have 50 giant chillers running 24x7, at least one is undergoing some form of maintenance at any given time.

Probably overlapping with the above, but HVAC technicians. Again, the scale of these facilities means constant work is available, since you are operating miniature city-sized installations.

Security guards, of course. Not really material, though. I've noticed more armed guards than before, with at least two on duty 24x7. As these places get more controversial, I imagine this sort of staffing will increase.

On-site (IT) technicians. For facilities this size, you will be staffing 24x7 with a crew large enough to get basic refresh projects done. Hard to estimate, but think dozens of full-time workers for these giant projects - folks who can pull cable, troubleshoot basic hardware, swap drives and bad RAM sticks, etc. For the larger refresh projects, contractors typically get flown in during a surge, so on-site staffing stays relatively minimal, but very few facilities are operating "lights out".
Then you have facility management - highly skilled positions who know how to operate all the electrical, mechanical, and cooling equipment during emergencies. Every facility I've worked in is staffed by a crew of around half a dozen of these folks, with the top-tier subject matter experts flown in during critical emergencies. These are the guys generally coordinating all the contract labor above. Probably a couple of mid-tier network engineers and higher-skilled sysadmin types as well, depending on who is operating it. Everyone loves to pretend these are highly automated, copy/paste facilities that hyperscalers execute perfectly - but there is a lot of "dirty" hands-on work to be done, since that stuff is not nearly as polished as advertised and often requires hands-on problem solving and on-the-spot hacks to get things going. As anywhere, how the sausage gets made is a lot uglier than the marketing. Once you get outside the highly competent hyperscalers, the above numbers go way up. Enterprise datacenter operators are going to need far more on-site labor, simply because they are not great at this sort of work. The stories I hear from some current builds are rather humorous in how many people it's taking to get things working - basically solving via manual processes what should be automated. It's not a lot of jobs, but for these huge hundreds-of-megawatts facilities, the low end is probably 100+ FTE-equivalent laborers after construction is completed. Everyone but security and the basic "remote hands" employees would be in the $100k+ salary range.
I guess their point is that of all possible industrial use cases, data centers are the least obnoxious. I live in one of the countries that actually manufactures things, unlike the US, and I find it hard to argue with that. Any noise pollution caused by data centers is far, far less than most industrial setups. It's the same with every other resource - water, electricity, effect on local shared infrastructure like roads and commerce, etc. Other industries are an order of magnitude worse. Given that you _have_ to have some industrial setup unless you want to import everything (tokens, in this case), datacenters are far and away the best choice. I'll add a qualifier to the above: of all industrial setups generating at least X dollars of economic value, datacenters are far and away the best in terms of impact on the neighborhood. The jobs argument also falls apart when you consider that it's essentially 100 jobs in return for just an office building's worth of space. If you want a thousand-job plant, just build that as well next town over - it will take way, way more space and other resources, though. The reason that didn't happen even before this datacenter boom is that most manufacturing setups are fairly infeasible in rich countries like the US. I can't imagine the response to a textile plant or a steel plant if this is the response to datacenters. I agree, however, that if you colocate a gigantic power plant, then you get the worst of both worlds: fewer jobs and the hindrance of a big power plant near residential areas. Grid expansion being slow in developed areas like most of the US is not surprising, though. But this is pretty much the best-case scenario. Tolerating the power plant until the grid expands is the way to go, I suppose.
I'm a head of security - great career, went from engineering into management, made a tidy living doing advanced work as a risk plumber across companies that have been relevant. I've built great teams, handled and solved hard IR, delved into the real reaches of vuln research and other neckbeard things, and got paid very well along the way. Seen and worked on the APT issues. More or less, I am the attractive resume, and: the game has changed, folks. For what it is worth, I am taking my ball and going home in about 12 months. I've saved enough, locked in a perma-middle-class lifestyle in a great nondescript city, and am swapping over to offensive consulting and an AI-free, non-tech trade that won't take too long to get into - think a PA, nurse, plumber, etc. I'm not quite old enough, or free enough of responsibilities, to FIRE, but I can read the writing on the wall well enough to understand that an AI-proof FI needs to be locked in before everyone else realizes the same. Many others in sec are feeling this. I think tech will find security pros willing to throw themselves into the fray for pay and optimism. There are others like me who are stashing away their final nuts. There are others who have golden-handcuffed themselves into this ride with their mortgages and private school tuitions. And I'm sure some others will stick it out. There will also be an AI-enabled version of sec eng soon enough. But if the private sector doesn't wake up to AI integrations - internal doc rollouts hoovering up PII that wasn't supposed to be stored there, externally-facing customer support portals social-engineered and pivoted into, PRs via Slack comment from marketing hires who are ATO'd - this is going to be a 1990s-style BBQ where 0days on critical systems are dropped at happy hours at conferences nightly. And: your security teams are going to be burned out, banking up, and quitting.
The risk acceptances, the double-speak, the slow-rolling, the half-baked risk thinking from engineering and product leads, the corners cut, the public endpoints opened up just this one time - that's going to be enough rope, and already is enough, to hang yourself in this offensive context that's building now. It is deeply humorous that SWE and engineering leadership has worked itself into this position via its AI push to unemploy itself, while thinking it's the one white-collar job exempt from automation threats. All it'll take is another recession like '08, and the leaves finally get shaken off the trees. Thankfully there is only one (wait, there are two probably), thankfully there are only two-to-three (wait, there are like 10) systemic market threats right now.
If I can play devil's advocate in favor of public disinterest in these events: I think you can argue that cybersecurity doesn't really matter in the grand scheme of things - at least data exfiltration. What would the consequences for humanity be if every single electronic patient record were leaked onto the internet? Immediately hugely bad for some groups, unfortunately. After a good deal of embarrassment and drama, however - some of it severe - perhaps the net effect is positive. It would most likely facilitate a lot of scientific inquiry. A lot of people, especially in medical deserts, also use ChatGPT as an MD. Providing AI companies with high-quality medical data is actually a public service. So it goes for many things in life, and except for financial attacks and destructive wipe attacks, data security is mostly about protecting the IP of incumbents, which is somewhere between irrelevant and a net negative. It's hard to say what the long-term consequences of the IP system breaking down would be, but there is a good argument to be made that it's not necessarily bad. As for individual people, most don't really care, or are resigned to the fact that Google already knows everything about them, and would probably abstractly enjoy a major company being brought down to their reality. Plenty of societies have extremely collectivistic mindsets about public info being shared - like the Scandinavian countries with their public tax filings - and they work just fine. I think most people would secretly relish the outcomes of everything leaking everywhere, just as people relish the Epstein files being released, and probably would have loved an unredacted version being leaked. Digging up and sharing secrets is something human beings naturally gravitate toward, and for good, sensible reasons: evolution has simply favored groups that did not hoard knowledge, at least not internally.
There is a reason the scientific method has openness as a virtue, and is arguably one of the pillars that has carried humanity out of the dark ages.