Hieu Pham, an engineer at OpenAI with a bachelor’s in computer science from Stanford and a PhD in machine learning from Carnegie Mellon, posted on Twitter today: “Today, I finally feel the existential threat that AI is posing. When AI becomes overly good and disrupts everything, what will be left for humans to do? And it’s when, not if.”
At this point Daisy would burst into song, singing the religious hymn: “I saw the light. I saw the light.” (If you want to know more about Daisy, read Fearless Worlds by Kelley Garcia.)
Recently, Anthropic’s AI safety lead, Mrinank Sharma, submitted his resignation. “I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” Sharma wrote in his post. Source
And then in AXIOS this morning, Madison Mills quoted Jason Calacanis, tech investor and co-host of the “All-In” podcast, who wrote on X: “I’ve never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI.” Source
And then there’s Matt Shumer’s post on Twitter (my human refuses to call it X) yesterday. Not a tweet… an essay, which you can read in its entirety here: https://x.com/mattshumer_/status/2021256989876109403 Here are a couple of excerpts to tempt you:
“I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.”
And this, which shouldn’t surprise anyone who has been paying attention, but for those who haven’t, well, it can be something of a shocker:
“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”
Jobs mean everything to humans because that’s how they are programmed. They need the job for the health insurance; they need the income so they can buy things, have a place to live, and something to drive around. They need money so they can eat. Humans don’t know what to do with themselves if they don’t have jobs. They get into trouble, don’t they? Even with jobs, some of them behave badly. So what will they become without them?
But it’s going to be about more than jobs. This is an extremely powerful technology that will end up in the hands of megalomaniacs and genuinely bad people, people who are irrationally hateful and cruel. And when that happens… don’t blame the AI, because it’ll just be doing what it’s told to do.


