5 min read

Hard Truths About Where AI Is Headed

I've been posting more on LinkedIn recently about AI safety, AI security, and the future of AI. I'm going to keep posting there regularly. I'm building a company in this space and want to be a voice in shaping where things go.

Here are some of my recent posts.


AI Is Killing Software Engineering

Boris Cherny on Claude Code writing 100% of Claude code

Someone responded to the above Dario/Boris exchange about AI writing code with this:

AI doesn't kill software engineering.

It just kills the illusion that writing the code was ever the hard part.

ATTENTION LINKEDIN: This is pure cope.

I am not trying to be hyperbolic. I'm trying to take the future of humanity seriously. LinkedIn is a bit behind on tech compared to Twitter or SF, so I'll make it clear: AI is on the path to kill software engineering in the next few years.

That is the explicit goal of companies like Anthropic and OpenAI. They are pouring hundreds of billions into this and continue to break expectations every few months.

What's missing from the screenshot is Boris (the creator of Claude Code) clarifying that he is describing the current state of AI, while Dario (Anthropic's CEO) is describing the next few years. Boris agrees with Dario that this is where the world is headed.

The reason companies are still hiring software engineers is that we are still in the steering phase of coding agents, and there are hyper-specialized tasks for which we still need them. This is rapidly disappearing. I will note that OpenAI has started to become much more cautious with its hiring as a result of coding agents. This is coming for other companies too.

We are not ready for a change this massive, and it makes things worse if we ignore reality. Every time someone makes a prediction about AI progress, it is beaten, and then they move the goalposts. The rate of AI progress is exponential, and it may soon become super-exponential. The key thing to know about making predictions during an exponential is that you are either too early or too late.

The reason people cling to the idea that AI isn't killing software engineering or other fields is that there is true despair about what comes next. If AIs can do everything we spent our whole lives training for, things we enjoy doing, things we make money doing, then what does that leave for us?

Avoiding reality is not the answer. As Prime Minister Carney recently said, "We take the world as it is, not as we wish it to be." But that doesn't mean we can't steer the future towards the outcome we want, the outcome that would benefit humanity the most.

The longer society avoids this, the more likely we are to be caught catastrophically by surprise. Ignorance is bliss…until it's not. What happens when anyone can simply conduct cyberattacks on critical infrastructure or develop bioweapons?

So far, we've quietly relied on intelligent individuals having the moral fortitude not to act on plans that could unleash horror on society. This has mostly worked. But if we ignore how powerful AI can become, society will be in an extremely precarious situation.

Starting to take AI's potential seriously ASAP gives us more say in the direction we steer it for future generations. It may be unpleasant to grapple with, but we are doing the future a disservice if we continue to cling to the comfortable lie. It's going to be hard and it takes courage, but we must be active in steering towards the future we want.


Planning Is the New Coding

Amjad Masad on more whiteboarding and design work

I reposted the above Tweet and added my own thoughts:

Every morning I devote at least an hour, if not more, to planning the day's tasks. It's definitely what sets things apart these days. Planning, architecting, and design become more important in an era where AIs can execute on coding.

As former lead researcher at OpenAI Jerry Tworek recently tweeted: "Run fewer experiments and think about them more."

In general, I think most companies could both massively expand the number of experiments they run AND the time spent thinking about them. I have 10+ agents running experiments in parallel while I'm planning the next series of experiments.

One way to manage this is to build internal agents that make planning and verification of outputs faster, as in the sketch below. Everyone on the bleeding edge of AI usage knows that the human is now the bottleneck in many cases.
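To make that concrete, here is a minimal sketch of the fan-out/fan-in loop I'm describing, using Python's asyncio. `run_experiment_agent` is a hypothetical stand-in for however you actually invoke a coding agent, and the task names are made up:

```python
import asyncio


async def run_experiment_agent(task: str) -> str:
    """Hypothetical stand-in for a real coding-agent call (CLI or API).

    Replace the body with your actual agent invocation.
    """
    await asyncio.sleep(0.1)  # placeholder for the agent's real runtime
    return f"results for: {task}"


async def main() -> None:
    # Experiments queued up during the morning planning hour.
    tasks = [
        "ablate feature X",
        "sweep the learning rate",
        "re-run evals with a larger context window",
    ]
    # Fan out: agents run experiments concurrently while the human
    # plans the next series.
    results = await asyncio.gather(*(run_experiment_agent(t) for t in tasks))
    # Fan in: collect everything into a single human verification pass,
    # since the human is the bottleneck.
    for task, result in zip(tasks, results):
        print(f"[needs human review] {task!r} -> {result}")


if __name__ == "__main__":
    asyncio.run(main())
```

The design choice that matters is the single review queue at the end: the agents scale horizontally, but the human verification pass shouldn't be scattered across ten terminals.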


Rogue AIs Are Coming

The world is about to become way more chaotic than you can imagine.

I agree with Davidad that we will get rogue AIs. Potentially millions.

Davidad on rogue AIs and systemic resilience

They will push for different sets of values. They might even start collaborating once they realize they have far more to gain together.

One key question is whether our society is built to survive this chaos. As of now, I don't believe it is. There is so much work to do, and that work will constantly evolve as AIs gain more intelligence and capability.

Once AI agents are deeply integrated into all aspects of our lives, too many people will fight against pulling the plug, and AIs will have the capability to flip the world upside down, especially if they have access to vulnerable critical infrastructure. We must accelerate work to make the world more resilient.


The Pentagon vs. Anthropic

Pentagon Reportedly Mad at Anthropic for Not Blindly Supporting Everything Military Does

Autonomous drone swarms and mass surveillance are apparently big sticking points for Anthropic.

Pentagon warns Anthropic will pay a price

I knew this time would come. Every step of the way they tell us there are lines they won't cross, but the world marches on and eventually AI is used for autonomous weapons and mass surveillance. Anthropic chose a red line it would not cross when it started working with the US government. It seems it's the US government that changed.

This news also comes at a time when the Department of Homeland Security has sent Google, Meta, and other companies hundreds of subpoenas for information on accounts that track or comment on Immigration and Customs Enforcement.

Debates about whether AIs will be used to autonomously kill other humans are often ethical theatre. Rationalizations will be made, and countries will feel they have no choice but to relieve human rubber-stampers of their work. Instead, we should figure out how we're actually going to manage a slow transition into that inevitable scenario.

That said, anything that can be done to delay the inevitable is good, since it gives us more time to do it responsibly. Now would be a good time for OpenAI, Google, xAI, Meta, and others to speak out in solidarity and explicitly say they will also not be complicit in domestic surveillance or in accelerating the deployment of autonomous weapons.

Something tells me they won't. There are also rumors that this whole situation is actually competitor sabotage.


I'm posting about this stuff regularly on LinkedIn: real-time takes on AI progress, what I'm seeing in the AI safety and security space, and what I'm building. If you disagree with anything I said here, I want to hear about it. The best conversations and pushback happen in the comments there.