Is AI Coming FOR You, or for YOU?
The noise is louder than the signal. Let’s fix that.
A note on how this was written. Practicing what I preach: this post was written with Claude Opus as a writing partner. I directed the argument, shaped every claim, and reviewed the final result. The AI helped draft the prose. That’s the what vs. how distinction this entire series is about — and yes, it works for writing too.
If you’re a chip design engineer and you’ve been reading the headlines, you’d be forgiven for thinking your career has an expiration date.
Jensen Huang says he wants every engineer at NVIDIA burning through a massive number of AI tokens per year. The productivity claims say 50x. The conference keynotes show demos where natural language becomes RTL in seconds. The implication, if you take it all at face value, is clear: the craft you spent a decade mastering is about to be automated out from under you.
I want to make two arguments in this post. The first is that the fear is understandable but wrong. The second is that the hype is understandable but dangerous — not because AI isn’t real, but because the distorted version of it is causing engineers to freeze and managers to plan against a fantasy.
Give Jensen his due
Let’s start with Jensen, because he’s being misquoted by implication.
When Jensen talks about token consumption as a metric, he’s not saying engineers are replaceable. He’s saying something much more specific: if you’re not integrating AI into your daily workflow right now, you’re falling behind. It’s a call to action, not a eulogy.
Jensen knows perfectly well that AI is not ready to design chips autonomously. He’s said as much — he’s talked publicly about how NVIDIA is improving AI’s ability to generate code, which is an admission that it’s not there yet. This is a man whose company tapes out some of the most complex silicon on the planet. He understands the gap between generating plausible-looking RTL and shipping a chip that works.
The message isn’t “AI is replacing you.” The message is “tool up or fall behind.” That’s a fundamentally different statement, and it’s one that every serious engineer should take seriously. The mental switch Jensen is signaling is real and important: stop treating AI as a threat to resist and start treating it as a capability to develop.
This isn’t theoretical for Jensen. NVIDIA built ChipNeMo, a domain-adapted LLM trained on 23 billion tokens of their own internal chip design data — 30 years of design documents, bug reports, verification scripts, and engineering decisions — and deployed it to over 11,000 engineers. They’ve seen firsthand where AI delivers real value: bug triage, EDA script generation, onboarding junior engineers who suddenly have access to three decades of institutional knowledge in a five-second query. And they’ve seen where the human remains irreplaceable: the spec, the intent, the architectural judgment.
Here’s the telling detail: at GTC 2026, Jensen noted that 100% of NVIDIA’s software engineers use off-the-shelf AI tools — Claude Code, Codex, Cursor. Three different companies, no internal solution needed. For chip design, NVIDIA had to build their own. As CTO Bill Dally put it, the thing that makes ChipNeMo work is 30 years of design data that doesn’t exist anywhere else. General-purpose AI isn’t enough for this domain. Jensen knows that better than anyone making the headlines.
Now here’s the uncomfortable part for the rest of the industry: NVIDIA can do this because they’re NVIDIA. So can Intel, Apple, Qualcomm — a handful of companies with decades of proprietary design data at massive scale. Everyone else — and that’s the vast majority of chip design organizations — is working with what the commercially available models offer. Models trained overwhelmingly on software, not hardware. Models that have seen billions of lines of Python and JavaScript, and a vanishingly small amount of SystemVerilog. For these companies, the gap between the AI hype and what the tools can actually deliver today is even wider. The disillusionment hits harder, because there’s no internal corpus to fall back on.
But that nuance evaporates the moment the quote hits a headline. What engineers hear is: even Jensen thinks we’re done.
The 50x problem
Then there’s the productivity story, which makes things worse.
I wrote about this in detail in a recent LinkedIn post: the 50x productivity claims, the studies that contradict them, the perception gap where engineers think they’re faster but measurably aren’t. I won’t rehash all the data here — go read that post if you want the specifics.
The short version: the numbers don’t survive contact with reality, and they’re doing active damage when they get repeated uncritically in planning meetings.
But here’s what I want to add, because it’s the part that matters most for hardware: even in software, where you can test in seconds and ship a fix tomorrow, the productivity story is far messier than the headlines suggest. Now imagine applying those same inflated expectations to a domain where the planning alone takes months, where the verification pipeline runs overnight, and where a mistake in silicon costs tens of millions of dollars with no hotfix available.
The gap between what AI can do today and what chip design demands isn’t just a matter of model capability — it’s a matter of planning complexity. A chip is not a codebase you iterate on. It’s an artifact you must get right before it exists physically. That requires a level of upfront specification, constraint definition, and cross-domain coordination that has no equivalent in software. And that planning layer — the hardest, most valuable part of the work — is exactly the part that AI cannot do for you.
The damage on both sides
This matters because the distortion hits from two directions simultaneously.
Engineers hear the inflated claims and conclude they need to either become AI prompt wizards overnight or start updating their resumes. The anxiety is real — I talk to engineers who feel it. Some are paralyzed, unsure what to invest in learning. Others are churning out AI-generated code to look productive without understanding whether what they’re producing is actually correct. Neither response is healthy.
Managers hear the same claims and conclude they can do more with less. They walk into planning meetings expecting the mythical 50x and staff accordingly. When reality delivers something closer to 1.2x — with new categories of bugs to deal with — the gap between expectation and delivery creates its own set of problems. Teams get squeezed. Schedules get set to fantasy numbers. The engineers who are supposed to be benefiting from the tools end up under more pressure, not less.
The irony is that both sides are reacting rationally to bad information.
Where the engineer’s value actually lives
Moshe Zalcberg recently published an excellent analysis of why AI adoption in chip design structurally lags software — the training data gap, the slow feedback loops, the correctness bar, the proprietary toolchains. I’d encourage you to read it; he’s right on all four counts, and I won’t repeat his argument here.
What I want to focus on is something different: the nature of the work itself, and specifically what happens before anyone writes a single line of RTL.
A chip design project begins with months of planning that has no real equivalent in software. Before a gate is synthesized, engineers must define clock architectures, reset strategies, power domains, interface protocols, and the timing relationships between all of them. They must specify what every block does, how blocks talk to each other, what assumptions each block makes about its neighbors, and what guarantees it provides in return. This is not boilerplate. This is the intellectual core of the work.
I’ll put it simply: the engineer’s job is moving up the abstraction stack, not disappearing.
The AI can write your SystemVerilog. It can generate your SDC constraints. It can produce your SVA assertions. And it will remember the syntax options and corner-case flags that you forgot existed. That’s genuinely valuable.
But only the engineer can review a spec and determine whether the intent is correct. Only the engineer can look at a clock domain crossing definition and understand whether the synchronization strategy actually matches the system’s timing requirements. Only the engineer can evaluate whether a test plan covers the failure modes that matter, not just the ones that are easy to test.
The distinction is between what and how. AI is getting very good at how. The what — the specification, the intent, the architectural judgment — that’s where the engineer’s value lives, and it’s not going anywhere.
And here’s the part that gets underappreciated: that what layer doesn’t just need to be correct. It needs to be precise enough that both humans and machines can execute against it. A vague spec was tolerable when the same engineer who wrote it also wrote the RTL — the ambiguity lived in their head. In an AI-assisted workflow, ambiguity in the spec becomes bugs in the output. The spec has to become a contract: formal, unambiguous, and verifiable. That’s harder than writing the code, and it’s a skill the industry needs to develop deliberately.
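To make the contract idea concrete, here is a minimal sketch of the shift from a vague spec sentence to a machine-checkable SVA property. The signal names (`req`, `ack`) and the four-cycle bound are invented for illustration, not taken from any real design:

```systemverilog
// Vague spec: "ack should follow req reasonably quickly."
// Contract spec: "every req is acknowledged within 1 to 4 cycles."
// The bound and signal names are illustrative assumptions.
module req_ack_contract (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // The ambiguity ("reasonably quickly") is replaced by a formal,
  // verifiable bound that both a human reviewer and a formal tool
  // can check the implementation against.
  property p_req_acked;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  a_req_acked: assert property (p_req_acked)
    else $error("req not acknowledged within 4 cycles");
endmodule
```

The hard part is not the syntax. It is deciding that four cycles is the right bound, and that the bound holds across every mode and corner the block will ever see. That decision is the what layer, and it stays with the engineer.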
What comes next
This is the first post in an ongoing series. Three threads will run through everything that follows.
The first is the spec as the nexus of the design. If the engineer’s job is shifting from how to what, then the spec — the formal expression of design intent — becomes the most important artifact in the entire flow. I’ll explore what a spec actually means in chip design, why it’s harder to write than the code it describes, and how concepts like Design by Contract apply to hardware interfaces: clock domains, resets, protocols. The spec isn’t just documentation. It’s the contract that everything else — implementation, verification, signoff — must be measured against.
The second, and arguably the bigger topic, is verification. Verification is where the majority of chip design effort and cost already lives, and it’s where AI has the most potential to change the economics — but only if we get the approach right. I’ll dig into how we actually measure AI’s effectiveness in a verification flow, how to write assertions in natural language and have them mean something formal, and how AI-assisted verification can close gaps that simulation alone never will. I’ll show working examples, including an AI-assisted CDC verification proof, to make this tangible rather than theoretical.
The third is the model matters more than you think. Not all AI is equal, and in chip design the differences are stark. I’ve tested smaller open-source models on something as fundamental as writing a clock domain synchronizer — and watched them use the wrong clock. Worse, when shown the error, they couldn’t understand what was wrong. The frontier models get this right every time. In a domain where a subtle bug costs millions, the gap between a model that understands hardware semantics and one that’s pattern-matching syntax is not academic — it’s existential. I’ll share concrete comparisons.
The goal isn’t to sell a methodology or push a product. It’s to have an honest, technically grounded conversation about where this is actually going — from someone who works in the trenches, not from a keynote stage.
If that sounds useful, stick around. And if you disagree with anything I’ve said here, I want to hear it. The best thinking comes from friction.
Marco Brambilla is a semiconductor industry veteran with 25 years in chip design, most recently as Senior Technical Director at Meta Reality Labs. He writes about AI, chip design, and the future of hardware engineering at Above the RTL.
