I was working on a design doc recently with the help of AI when it generated a sentence that immediately made me stop.

It described another team’s system as “legacy infrastructure that should be replaced.”

That might sound like a small wording issue, but it was not. The system was not legacy. The team had invested heavily in it, it solved real operational problems, and replacing it was neither realistic nor necessary. The broader proposal was moving in a reasonable direction, and AI was genuinely helping me get to a better version of the idea faster.

That is what made the moment interesting. The problem was not that AI was useless. The problem was that it was useful enough to produce something plausible, then careless enough to walk straight into organizational context it did not understand.

If that sentence had made it into the document, the conversation would have changed immediately. The meeting would not have been about the technical proposal anymore. It would have become a discussion about why another team’s work was being dismissed, whether we understood the existing system, and whether we were trying to replace something that did not need replacing.

One careless phrase could have turned a design review into a trust problem.

That is the part of AI-assisted engineering I do not think we talk about enough. A design doc is not just a place to describe an architecture. An RFC is not just a container for decisions, tradeoffs, diagrams, and migration plans. In a large organization, these documents are also tools for building alignment.

Such a document has to make the technical case, but it also has to bring people with it. It has to respect existing ownership, acknowledge real constraints, and create enough shared understanding that teams can move in the same direction. A technically reasonable proposal can still fail if it makes the wrong people feel ignored, dismissed, or surprised.

This is where taste matters.

When engineers talk about taste, they often mean code quality, API design, or architecture. In large organizations, taste goes much further than that. It is understanding which systems exist for historical reasons, which migrations are politically impossible, which teams are already overloaded, and which technically reasonable statements are strategically disastrous.

A lot of staff engineering work lives in those invisible constraints. You learn when to propose a rewrite and when to integrate with the thing you wish would disappear. You learn how to sequence change so people come along with you instead of fighting the premise from day one. You learn that trust is part of the system.

AI is getting remarkably good at producing engineering artifacts. It can generate code, diagrams, RFCs, migration plans, and architectural proposals faster than most humans. What it cannot do is understand the organizational context surrounding those artifacts.

It does not know why a system exists in its current form. It does not know which tradeoffs were deliberate, which constraints are still active, or which teams have already had long conversations about the same proposal. It does not know that calling something legacy can derail alignment before the real technical conversation even starts.

That does not make AI useless. It makes human judgment more important.

The hard part of staff engineering was never typing. As implementation gets cheaper, the leverage shifts elsewhere. Defining the right problem matters more. Evaluating tradeoffs matters more. Understanding organizational context matters more. Building trust matters more.

The engineers getting the most out of AI are not simply using it to write code faster. They are using it to explore options faster, validate ideas faster, and iterate faster while applying judgment on top. AI can help produce the raw material, but someone still has to decide what is coherent, what is realistic, what is maintainable, and what will survive contact with an actual organization.

That is why I think AI is going to increase the value of good staff engineers, not reduce it.

A good staff engineer does more than produce the document. They understand what the document is for. They know who needs to be brought along, which objections are legitimate, which tradeoffs need to be named, and which words will cause the whole conversation to collapse before it starts.

AI can write the RFC.

Building alignment around it is still the hard part.