Integrity Scales Faster Than Intelligence

AI multiplies whatever you point it at.

Sound judgment gets amplified into good decisions at scale. Poor judgment gets amplified into bad decisions at scale. The multiplication is indifferent to which one you’re feeding it.

I don’t think we’ve fully absorbed what this means. Before AI, bad judgment was rate-limited by how fast one person could act. You could only make so many poor calls in a day. There was friction built into the system—meetings, approvals, the time it takes to draft an email, the pause between thinking something and doing something about it. That friction was also a buffer. It gave you time to reconsider, to notice when something felt off, to catch yourself.

That buffer is largely gone.

# The Amplification Problem

Daniel Pink identifies integrity as one of six skills AI cannot replace. His argument is that AI can generate, analyze, and optimize, but it can’t decide whether something should be done. That judgment remains human. I think he’s right, and I think the implication is sharper than it first appears.

Integrity used to be primarily a character question. Are you honest? Do you keep your word? Do you act consistently when no one’s watching? Those questions still matter. But AI adds a dimension: whatever your character produces, it now produces at speed and scale that weren’t previously available. A person with sound judgment and a powerful model can make good decisions faster across more domains. A person with poor judgment and the same model can produce damage at a pace that wasn’t possible before.

The tool doesn’t care which one you are. It amplifies equally.

# How Ethics Fade

Ann Tenbrunsel’s research at Notre Dame describes a process she calls “ethical fading.” It’s the gradual disappearance of ethical considerations from a decision. Each individual step feels minor. The framing shifts slightly. What was once an ethical question becomes a business question, then an efficiency question, then just a question of execution. By the time you’re acting, the ethical dimension has quietly dropped out of the frame.

This happens to good people. Tenbrunsel’s point is that ethical fading doesn’t require malice. It requires inattention. The small rationalizations accumulate beneath conscious awareness. You don’t decide to cross a line. The line moves, imperceptibly, until you’re on the wrong side of it without having noticed the crossing.

AI accelerates this. When the output arrives instantly—polished, confident, ready to deploy—there’s even less friction to slow down and ask whether this particular application of the tool is something you’d stand behind if it were examined closely. The speed removes the pause. And the pause was where the judgment lived.

# Rules vs. Values

Dov Seidman’s argument in *How: Why How We Do Anything Means Everything* draws a distinction between rule-based compliance and values-based behavior. Organizations built on rules focus on what you can’t do. Organizations built on values focus on who you are.

The difference matters because rules are finite and specific. Human ingenuity races ahead of them, technically complying while inventing new behaviors the rules never anticipated. Values operate differently. They’re internalized principles that apply to novel situations, including situations no one anticipated when the rules were written.

AI generates novel situations constantly. Every new capability creates decisions that didn’t exist before. Rule-based thinking can’t keep up with that pace. Values-based thinking can, because it doesn’t depend on someone having written the specific rule for the specific situation. It depends on the person asking: does this align with who I am and what I stand for?

# The Front-Page Test

Warren Buffett’s observation applies here with more force than when he first made it: “It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.”

Five minutes was already fast. AI compresses it further. A single automated campaign, a single AI-generated analysis deployed without adequate review, a single decision made at machine speed with human-speed judgment—any of these can undo years of carefully built trust.

Pink mentions a version of the front-page test as part of his integrity framework. I think it’s the most practical tool available for AI-era decision-making. Before deploying any AI-assisted decision or output at scale, ask: if this were attributed to me personally on the front page, would I stand behind it?

If the answer requires hesitation, that’s the answer.

The test works because it collapses the distance between the decision and its consequences. AI creates distance. The output feels separate from you. It was generated, not written. Deployed, not decided. The front-page test closes that gap. It forces you to own the output as if you’d produced every word and made every judgment yourself. Because functionally, you did. You chose to use it. You chose to deploy it. Your name is on it.

# Character as Infrastructure

In *The Long Game of Character* I wrote about how your behavior over time creates an ecosystem that either amplifies or undermines everything you’re building. And in *The Reputation You Have With Yourself*, I described the internal track record that determines whether you trust your own judgment when it matters.

AI makes both of these more consequential. The ecosystem you’ve built around yourself now gets amplified by more powerful tools. The internal reputation you carry now applies to higher-stakes decisions made at faster speed.

Character is infrastructure. And like all infrastructure, you don’t notice it until it fails.
