The Red Light Theory
Sometimes rules need to be broken so new ones can be made.
There’s a red light at an intersection you drive through every day. You stop at it every time. But why?
Most people say because it’s the law. That’s the wrong answer.
You stop because if you don’t, you’ll likely get hit. The rule exists because of the risk. Not the other way around.
I’ve spent my career in Technical GRC, and I’ll tell you something: this distinction is everything. It’s the difference between compliance professionals who transform industries and the ones who just audit them.
I don’t see rules as boundaries. They’re guidance. They’re red lights installed at intersections that used to be dangerous. But here’s the thing — roads get redesigned. Better infrastructure gets built. Sometimes the whole intersection has been replaced by an overpass and the old red light is still hanging there while everyone keeps stopping at it, never bothering to look up.
Right now, AI is the overpass.
And a whole lot of people are still sitting at the red light.
We’ve seen this exact story before. When electricity first showed up in factories in the late 1800s, nobody rethought anything. They just ripped out the steam engine and dropped in an electric motor to power the same system of belts and shafts that had been there for decades. Same layout. Same workflow. Same thinking. Just a different power source spinning the same old machine.
It took almost 25 years before manufacturers figured out what electricity actually made possible. You didn’t need one giant motor driving everything through a web of belts and gears. You could put individual motors on individual machines. You could redesign the entire factory floor. You could arrange production around the work instead of around the power source. The factories that figured this out became massively more efficient. The ones that kept running electric motors through steam-age belt systems? They got left behind.
That’s exactly where we are with AI right now. Most organizations are doing the electric-motor-on-the-old-belt-system thing. They’re taking AI and cramming it into existing frameworks, existing controls, existing ways of thinking about risk. They’re using a revolutionary capability to power the same old machine. And then they’re wondering why it doesn’t feel all that transformative.
I get it. AI is scary. It’s moving fast, the risks are real, and there’s a very human instinct to pump the brakes on something we don’t fully understand yet. That instinct isn’t wrong — it’s how good risk management starts. But it can’t be where it ends.
Because here’s what’s actually happening — organizations are freezing. They’re treating AI like a threat to be contained instead of a capability to be governed. They’re slapping old control frameworks onto new technology and calling it compliance. That’s not risk management. That’s fear dressed up in a policy document.
The rules we built for a pre-AI world were the right rules for that world. They made sense for the risks that existed at the time. But the landscape has fundamentally changed, and some of those red lights now hang over intersections that don’t exist anymore. Advanced security methods, smarter architectures, AI-driven controls: we can move past some of the old ways, not because they were wrong, but because better ones exist now.
Now here’s the part nobody in this space wants to talk about publicly.
Every outdated control, every legacy framework written for a technological moment that no longer exists — that’s not just a policy gathering dust. That’s someone’s career. Their reputation. Their identity. And when you suggest that AI changes the game enough that their framework needs to evolve, you’re not having a technical conversation anymore. You’re having an existential one.
That’s why 90% of this work is human engagement. It means being willing to sit across from the people who enforce the status quo and lead them toward change. Help them see that their deep understanding of why the old controls mattered is actually the foundation for governing what comes next. Their expertise isn’t obsolete; it needs to be translated. Just like the engineers who understood steam weren’t useless when electricity arrived: they were the ones who understood thermodynamics, mechanical stress, production flow. They just needed to apply what they knew to a new reality.
But sometimes diplomacy runs its course and decisive action is the only way forward.
I’ve been in rooms where a proposal was being pushed through using the standard flow — present, take questions, move on. The kind of flow that treats silence as consent. I could feel the room. The majority didn’t support what was being presented. But everyone was doing the same math in their head — “if I speak up, I’m the difficult one” — while the whole room was thinking the exact same thing.
So I asked a simple question: by a show of hands, who actually supports this path forward? And who does not?
The majority raised their hands against it. We stopped something from rolling out that the people closest to the work didn’t want. If nobody had spoken up, we’d all be living with it today.
That’s the magic: knowing when leading people toward change serves the mission, and recognizing the exact moment when it doesn’t and decisive action is needed instead.
AI is one of those moments. We can keep bolting electric motors onto steam-age belt systems and wondering why nothing changes, or we can do the hard work of understanding the actual risks, building the right governance around them, and redesigning the factory floor. Not recklessly. Not by ignoring what we’ve learned. But by applying what we know to what’s actually in front of us instead of what used to be.
I know you’re out there. The ones who’ve sat in rooms where silence was about to become bad policy. The ones who traced a regulation back to the incident that created it and realized the world has moved on. The ones who had to choose between patience and action and had the guts to choose right in the moment.
We the few. We the brave. Changing the world one act at a time.
I want to hear your stories. Tell me about the time you spoke up when staying quiet was easier. The control you challenged not to cut corners, but because you understood the risk better than the rule did. The moment you helped someone translate their expertise instead of making them feel left behind.
Share it here. Let’s build something: not a newsletter, but a community of people who believe compliance should be the department of possibility, not the department of no.
The world doesn’t move forward because everyone followed the old rules. It moves forward because someone understood them well enough to know when they needed to evolve.
That someone is us.
