No More Secret Rulebooks: What AI Will Expose About Your Culture


Why serious AI projects force change leaders to confront the gap between stated values and real behavior—and how to turn that into an advantage.


If you lead change, you’ve always had to navigate two versions of your organization:

  • The one in the slide decks, values statements, and town halls.

  • And the one people quietly describe over coffee: “Here’s how things really work.”

Until now, those two versions could coexist with a manageable level of tension and plausible deniability. AI changes that.

The moment you embed AI agents into real processes—approvals, service, HR, compliance—you are forced to decide:

  • What do we actually want these systems to follow?

  • The public rulebook?

  • The unwritten one?

  • Or something in between?

And you won’t be the only one asking. Internal and external audits will increasingly review:

  • What documents and data you feed into AI systems

  • How those systems make decisions

  • What they recorded about why they did what they did

In other words: no more hidden rulebooks.

For change leaders, that can feel threatening—or it can be one of the most powerful levers for alignment you’ve ever had.

Two Rulebooks: The One on the Wall and the One in the Hall

Every organization runs on two overlapping operating systems.

1. The public rulebook

This is what’s written down and presented:

  • Values and purpose statements

  • Codes of conduct and ethics

  • Policy manuals, process maps, training decks

  • ESG reports and public commitments

This is where virtue signalling lives:

“We put people first.”
“We never compromise on safety.”
“We are customer-obsessed.”

2. The shadow rulebook

This is what actually shapes outcomes:

  • Who really gets promoted

  • Which “exceptions” are quietly acceptable

  • How we treat large vs. small customers when something goes wrong

  • What never appears in writing but everyone “just knows”

This is where unfiltered priorities live:

“Hit the quarter, no matter what.”
“Don’t escalate that, it makes us look bad.”
“Don’t block this deal—even if it bends the rules.”

Humans are fluent in both rulebooks. We read tone, watch who’s rewarded, learn which policies are decorative, and internalize the unwritten code that secretly runs the show.

AI agents are not.

AI Systems Only See What You Feed Them

An AI agent embedded in your workflow doesn’t overhear corridor conversations. It doesn’t see facial expressions in the Monday meeting. It doesn’t “get a feel” for how things really work.

It sees only what you deliberately expose:

  • Policy and procedure documents

  • Configurations and business rules

  • Historical tickets, emails, chats, and logs (if you include them)

  • Metrics and outcomes

So by design:

AI systems live entirely in the world you are prepared to document and feed to them.

Feed them only the public rulebook, and they will behave like the organization you claim to be.

Feed them rich historical behavior, and they will learn the organization you actually are—with all of its workarounds, inconsistencies, and biases.

Either way, the gap between “say” and “do” becomes harder to ignore.
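To make that concrete, here is a minimal sketch in Python of the configuration decision this implies. The names and structure are entirely hypothetical, not from any specific framework; the point is that someone, somewhere, enumerates exactly which sources an agent can see:

    # Hypothetical sketch: an AI agent's "world" is an explicit, reviewable
    # list of sources. All names here are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class AgentCorpus:
        """Everything the agent is allowed to learn from or retrieve."""
        public_rulebook: list[str] = field(default_factory=list)      # policies, codes of conduct
        historical_behavior: list[str] = field(default_factory=list)  # tickets, emails, logs

    corpus = AgentCorpus(
        public_rulebook=["code_of_conduct.pdf", "refund_policy_v7.docx"],
        historical_behavior=[],  # left empty: the agent behaves like the org you claim to be
    )

    # Whatever ends up in this object is, by definition, auditable.
    # There is no corridor-conversation equivalent for the agent to absorb.

Unlike a new hire, the agent cannot pick up anything that is missing from this list.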

The Audit Angle: No More Secret Inputs

Here’s the new reality that’s easy to underestimate:

Internal and external audits will increasingly include AI itself.

Auditors will ask:

  • Exactly which documents, datasets, and policies were used to train or configure this AI?

  • Which systems could it access?

  • How do you know it’s following your stated standards?

That means:

  • If you encode the shadow rulebook into prompts, fine-tuning data, or special rules, it stops being shadow. It becomes visible, reviewable, and reportable.

  • If you only feed in the public rulebook, but people constantly override the AI, that pattern of overrides becomes data too.

On top of this, AI agents can (and should) be required to generate transaction-level audit records, such as:

  • The decision or recommendation made

  • The policies and data used to reach it

  • Any risk score or evaluation applied

  • Whether a human confirmed, modified, or overrode the recommendation—and why

You move from:

“We generally follow the spirit of the policy.”

to:

“Last quarter, in 4,327 cases, the AI applied section 4.3.1. In 1,102 of those, humans overrode the AI to take a different action, usually to accelerate revenue or bypass a control.”

That is a fundamentally different level of transparency.
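As an illustration, a transaction-level record could be as simple as the following Python sketch. The field names are hypothetical; any structure that captures these facts would serve:

    # Hypothetical sketch of a transaction-level audit record.
    # Field names are illustrative; what matters is what gets captured.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AgentAuditRecord:
        decision: str                    # what the agent decided or recommended
        policies_cited: list[str]        # e.g. ["section 4.3.1"]
        data_consulted: list[str]        # documents/records used to reach the decision
        risk_score: Optional[float]      # any risk evaluation applied
        human_action: str                # "confirmed" | "modified" | "overridden"
        override_reason: Optional[str]   # required whenever human_action != "confirmed"

    record = AgentAuditRecord(
        decision="deny_exception",
        policies_cited=["section 4.3.1"],
        data_consulted=["ticket_18842", "customer_contract_v3"],
        risk_score=0.72,
        human_action="overridden",
        override_reason="accelerate revenue on strategic account",
    )

Aggregating records like this one is exactly what turns “we generally follow the policy” into a countable claim.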

Three Patterns You’ll See in Real AI Projects

As AI moves from talk to production, three patterns tend to emerge.

1. Policy-First Agents: The Aspirational Mirror

Here, you train AI mainly on:

  • Policies

  • Procedures

  • Codes of conduct

  • Values and guidelines

The agent:

  • Declines “business as usual” exceptions

  • Flags risks the culture is used to ignoring

  • Insists on safety, fairness, or compliance where shortcuts are normal

Staff complain: “This thing doesn’t understand how we really work.”

They’re right—it understands how you say you work.

Auditors examining the training corpus will see an AI faithfully following your published standards. The friction reveals where practice has drifted from principle.

2. Behavior-First Agents: The Brutally Honest Mirror

Here, you lean heavily on:

  • Historical workflows

  • Past decisions and outcomes

  • Email and chat histories

The AI learns:

  • Which customers truly get special treatment

  • Which policies are routinely ignored

  • How different groups get different outcomes

You gain speed and consistency—and you productize your real culture.

When auditors ask, “Why did the AI treat these two similar cases differently?” the answer will come straight from your own history. Your shadow rulebook is now traceable logic.

3. Hybrid Agents: Turning Conflict into a Feature

The most mature pattern is intentional:

  • Combine formal policies and values

  • Add historical behavior

  • Explicitly tell the AI what to do when they conflict

These agents might produce outputs like:

“Policy says we should deny this exception. In 78% of similar past cases, the organization approved it. Would you like to follow formal policy or historical practice?”

Each of these moments is logged, along with the human choice and rationale.
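In code terms, the hybrid pattern reduces to one small, auditable decision step. The sketch below uses invented names throughout; `policy_engine`, `history_model`, and `case` stand in for whatever interfaces your stack provides, and `ask_human` is a stub for a real escalation UI:

    # Hypothetical sketch of the hybrid pattern: surface policy/practice
    # conflicts instead of silently resolving them. All names are illustrative.

    def ask_human(prompt: str) -> tuple[str, str]:
        """Stand-in for a real escalation UI: returns (choice, rationale)."""
        print(prompt)
        return "policy", "demo rationale"

    def decide(case, policy_engine, history_model, log):
        policy_answer = policy_engine.evaluate(case)                    # what the written rules say
        historical_answer, approval_rate = history_model.predict(case)  # what the org usually did

        if policy_answer == historical_answer:
            log.append({"case": case.id, "decision": policy_answer, "conflict": False})
            return policy_answer

        # Conflict: escalate to a human and record the choice plus rationale.
        choice, rationale = ask_human(
            f"Policy says {policy_answer}; in {approval_rate:.0%} of similar past "
            f"cases the organization chose {historical_answer}. Which should apply?"
        )
        log.append({
            "case": case.id,
            "policy_answer": policy_answer,
            "historical_answer": historical_answer,
            "human_choice": choice,
            "rationale": rationale,
            "conflict": True,
        })
        return choice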

Now AI isn’t just executing. It’s generating a rich dataset of the gap between what you say and what you do.

For change leaders—and for internal audit—that’s incredibly useful.

What Change Leaders Need to Do Differently

AI projects are often framed as:

  • Cost reduction

  • Efficiency and speed

  • Better customer and employee experience

All true. But once you add auditable inputs and decision logs, they also become:

  • Culture X-rays

  • Evidence generators

  • Hypocrisy detectors

That calls for a different kind of leadership.

1. Map the Shadow Rulebook on Purpose

Before or alongside AI design, deliberately explore:

  • Where do people feel they must choose between policy and survival?

  • Which workarounds are coping mechanisms for broken processes?

  • Where are policies right but routinely ignored?

You’re doing this not to punish people, but to decide what belongs in the future:

  • What should become explicit and executable?

  • What needs to change before we ask an AI to follow it?

2. Decide What Becomes Executable—and Therefore Auditable

You must consciously answer:

  • Do we want AI to embody our aspirational values, accepting friction and change?

  • Do we want AI to reproduce actual practice, accepting risk and exposure?

  • Or do we want AI to surface the tension and force real decisions?

Each choice:

  • Determines which documents and data you feed the AI

  • Defines what auditors will see as your operational rulebook

  • Shapes where culture work and policy work are truly needed

Leaving this choice to vendors or technical teams is itself a cultural decision—usually the wrong one.

3. Build Governance Around Overrides and Logs

Design your AI governance so that:

  • Every AI-driven transaction creates an interpretable record: what it did and why

  • Every human override carries a short, required reason

  • Override patterns are reviewed regularly by change, risk, and business leaders

Over time, these logs become:

  • A list of policies that no longer match reality

  • A map of hot spots where culture resists stated values

  • A guide to where training, redesign, or tough conversations are needed

Governance shifts from a policing exercise to a learning system.
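Once those records exist, the review itself can be lightweight. Here is a sketch of the kind of aggregation a quarterly review might run, assuming the log is a list of entries with hypothetical "policy" and "override_reason" keys:

    # Hypothetical sketch: turn raw override logs into a review agenda.
    # Assumes each entry is a dict with "policy" and "override_reason" keys.
    from collections import Counter

    def override_hotspots(log_entries):
        """Count overrides per policy section to find where practice resists policy."""
        overridden = [e for e in log_entries if e.get("override_reason")]
        return Counter(e["policy"] for e in overridden).most_common()

    log_entries = [
        {"policy": "4.3.1", "override_reason": "accelerate revenue"},
        {"policy": "4.3.1", "override_reason": "strategic account"},
        {"policy": "7.1.2", "override_reason": None},  # followed, not overridden
    ]
    print(override_hotspots(log_entries))  # -> [('4.3.1', 2)]

The policies that rise to the top of this list are exactly the hot spots where training, redesign, or a tough conversation belongs.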

The Payoff: Less Hypocrisy, More Coherence

AI doesn’t automatically make organizations more ethical or humane. That’s still leadership work.

But AI, combined with tighter audit expectations, does remove some of the fog that allowed hypocrisy to thrive:

  • You can no longer rely on secret rulebooks and unwritten exceptions without them eventually showing up in logs and training sets.

  • You can no longer claim one set of values while industrializing another set of behaviors in your agents.

  • You can no longer ignore misalignment once decisions and overrides are quantified.

So the real question for change leaders is not:

“How do we stop AI from exposing the gap between what we say and what we do?”

The useful question is:

“How do we use AI—and the transparency it brings—as a structured, data-rich way to close that gap on purpose?”

If you approach AI initiatives this way, they become more than automation projects. They become levers for cultural coherence, integrity, and trust.

The unwritten rulebook isn’t going away. But in the age of AI and audit trails, it can’t stay unwritten and unexamined.

Your job is to bring it into the open, decide what belongs in your future, and ensure that what your systems execute, what your auditors see, and what your leaders profess finally start to match.
