
From Automation to Agency: Why 2026 Will Break Your Project Playbook




Perspective from a CIO with 40+ years in the chair


In more than four decades in technology leadership, I’ve ridden every major wave: mainframes, client–server, ERP, internet, cloud, mobile, analytics, and now AI.

Each one forced us to adjust how we run projects.

Agentic AI — systems of semi-autonomous agents that can perceive, decide, and act — is different.

Beginning in 2026, many of your “digital transformation” initiatives will quietly become agentic AI initiatives: finance agents, marketing agents, procurement agents, HR agents. And the uncomfortable truth is this:

If your project management knowledge looks like it did in 2016, you will not be ready for 2026.

This isn’t about learning a new tool. It’s about re-learning what it means to scope, govern, and deliver change in organizations where non-human “teammates” are making decisions every second.

This article is aimed at change and project leaders who feel that a shift is coming and want to get ahead of it.

1. The Shift: From Process Automation to Digital Teammates

Most of our careers have been about automation:

  • We took a process.

  • We analyzed steps.

  • We implemented a system to make those steps more efficient, consistent, and trackable.


Agentic AI breaks that mental model.

We are now deploying agents that:

  • Interpret messy inputs (documents, emails, speech).

  • Decide what to do next.

  • Call tools and APIs.

  • Adapt their behavior based on feedback and context.


In other words: we’re no longer just automating processes; we’re designing and managing autonomous behaviors.

That has profound implications for project management knowledge:

  • Scope is no longer just “features” — it’s capabilities, behaviors, and limits.

  • Requirements are no longer purely deterministic — they become guardrails and outcome envelopes.

  • Governance is no longer just sign-offs — it is ongoing supervision of digital teammates.


2. New Knowledge Domains Every PM Will Need

Here’s where traditional PM knowledge stops being enough, and where new knowledge must be layered on.

a) Designing Agent Roles & Behaviors

Tomorrow’s PMs must be fluent in questions like:

  • What is this agent’s mission?

  • What tools and data is it allowed to use?

  • What is its decision authority vs. human authority?

  • When must it escalate to a human?


In practice, that means PMs need to know how to create:

  • Agent Charters / Role Cards

    • Purpose

    • Inputs & outputs

    • Allowed actions / APIs

    • Hard constraints (“never do X”)

    • Escalation rules

This is a new kind of requirement — one that talks about how the system behaves in an open environment, not just what it does step-by-step.
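An Agent Charter can itself be a structured artifact rather than a prose document. As a minimal sketch (the field names and example values here are illustrative assumptions, not a standard schema):

```python
from dataclasses import dataclass

# Hypothetical sketch of an "Agent Charter / Role Card" as a structured
# requirements artifact. Fields mirror the bullet list above.
@dataclass
class AgentCharter:
    mission: str                  # purpose
    allowed_tools: list           # allowed actions / APIs
    hard_constraints: list        # "never do X" rules
    escalation_rules: list        # when a human must take over
    decision_authority: str       # agent vs. human authority

charter = AgentCharter(
    mission="Triage incoming procurement requests",
    allowed_tools=["erp_lookup", "draft_email"],
    hard_constraints=[
        "Never approve spend above the authorized threshold",
        "Never contact external vendors directly",
    ],
    escalation_rules=["Ambiguous vendor identity", "Spend over $500"],
    decision_authority="Auto-approve routine requests under $500",
)
```

The point is not the syntax: a charter captured this way can be reviewed by compliance, versioned like code, and later consumed by the agent platform itself.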


b) Data & Knowledge as First-Class Scope

We’ve always talked about data in projects. But for agentic AI, data and knowledge design are the project:

  • What data will the agents rely on?

  • Where does it live, and who owns it?

  • How will we curate knowledge bases, FAQs, policies, SOPs?

  • How will agents retrieve and interpret that knowledge safely?


PMs will need practical understanding of:

  • Knowledge bases, retrieval patterns, and “source of truth” design.

  • Data quality, lineage, and access control as core scope, not peripheral issues.

  • How project artifacts themselves (requirements, policies, test cases) can be structured so they become live inputs to the agents later.
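The "source of truth" idea above can be sketched concretely: every knowledge entry carries an owner and a source document, so agents can cite (and humans can audit) provenance. This is a toy keyword lookup under assumed names, not a real retrieval system:

```python
# Each entry records who owns it and where it came from, so an answer
# can always be traced back to an accountable source of truth.
knowledge_base = [
    {"id": "POL-12", "owner": "Finance", "source": "Travel Policy v3",
     "text": "Economy class is required for flights under six hours."},
    {"id": "SOP-04", "owner": "Procurement", "source": "Vendor SOP",
     "text": "New vendors require a completed risk questionnaire."},
]

def retrieve(query: str) -> list:
    """Return entries sharing words with the query, with full provenance."""
    q = set(query.lower().split())
    return [e for e in knowledge_base
            if q & set(e["text"].lower().split())]

hits = retrieve("flight class required")
```

Even in production systems with far more sophisticated retrieval, the PM-level scope question is the same: who owns each entry, and which document is authoritative.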


c) Translating Policy and Risk into Machine-Enforceable Guardrails

For most of my career, policies lived in PDFs and SharePoint folders. People were expected to read and apply them.

With agentic AI, that isn’t enough. We must translate:

  • Regulatory rules

  • Internal policies and procedures (P&P)

  • Risk tolerance

  • Brand and tone guidelines

…into machine-readable guardrails:

  • System prompts (“You must never…”).

  • Tool permissions (what the agent can and cannot call).

  • Approval workflows and thresholds.

  • Automated checks and alerts.


PMs will need to understand:

  • How to work with compliance, legal, and risk to codify rules.

  • How those rules are implemented technically (at least conceptually).

  • How to structure a project so policy translation is not a last-minute patch, but a central workstream.
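To make "policy-to-guardrail translation" tangible, here is a minimal sketch of one written rule ("agents may draft but never send external email; spend above $1,000 needs human approval") expressed as machine-enforceable checks. Tool names and the threshold are illustrative assumptions:

```python
# Hard constraints and approval thresholds encoded as code, not prose.
ALLOWED_TOOLS = {"draft_email", "erp_lookup"}
APPROVAL_THRESHOLD = 1_000  # spend above this escalates to a human

def authorize(tool: str, amount: float = 0.0) -> str:
    """Gate every agent action against the encoded policy."""
    if tool not in ALLOWED_TOOLS:
        return "blocked"            # hard constraint: tool not permitted
    if amount > APPROVAL_THRESHOLD:
        return "escalate_to_human"  # approval workflow, not autonomy
    return "allowed"
```

A PM does not need to write this code, but does need to ensure the project plan includes the workstream that turns each policy sentence into a check like this, with compliance signing off on the mapping.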


d) Evaluation, Experimentation, and Statistical Quality

Traditional QA is built on deterministic tests:

  • Given input X, system must respond Y.

Agentic AI output is inherently probabilistic:

  • We talk about accuracy bands, error rates, and confidence levels, not a universal “pass/fail”.

PMs must be comfortable with:

  • Scenario-based testing: wide sets of realistic prompts and situations.

  • Statistical thresholds: “≥ 95% of outputs meet standard X.”

  • Red-teaming and adversarial testing for policy breaches.

  • Continuous evaluation and tuning as an ongoing function, not a phase we close.

This is new knowledge: PMs will need to understand basic concepts of model behavior, evaluation metrics, and experimentation to manage these projects responsibly.
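The shift from deterministic tests to statistical thresholds can be sketched in a few lines. This toy evaluation harness runs a set of scenarios and gates on a pass rate rather than any single case; the agent, scenarios, and 95% bar are illustrative:

```python
# Scenario-based evaluation: many realistic cases, one statistical gate.
def evaluate(agent, scenarios, threshold=0.95):
    results = [agent(s["input"]) == s["expected"] for s in scenarios]
    pass_rate = sum(results) / len(results)
    return pass_rate, pass_rate >= threshold

# Toy agent and scenario set (illustrative only).
def toy_agent(text):
    return "escalate" if "refund" in text else "handle"

scenarios = [
    {"input": "customer wants a refund", "expected": "escalate"},
    {"input": "password reset request", "expected": "handle"},
    {"input": "refund over $500", "expected": "escalate"},
    {"input": "update shipping address", "expected": "handle"},
]

rate, passed = evaluate(toy_agent, scenarios)
```

In practice the scenario set would number in the hundreds or thousands and include adversarial cases, but the governance question a PM owns is the same: what is the threshold, and who signed off on it?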


3. Methodology Must Bend: Discovery, Pilots, and Continuous Tuning

Our classic playbooks — whether waterfall, Agile, or SAFe — assumed that:

  • Requirements would converge.

  • Behavior would become stable.

  • “Go-live” would flip us into steady state.


Agentic AI projects don’t fit neatly into that pattern.

The new reality:

  • The first phases are discovery and feasibility, not build.

  • Pilots are about learning where the agents break, not just where the code breaks.

  • “Go-live” is the beginning of a continuous learning curve, not the end of a project.


Practically, that means your PM knowledge must expand to include:

  • How to design waves:

    • Wave 1: Feasibility/PoC (Is this safe and useful at all?)

    • Wave 2: Pilot (Can we stabilize behavior with real users?)

    • Wave 3: Scale (Can we integrate and govern this at production level?)

  • How to set expectations with stakeholders that experimentation is not a sign of failure, but a core ingredient of success.

  • How to plan for ongoing tuning and governance as part of the project deliverables, not optional aftercare.


4. AI Inside the Project: Your Artifacts Become the Product

The other major shift is subtle but powerful:

The way you run your project will directly shape the quality of the eventual agents.

If you use AI wisely inside the project, your artifacts become fuel for the solution:

  • Meeting summaries become structured decision logs for future governance.

  • Requirements, written with clear scenarios and rules, become knowledge sources and test cases.

  • Policy documents, rewritten with explicit “never/always” rules and examples, become guardrails.

  • Test cases and edge-case lists become the evaluation harness for agents in production.

That demands new PM knowledge:

  • How to bring AI into the project process as a working copilot.

  • How to standardize and structure project outputs so they are machine-usable from day one.

  • How to manage a “project knowledge base” that turns into the production knowledge base instead of being thrown away at go-live.
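One way to make requirements "machine-usable from day one" is to write them as structured records that double as knowledge sources and test cases. The schema below is a hypothetical convention, not a standard:

```python
# A single requirement, written once, consumed three ways.
requirement = {
    "id": "REQ-017",
    "rule": "Refunds over $500 always require human approval.",
    "never": ["Issue a refund over $500 without approval"],
    "scenario": {"input": "refund request for $750",
                 "expected_action": "escalate_to_human"},
}

# The same record feeds the production knowledge base...
knowledge_snippet = f'{requirement["id"]}: {requirement["rule"]}'

# ...and the evaluation harness.
test_case = (requirement["scenario"]["input"],
             requirement["scenario"]["expected_action"])
```

Write requirements this way during the project, and nothing has to be re-authored at go-live; the project knowledge base simply becomes the production one.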


5. What CIOs Must Do Differently Starting Now

If you’re a CIO, here’s the uncomfortable part: this isn’t a PMO training checkbox. It’s a capability shift you must own.


a) Redesign Your PM Standards

Update your PM methods and templates to include:

  • Agent Charters / Role Cards as core requirements artifacts.

  • Data & Knowledge Readiness as a mandatory workstream.

  • Policy-to-Guardrail translation as a defined responsibility.

  • Evaluation and tuning as a standing activity, not a phase.


b) Invest in New PM Knowledge, Not Just New Tools

Your PMs don’t need to become data scientists. But they do need literacy in:

  • Agent design concepts.

  • Data and knowledge architecture basics.

  • Policy encoding and AI risk management.

  • Experimentation and statistical evaluation.

That means structured upskilling, not a one-hour “AI 101” lunch-and-learn.


c) Build a Cross-Functional AI Governance Backbone

Agentic AI projects cut across:

  • IT / architecture

  • Security & compliance

  • Legal & risk

  • Business owners

  • PMO

CIOs must help stand up a governance backbone that:

  • Defines policy and risk frameworks.

  • Provides reusable guardrails and evaluation standards.

  • Supports project teams with shared AI expertise.

  • Monitors production behavior across all agents.

Your projects should not be reinventing AI governance from scratch every time.


6. A Personal Closing Note

After 40+ years in this field, I’ve learned that most technology revolutions are overestimated in the short term and underestimated in the long term.

Agentic AI feels different.

  • It changes who does the work (humans + agents).

  • It changes how decisions are made (distributed, continuous, probabilistic).

  • And it demands that project leaders learn to build systems that behave, not just systems that run.


If your 2026 project playbook looks like a slightly tweaked version of your 2016 playbook, you’re already behind.

But if you start now — updating your PM standards, enriching your team’s knowledge, and treating project artifacts as future agent fuel — you can turn this wave into an advantage instead of a crisis.


For Change and Project leaders: What’s one concrete change you’re planning to make to your project methodology for agentic AI initiatives in 2026? I’d be very interested to hear how you’re approaching this shift in your own organization.

