How Type 1 and Type 2 Thinking Powers Scalable AI-Human Collaboration


🔑 Summary Insight

If you want to scale AI inside your company without introducing chaos, you need to do more than “implement tools” — you need to design governance systems that clearly define who makes what kind of decision and how.

The best way I know to do this is by applying Type 1 vs. Type 2 decision-making and the 3 Agreements. These frameworks give you a structure for distributing authority between humans and AI agents in a way that increases execution speed, preserves accountability, and lowers entropy as your organization scales.


Why Governance — Not Technology — Is Your Real Bottleneck

Most companies aren’t struggling with AI tools. The tools are here. They’re getting better by the week.

What companies are struggling with is governance:

  • Who makes which decisions?
  • What do we automate vs. delegate vs. own?
  • How do we avoid shadow processes and permission paralysis?

Left unclear, these questions lead to confusion, duplication, and low-trust cultures — exactly the kind of entropy that kills momentum in scaling businesses.

So here’s the answer: don’t build more rules or policies.

Instead, build decision-making agreements based on principles.

The Core Framework: Type 1 vs. Type 2 Decisions

Not all decisions are created equal — and if you treat them the same way, you’re going to slow down execution or crash the ship.

In Designed to Scale, borrowing Amazon's metaphor of decisions as one-way and two-way doors, I define:

  • Type 1 decisions are one-way doors: hard to reverse, strategically significant, and high-consequence. These must go through your Strategic Execution Team (SET).
  • Type 2 decisions are two-way doors: reversible, operational, and lower-risk. These are owned by individuals, not committees.

Your job as CEO is to teach your culture to ask and answer one question up front:

“Is this a Type 1 or a Type 2 decision?”

Once that’s clear, you can structure human-AI collaboration accordingly.

The 3 Agreements: A Governance Layer for Human + AI Decision Rights

When you combine Type 1/Type 2 thinking with the 3 Agreements, you create a governance framework that scales — no matter how much AI you introduce.

Agreement #1: The SET makes cross-functional Type 1 decisions

These are your most important decisions. They’re strategic, cross-functional, and require the full clarity and buy-in of your SET. With AI in the mix, here’s how this plays out:

  • AI agents can inform decisions (data synthesis, modeling, forecasting).
  • Humans still own the decision and must bring proposals to the table, in writing (not PowerPoint), using the Type 1 Proposal Template.
  • Proposals must be accepted by the implementer, not assigned top-down.

AI can enhance the input, but you still need human judgment to make irreversible choices.
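
To make that concrete, here is a minimal sketch of what a written Type 1 proposal could look like as a data structure. The field names are illustrative assumptions, not the actual Type 1 Proposal Template; the point is that AI output arrives as attached input while authorship and acceptance stay with named humans.

```python
from dataclasses import dataclass, field

@dataclass
class Type1Proposal:
    """A written Type 1 proposal: AI informs, humans decide and accept."""
    title: str
    author: str                                          # the human bringing it to the SET
    rationale: str                                       # the written argument, not slides
    ai_inputs: list[str] = field(default_factory=list)   # AI-generated forecasts, models, syntheses
    accepted_by_implementer: bool = False                # accepted by the implementer, never assigned top-down

    def ready_to_move(self) -> bool:
        # AI material is welcome input, but the proposal stands on its written
        # rationale and only moves forward once the implementer has accepted it.
        return bool(self.rationale) and self.accepted_by_implementer
```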

Agreement #2: Everyone seeks perspective and accepts accountability for their role(s)

This governs Type 2 decisions — everyday execution.

When assigning these to AI systems:

  • You must define where AI has operational authority (e.g., routing tickets, or adjusting pricing within bounds, as sketched below).
  • The human responsible for the outcome must still seek perspective and own downstream impacts.
  • AI is not a scapegoat. It’s an agent executing decisions that humans remain accountable for.

This agreement ensures you don’t end up with a bunch of automation islands no one understands or controls.
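
Here is what the pricing example could look like in code. This is a minimal sketch under assumed names (OperationalAuthority, head_of_revenue, and the $40 to $60 bounds are all hypothetical), not a prescription:

```python
from dataclasses import dataclass

@dataclass
class OperationalAuthority:
    """The bounds inside which an AI agent may act on a Type 2 decision."""
    domain: str             # e.g., "pricing"
    accountable_owner: str  # the human who owns downstream impacts
    min_price: float
    max_price: float

def apply_price_change(authority: OperationalAuthority, proposed_price: float) -> str:
    """Act within delegated bounds; escalate to the accountable human outside them."""
    if authority.min_price <= proposed_price <= authority.max_price:
        # Within bounds: the agent executes, and the action is logged against
        # a named human owner, so accountability never evaporates.
        return f"applied; accountable owner: {authority.accountable_owner}"
    # Outside bounds: this is no longer the agent's call.
    return f"escalated to {authority.accountable_owner} for a human decision"

pricing = OperationalAuthority("pricing", "head_of_revenue", 40.0, 60.0)
print(apply_price_change(pricing, 52.0))  # within bounds: AI acts
print(apply_price_change(pricing, 95.0))  # out of bounds: a human decides
```

The design choice worth copying is that the accountable human is part of the authority definition itself, so every automated action and every escalation already knows whose name is on it.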

Agreement #3: We commit to listen, understand, decide, and act

Once a decision is made — whether by a human or in coordination with AI — it’s time to move.

In an AI-native org, this means:

  • AI-generated insights or actions are treated as legitimate — not ignored because they didn’t come from a VP.
  • But humans keep the final veto over any course correction that crosses a defined threshold of risk.

This agreement supports the cultural norm of “disagree and commit” — which is vital when AI is surfacing answers faster than your comfort zone allows.

AI Governance in Practice: What Gets Delegated to AI?

Here’s a practical breakdown you can use today:

Decision Type | Owner | Role of AI | Execution Rule
Type 1 (one-way door) | SET | Insight, modeling, simulation | Proposals in writing, reviewed and accepted
Type 2 (two-way door) | Individual or team | Action, automation, optimization | Seek perspective → execute → iterate
Operational rule | System | Fully automated | Clear thresholds, audits, fallbacks

Over time, Type 2 decisions can be safely delegated to AI agents (see the routing sketch after this list), as long as your team is trained to:

  1. Know when to seek perspective
  2. Know who is downstream
  3. Know what to escalate back to Type 1
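
If you want to make that routing explicit, a minimal sketch could look like the following. The enum values and rule strings are hypothetical stand-ins for your own processes; what matters is that the decision type, not the tool, determines the execution path.

```python
from enum import Enum, auto

class DecisionType(Enum):
    TYPE_1 = auto()            # one-way door: strategic, hard to reverse
    TYPE_2 = auto()            # two-way door: operational, reversible
    OPERATIONAL_RULE = auto()  # fully automated within set thresholds

def execution_rule(decision: DecisionType) -> str:
    """Mirror the table above: each decision type gets a different execution path."""
    if decision is DecisionType.TYPE_1:
        return "written proposal -> SET review -> implementer accepts"
    if decision is DecisionType.TYPE_2:
        return "owner seeks perspective -> executes -> iterates; escalate if it turns one-way"
    return "system executes within thresholds; audited, with fallbacks"

print(execution_rule(DecisionType.TYPE_2))
```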

That’s governance. That’s alignment. That’s what allows you to scale.

Final Thought: Structure Is the Strategy

In the AI era, everyone’s buying the same tools. The differentiator isn’t access — it’s structure.

Your structure must:

  • Define who decides what
  • Enable reversible decisions to flow fast
  • Prevent strategic decisions from becoming consensus sludge
  • Assign real accountability to AI-enabled workflows
  • Reinforce cultural principles, not policies

This isn’t theoretical. I’ve seen organizations go from bogged-down to high-velocity by adopting the 3 Agreements and embedding Type 1/Type 2 thinking across teams.

With AI in the mix, these frameworks become even more critical.

Because you’re not just building a company that can use AI.

You’re building a company that can govern AI — without losing speed, trust, or control.

And that starts with how you make decisions.