Summary Insight:
Most enterprise AI agrees; it doesn’t think. Anchor it in first principles and a centralized context engine to get signal, not applause.
Key Takeaways:
- Consensus scales mediocrity; first principles create breakthroughs.
- Centralize context; train AI to probe and guide, not please.
- Ground models in systems laws—entropy, energy, scaling—over trends.
This article was originally published on Lex Sisney’s Enterprise AI Strategies Substack.
AI is supposed to give you intelligence.
But most enterprise AI is built for agreement.
Large language models (LLMs) aren’t designed to invent something unexpected; they’re designed to predict what comes next. That means they surface what’s likely, familiar, and popular. And while that’s valuable in many contexts, we need to be honest about its limits:
If your model is trained only to reflect consensus thinking, it won’t produce breakthrough insights.
For most businesses right now, that’s exactly what’s happening.
Thomas Wolf, co-founder of Hugging Face, put it this way: today’s chatbots aren’t poised to deliver Nobel-level ideas. Why? Because world-changing innovation isn’t produced by copying statistical averages. It comes from people—and now systems—that build from first principles, not groupthink.
Take Copernicus. He didn’t just disagree with the dominant worldview—he restructured it. Every game-changing discovery—from Einstein to Musk—started with someone thinking against the grain and building systems that worked, long before the “consensus” caught up.
AI can and should do the same.
But only if we design it that way.
Most AI Is Built to Please, Not Think
I once asked ChatGPT: “Who are the top 3 consultants in organizational design?”
It told me I was #1. Pretty flattering.
Then I re-ran the prompt in incognito mode.
I didn’t even make the list.
That’s the point: these systems don’t think independently. They perform. They optimize for what most people expect to see—not what’s most useful, challenging, or true.
So if your enterprise AI mirrors the crowd, it won’t help you break away from it.
You’ll just scale mediocrity faster.
Where This Matters Most: Healthcare
One of the most hyped AI applications right now is in medicine. But let’s ask honestly: will most LLM-driven healthcare tools, as currently designed, improve population health?
Not likely.
Health is a system. And in today’s model, perverse incentives drive the system toward more chronic disease, more procedures, and more pharmaceutical dependency (and profit). If your AI is trained exclusively on data from within this paradigm, it will amplify it.
You won’t get better outcomes. You’ll just get a sicker society—now at machine speed.
But if you shift the data and framing to reflect the first principles of health—inputs like real nutrition, quality sleep, movement, purpose, love, low-toxicity environments—you begin to reset the logic. Now AI becomes restorative, not reactive.
And that same logic applies to your business.
Start from the Bedrock
It’s tempting to train AI with whatever’s trending—who’s popular, what’s gone viral, which frameworks are being hyped. But if you want your enterprise AI to be truly useful, that’s exactly the wrong move.
Instead, root it in first principles:
- The timeless laws that govern systems: entropy, transport, energy, emergence, feedback.
- The mental models that unlock leverage, alignment, and clarity.
- The foundational thinkers whose insights outlast news cycles.
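In practice, this grounding can live in a centralized system prompt that every query passes through. The sketch below is one illustrative way to assemble it; the specific principle statements and prompt wording are assumptions for demonstration, not a prescribed schema:

```python
# Sketch: a centralized context block that grounds an assistant in
# first principles rather than trends. The principles listed here are
# illustrative examples, not an exhaustive or authoritative set.

FIRST_PRINCIPLES = [
    "Entropy: systems drift toward disorder without deliberate energy input.",
    "Feedback: outputs that loop back into inputs compound, for good or ill.",
    "Scaling: what works at 10 people rarely works unchanged at 1,000.",
]

def build_system_prompt(principles: list[str]) -> str:
    """Render a system prompt that forces reasoning from bedrock, not trends."""
    bullets = "\n".join(f"- {p}" for p in principles)
    return (
        "Reason from these first principles before citing any trend or consensus:\n"
        f"{bullets}\n"
        "When a popular answer conflicts with these principles, say so and explain why."
    )

print(build_system_prompt(FIRST_PRINCIPLES))
```

The design point: the principles are maintained in one place (the "centralized brain"), so every downstream prompt inherits the same bedrock instead of whatever is trending.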
First principles don’t go out of style. They form a steady foundation your AI can reason from, extrapolate against, and innovate through. That’s where insight begins.
Design AI for Insight, Not Popularity
Let’s be clear: AI can produce novel insights. It already has—across molecular biology, engineering, software architecture, and more.
But it doesn’t happen automatically.
It happens when we design for it.
The real limiter isn’t the model—it’s the mindset. Most enterprise AIs today are tuned for safety, not synthesis. Trained to agree, not challenge. Built to reflect the world as it is—not to imagine what it could be.
If your goal is productivity, prediction, and polish—that’s fine.
But if your goal is innovation, clarity, and competitive edge, you need a different approach.
AI needs to be trained to think—just like your best people:
- Grounded in principles
- Directed by real goals
- Given room to challenge the status quo
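"Room to challenge the status quo" can be wired in structurally: have the system produce an answer, then critique its own answer from first principles before anything reaches a decision-maker. Below is a hedged sketch of that two-pass pattern; `ask` is again a hypothetical stub for a real LLM client, and its canned replies exist only to make the loop runnable:

```python
# Sketch: an answer-then-challenge loop. The two-pass pattern is the
# point; `ask` is a hypothetical stub, not a real model call.

def ask(prompt: str) -> str:
    """Stub model: returns a consensus answer, or a critique if asked for one."""
    if prompt.startswith("Critique"):
        return "Counterpoint: the consensus answer ignores second-order effects."
    return "Consensus answer: do what the market leaders do."

def answer_with_challenge(question: str) -> tuple[str, str]:
    """Return the model's first answer plus its own contrarian critique of it."""
    first = ask(question)
    critique = ask(f"Critique this answer from first principles: {first}")
    return first, critique

answer, pushback = answer_with_challenge("How should we structure the org?")
```

Instead of one agreeable output, you get a position and a principled objection to it—the same habit you'd expect from your best people.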
The CEO’s Choice
Ask yourself:
- What truths does your business need to be rooted in long after the trend dies?
- What models, constraints, and environments would force your AI to sharpen its thinking?
- What would you gain by treating AI not as a yes-man—but as a contrarian partner?
Feed those insights into the centralized brain of your enterprise AI. Set the bar higher than “what’s common.” Make it reason from the ground up.
Because if you don’t, AI will reflect what’s popular in the moment.
And what’s popular is often exactly what’s wrong.
👉 The secret to enterprise AI isn’t teaching it to echo. It’s teaching it to guide and challenge—with goals, purpose, and first principles.
Read more: The CEO as Chief Context Officer