How to Present AI to the Board: The One-Slide Story That Wins Funding (and Trust)
- James F. Kenefick
Every boardroom I visit is wrestling with the same paradox: AI is simultaneously the biggest value driver of the decade and the fuzziest line item on the roadmap. Directors are bombarded with headlines, pilots, and vendor pitches—but what they want is simple: a clear view of how AI advances the company’s competitive position, how much it will cost, how you’ll govern it, and what results to expect when.
You do not win that discussion with a scatterplot of use cases or a tour of model architectures. You win it with a portfolio view that shows how AI initiatives map to value creation, risk, and feasibility—and with a narrative that connects those dots to your moat, customers, and P&L.
This post lays out a practical, board-ready approach—grounded in Working Excellence’s playbook for governance and compliance in AI-driven enterprises and enterprise-scale AI agents in real workflows. It uses a four-quadrant portfolio (internal ↔ external; everyday ↔ game-changing), a simple feasibility score (tech readiness, internal readiness, external readiness), and a cadence of metrics and governance that travels with the work from pilot to scale. If you’re a CIO, CDO, CISO, CPO, or business unit leader, this is the blueprint to move from “We’re experimenting” to “We’re scaling where it matters” using a board-level AI business briefing that stays focused on outcomes, feasibility, and control.

Why boards struggle with AI (and how to make it easier)
Boards are guardians of risk and allocators of scarce capital. They don’t need to understand transformers or retrieval-augmented generation to do that well.
They need three things:
Clarity of intent: How does AI strengthen our competitive stance? (Faster ops, better experiences, differentiated products, or all three?)
Confidence in feasibility: Can we deliver safely and reliably with the data, talent, partners, and controls we have?
Line of sight to results: What leading and lagging indicators will tell us the program is working—and when to double down or stop?
Your job is to translate your AI program into those terms without dumbing it down. A powerful way is a one-slide portfolio map—a circle cut into four quadrants:
Back Office (Internal / Everyday): Admin, HR, Legal, Finance, IT
Front Office (External / Everyday): CX, Sales/Marketing, Customer Service
Core Capabilities (Internal / Game-changing): R&D, Supply Chain, Operations
Product/Service (External / Game-changing): AI-enhanced value propositions
Each initiative sits in a quadrant with a color or ring representing feasibility (low/medium/high). Around the outside, you annotate the KPIs you’ll move. Beside it, you place a 70/20/10 budget mix (Run/Grow/Transform)—a portfolio allocation directors recognize from innovation management and portfolio discipline, reinforced by prioritizing technical projects strategically.
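Under the hood, the one-slide map is just structured data. A minimal sketch of one way to capture it, so the slide can be regenerated each quarter instead of redrawn (all names, fields, and sample initiatives below are illustrative, not from any real portfolio):

```python
from dataclasses import dataclass, field

# The four quadrants of the map: (internal/external, everyday/game-changing).
QUADRANTS = {
    "back_office":  ("Internal", "Everyday"),
    "front_office": ("External", "Everyday"),
    "core":         ("Internal", "Game-changing"),
    "product":      ("External", "Game-changing"),
}

@dataclass
class Initiative:
    name: str
    quadrant: str                              # key into QUADRANTS
    feasibility: str                           # "low" | "medium" | "high" (the color ring)
    kpis: list = field(default_factory=list)   # board-level KPIs annotated around the map
    bucket: str = "run"                        # 70/20/10 mix: "run" | "grow" | "transform"

# Hypothetical sample portfolio.
portfolio = [
    Initiative("AP/AR automation", "back_office", "high",
               ["cycle time", "working capital"]),
    Initiative("Embedded customer agents", "product", "medium",
               ["AI-attributed ARR"], bucket="transform"),
]

# Group initiatives by quadrant for the slide layout.
by_quadrant = {q: [i.name for i in portfolio if i.quadrant == q]
               for q in QUADRANTS}
```

Keeping the map as data also makes the quarterly rebalancing discussion concrete: the same records drive the slide, the feasibility table, and the KPI scorecard.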
Build the portfolio: everyday wins AND game-changing bets
Think of your AI strategy as two speeds that fund and de-risk each other.
Everyday AI (prints cash; proves discipline)
These projects shorten cycles, reduce errors, and free up talent. They’re also the fastest way to build credibility with the board because they show near-term results.
Back Office examples:
AP/AR automation with document understanding and policy-as-code approvals
Forecasting for cash, demand, and workforce planning
IT service desk triage and agent-assist to lift first-contact resolution
Front Office examples:
Intelligent self-service that actually resolves (not just deflects)
Next-best action in sales motions and account health scoring
Proactive outreach on churn or fraud risk with explanation trails
Board-level KPIs: cycle time, working capital, forecast accuracy, CSAT, conversion rate, cost-to-serve, SLA compliance.
If you want Working Excellence examples to anchor these, tie “everyday” work to AI-powered analytics, data governance for trusted AI, and data quality as a board-level priority.
Game-changing AI (expands the moat)
These are the bets that change the slope of your value curve: product features customers will pay for, or capabilities that competitors can’t easily replicate.
Core Capabilities:
R&D copilots accelerating iteration and experiment throughput
Autonomous supply-chain planning with real-time constraint solving
Operations control tower that predicts and prevents downtime
Product/Service:
Embedded agents that act on a customer’s behalf (not just chat)
Pricing, packaging, and usage models aligned to AI-driven value
New service lines (e.g., insights-as-a-service) built on proprietary data
Board-level KPIs: R&D cycle time, time-to-market, supply chain service levels, gross margin, net revenue retention, AI-attributed ARR.
To frame these bets cleanly for directors, mirror Working Excellence’s approach to execution discipline with AI agents for business and an AI Center of Excellence, and use the sequencing logic in everyday vs. game-changing bets.
The point isn’t to do everything. It’s to sequence: everyday wins build muscle and cash; game-changers get staged with clear learning milestones and guardrails.
Score feasibility with rigor (no hand-waving)
A single feasibility score keeps the portfolio honest and comparable. Use a 1–5 scale across three dimensions; the lowest score becomes the constraint.
Technical feasibility
Data availability/quality/lineage (start with a governed foundation like this data governance playbook)
Model approach maturity (fine-tune vs. from-scratch; latency/cost envelope)
Platform readiness (MLOps/ModelOps, observability, rollback)
Internal readiness
Executive sponsor and cross-functional team named
Process owners committed to change adoption
Talent in place (engineers, product, risk, security, analytics)
External readiness
Customer or market validation (willingness to adopt or pay)
Regulatory/contractual constraints known and addressable
Partner ecosystem available (vendors, integrators, data providers)
Color your initiatives: green (4–5) for scale, yellow (3) for pilot/learning, red (1–2) for backlog or dependency work. The visual tells the board where money will work now—and where it would be wasted.
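The scoring rule above is simple enough to state in a few lines of code. A sketch of the "lowest score is the constraint" logic and the color mapping, assuming the 1-5 scale described in this section:

```python
def feasibility_score(technical: int, internal: int, external: int) -> int:
    """Overall feasibility is the LOWEST of the three 1-5 dimension scores:
    the weakest dimension is the constraint, no matter how strong the others are."""
    for s in (technical, internal, external):
        if not 1 <= s <= 5:
            raise ValueError("each dimension is scored 1-5")
    return min(technical, internal, external)

def feasibility_color(score: int) -> str:
    """Map the constrained score to the portfolio map's color rings."""
    if score >= 4:
        return "green"   # scale
    if score == 3:
        return "yellow"  # pilot/learning
    return "red"         # backlog or dependency work

# Example: strong tech and team, but no market validation yet -
# the initiative is red regardless of technical maturity.
constrained = feasibility_score(technical=5, internal=4, external=2)
```

The `min` is the point: it prevents a technically dazzling initiative from hiding a missing sponsor or an absent market.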
The metrics boards actually care about
Many AI dashboards are noise. At board altitude, keep it to five categories and tie each initiative to at least one:
Financial impact – run-rate savings, revenue lift, margin expansion
Velocity/throughput – cycle time, lead time, experiment rate
Quality/accuracy – error rates, forecast accuracy, right-first-time
Customer outcomes – CSAT/NPS, resolution rate, time-to-resolution
Risk posture – policy violations prevented, incident MTTD/MTTR, audit-ready artifacts
Add a simple confidence bar next to each KPI (low/medium/high), reflecting the robustness of your data and modeling assumptions. Boards don’t need guarantees; they need you to show your work and your uncertainty.
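One way to enforce that discipline is to make the scorecard itself reject anything outside the five categories or the three confidence levels. A minimal sketch (function and field names are hypothetical):

```python
# Every board-level KPI row must land in one of the five categories,
# and carry an explicit confidence level reflecting data robustness.
CATEGORIES = {"financial", "velocity", "quality", "customer", "risk"}
CONFIDENCE = {"low", "medium", "high"}

def scorecard_row(initiative, kpi, category, baseline, target, confidence, horizon):
    if category not in CATEGORIES:
        raise ValueError(f"not a board-level category: {category}")
    if confidence not in CONFIDENCE:
        raise ValueError(f"confidence must be low/medium/high: {confidence}")
    return {
        "initiative": initiative, "kpi": kpi, "category": category,
        "baseline": baseline, "target": target,
        "confidence": confidence, "horizon": horizon,
    }

# Hypothetical row: a back-office cycle-time KPI with medium confidence.
row = scorecard_row("AP/AR automation", "cycle time (days)",
                    "velocity", 12.0, 5.0, "medium", "Q3")
```

Anything that cannot be expressed as a row like this is, by definition, a vanity metric.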
Governance on one page: control without suffocation
AI governance is not a paperwork factory; it’s the operating system for safe speed. Show the board that your guardrails are threaded through the stack, not bolted on at the end.
Policy-as-code: Access, data use, retention, and action permissions defined in code and enforced at runtime, aligned to governance and compliance in AI-driven enterprises.
Risk register tied to telemetry: Each material risk has a monitor (bias, drift, abuse, cost runaway) and an owner—grounded in a practical control model like AI-driven enterprise compliance.
Model cards: Purpose, training data, evaluation suite, known limits, and rollback plan documented for transparency—supported by clear governance patterns and documentation discipline in data governance for trusted AI.
Human-in-the-loop points: Where humans approve, override, or audit actions—especially critical as autonomy increases, as covered in governance and compliance.
Change management: Stage gates from idea → sandbox → pilot → limited production → scale, with clear acceptance criteria—best operationalized through a formal AI Center of Excellence.
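To make "policy-as-code, enforced at runtime" concrete, here is a toy sketch of an action-permission check an agent would pass through before acting. The policy names, actions, and thresholds are entirely illustrative; a real implementation would use a policy engine, but the shape is the same: default-deny, explicit rules, and an "escalate" outcome that is the human-in-the-loop point.

```python
# Illustrative policy table: what an agent may do, and when a human must approve.
POLICY = {
    "refund": {"max_amount": 500, "requires_human_above": 200},
    "data_export": {"allowed_roles": {"analyst", "admin"}},
}

def authorize(action: str, role: str, amount: float = 0.0) -> str:
    """Return 'allow', 'escalate' (human-in-the-loop), or 'deny'."""
    rule = POLICY.get(action)
    if rule is None:
        return "deny"              # default-deny for unknown actions
    if action == "refund":
        if amount > rule["max_amount"]:
            return "deny"          # outside the agent's permitted envelope
        if amount > rule["requires_human_above"]:
            return "escalate"      # a human approves or overrides
        return "allow"
    if action == "data_export":
        return "allow" if role in rule["allowed_roles"] else "deny"
    return "deny"
```

Because every decision returns an explicit verdict, each call can also be logged as an audit-ready artifact, which is what ties this back to the risk register.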
The message: we can move fast because we can prove control.
Funding the mix: 70/20/10 that boards understand
Use a portfolio allocation directors already recognize:
70% Run/Everyday AI: Back and front office efficiency, scaled where feasibility is high and payback is within 12 months.
20% Grow/Directional Bets: Extending proven patterns into adjacent domains; often yellow on the feasibility map.
10% Transform/Moonshots: High-uncertainty, high-upside moves in product or core capabilities with strict stage gates.
Two pro tips:
Commit to rebalancing quarterly based on evidence. If a moonshot clears feasibility, it can earn more budget. If an everyday program hits diminishing returns, harvest and redirect.
Share capacity, not just cash. Boards respond well when you show the talent and partner capacity allocated and how that unlocks throughput.
If you need a repeatable way to run that cadence, use the same prioritization mechanics outlined in prioritizing technical projects strategically.
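The allocation and the quarterly rebalance are simple arithmetic, which is part of why boards trust the frame. A sketch, with hypothetical bucket names and amounts:

```python
def allocate(total_budget: float, mix=(0.70, 0.20, 0.10)) -> dict:
    """Split a budget into Run/Grow/Transform buckets (70/20/10 by default)."""
    run, grow, transform = mix
    assert abs(run + grow + transform - 1.0) < 1e-9, "mix must sum to 100%"
    return {"run": total_budget * run,
            "grow": total_budget * grow,
            "transform": total_budget * transform}

def rebalance(buckets: dict, shift: float, frm: str, to: str) -> dict:
    """Quarterly, evidence-based rebalance: move `shift` between buckets,
    e.g. a moonshot that cleared feasibility earning more budget."""
    out = dict(buckets)
    if shift > out[frm]:
        raise ValueError("cannot shift more than the source bucket holds")
    out[frm] -= shift
    out[to] += shift
    return out

q1 = allocate(10_000_000)                         # 70/20/10 split of a $10M program
q2 = rebalance(q1, 500_000, "run", "transform")   # a moonshot earned more budget
```

The total never changes during a rebalance; only the evidence-driven split does, which is exactly the story the board wants to audit.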
The one-slide board pack (what to bring, exactly)
You can run a crisp board session with five visuals:
AI Portfolio Map: The four-quadrant circle with feasibility rings and each initiative labeled by function.
Value Tree: How each initiative rolls up to the P&L or moat (cost-to-serve ↓, CSAT ↑, churn ↓, ARR ↑).
Feasibility Table: Technical/Internal/External scores with owners and near-term dependencies.
KPI Scorecard: The five categories above with current baseline, target, confidence, and time horizon.
Governance & Risk Matrix: Policy-as-code coverage, monitoring, human-in-the-loop points, and stage gate status.
If you only have time for one, bring the Portfolio Map and talk through the others verbally. It creates the frame for every question—and it pairs cleanly with a board-level AI business briefing so the conversation stays on outcomes, feasibility, and control.
Common pitfalls—and how to avoid them
Laundry lists of use cases. Group by quadrant and speak in outcomes, not features.
Over-rotating on model talk. Keep architecture in the appendix; lead with value, feasibility, and control.
Skipping external readiness. A product AI with no pricing, packaging, or early design partners is theater.
No change management. If process owners aren’t signed up, your model is a slide, not a solution.
Measuring vanity metrics. If it doesn’t show up in cost, velocity, quality, customer, or risk, cut it.
AI isn’t a monolith. It’s a portfolio of capabilities that, when sequenced and governed well, rewires how your business creates value. The board doesn’t need a tour of your models; they need to see how AI advances the moat, how you’ll deliver safely, and how you’ll know it’s working.
Bring the portfolio map. Score feasibility with honesty. Anchor the discussion in five KPI categories. Thread governance through the stack with policy-as-code and model cards. Fund a disciplined 70/20/10 mix and rebalance with evidence. Do that, and the conversation shifts from hype management to capital allocation. That’s how you win funding—and, more importantly, how you build an AI program the whole company can trust.
Q&A (Board-Ready)
1) What exactly are we funding when we approve “AI”?
A portfolio with two speeds—not a single initiative. “Everyday AI” delivers measurable efficiency and discipline; “game-changing AI” earns the right to scale through clear learning milestones. This sequencing is the core of everyday vs. game-changing bets.
2) How do we avoid pilot theater—proof-of-concept after proof-of-concept?
Use a feasibility gate and scale only what clears it. The discipline looks like a living portfolio with explicit tradeoffs, not a backlog of wishes—modeled in prioritizing technical projects strategically.
3) What should we measure at board level so we get signal, not dashboard noise?
Tie every initiative to one of five categories: financial impact, velocity, quality, customer outcomes, or risk posture—then baseline and track. This “insight-to-action” cadence is reinforced in closing the insight-to-action gap.
4) What governance is “enough” to move fast without creating unacceptable risk?
Governance that ships with the work: clear owners, enforceable controls, and audit-ready evidence from day one—not policy PDFs after the fact. Use governance and compliance in AI-driven enterprises as the operating model.
5) What’s the most common reason AI fails at scale even when the model is “good”?
Data that isn’t trustworthy, complete, or governed—so results aren’t reliable and adoption stalls. Fix the foundation with data governance for trusted AI and the discipline behind trustworthy insight.



