A few days ago, after reading my post on bridging the AI literacy gap, an HR leader approached me with a pointed question: “We’ve started with AI literacy workshops. People are curious and energized. But the next challenge feels harder: how do we turn that energy into structured experimentation that drives results, without overwhelming teams, triggering resistance, or increasing risk?”
It’s a question I’ve heard often. AI literacy is a crucial starting point—but it’s just that: a start. Eventually, the conversation shifts from understanding AI to operationalizing it. And that’s where many organizations stall. Not because they lack tools, but because they lack a playbook.
The Missing Middle in AI Adoption
Most GenAI resources fall into two buckets: high-level strategy frameworks that sound impressive but lack tactical guidance, or vendor-specific implementation guides that prioritize technology over transformation. Neither addresses what I call the “missing middle” – the bridge between AI awareness and functional adoption.
While this gap exists in every function experiencing AI transformation, it is particularly acute in HR, where AI applications touch both operational systems and human trust. When launching transformation initiatives, teams often optimize for capability first, and only later discover the hidden costs in cultural resistance, governance challenges, and fragmented experiences. As I discussed this with the HR leader, I developed a blueprint for championing adoption in the HR function through the lens of jobs-to-be-done. Instead of asking, “Where can we plug in AI?” we ask, “What job is HR hiring AI to do?”
Start with Jobs, Not Tools
One shift that’s made all the difference in the transformation work I’ve led, whether in agile, digital growth, or operating model redesign, is starting with the job to be done, not the technology. That same shift works here. Jobs-to-Be-Done (JTBD) helps frame opportunities in terms of outcomes:
- When HR is screening resumes, the job is identifying qualified candidates efficiently
- When HR responds to policy questions, the job is scaling consistent guidance
- When HR onboards new hires, the job is accelerating integration into culture and systems
This reframing matters because it connects technology decisions to human and business outcomes. It’s the difference between implementing an “AI chatbot” and designing a “policy guidance system that scales HR expertise.”
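To make the reframing concrete, here is a minimal sketch in Python (my own illustration; the field names and example values are hypothetical, not a prescribed schema) of a use-case record that captures the job and the outcome rather than the tool:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """A GenAI opportunity framed as a job-to-be-done, not a tool."""
    job: str             # the outcome HR is "hiring" AI to deliver
    current_process: str # how the job gets done today
    success_metric: str  # how we'll know the job is done well

# The policy Q&A example from above, framed by outcome rather than tool
policy_qa = UseCase(
    job="Scale consistent policy guidance to every employee",
    current_process="HR answers the same policy emails one by one",
    success_metric="Answer accuracy and time-to-first-response",
)
```

Framing the record this way means the success metric and the stakeholders are settled before any vendor conversation starts.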
Making Pilots Matter Through Strategic Scoring
The most successful pilot programs I’ve observed use a multi-dimensional scoring framework to select use cases worth pursuing. With GenAI pilots, the scoring criteria I consider are:
- Repetition Volume: How frequently does this task recur?
- Text/Data Intensity: How much natural language or structured data is involved?
- Risk Sensitivity: What is the compliance and brand exposure?
- User Friction: How painful is the current process for stakeholders?
- Time Cost: How much manual effort does it consume?
- Value if Optimized: What’s the potential impact on business outcomes?
When applied rigorously, this scoring often surfaces opportunities that balance quick wins with strategic value. For example, the HR policy Q&A use case typically scores higher than a resume screening assistant or an interview scheduling automation use case.
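As an illustration only, here is a minimal sketch of how that scoring might be operationalized. The weights, the 1-5 ratings, and the negative weighting of risk are hypothetical assumptions of mine, not figures from actual pilots:

```python
# Hypothetical weighted scoring for GenAI pilot selection.
# Each dimension is rated 1-5 by the team; risk counts against a use case.
WEIGHTS = {
    "repetition_volume":   1.0,
    "text_data_intensity": 1.0,
    "risk_sensitivity":   -1.5,  # compliance/brand exposure lowers priority
    "user_friction":       1.0,
    "time_cost":           1.0,
    "value_if_optimized":  1.5,
}

def priority(scores: dict[str, int]) -> float:
    """Weighted sum across the six dimensions."""
    return sum(WEIGHTS[dim] * rating for dim, rating in scores.items())

candidates = {
    "HR policy Q&A": dict(repetition_volume=5, text_data_intensity=5,
                          risk_sensitivity=2, user_friction=4,
                          time_cost=4, value_if_optimized=4),
    "Resume screening": dict(repetition_volume=4, text_data_intensity=4,
                             risk_sensitivity=5, user_friction=3,
                             time_cost=4, value_if_optimized=4),
}

for name, scores in sorted(candidates.items(),
                           key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: {priority(scores):.1f}")
```

The arithmetic matters less than the conversation it forces: the team has to argue explicitly about why one pilot outranks another.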
The Four-Phase Implementation Engine
Moving from use case selection to real transformation requires structure. In my experience, a four-phase iterative approach that balances speed with risk management sets pilot experimentation up for success (a sketch of the Decide-phase check follows the list):
- Discover (Weeks 1-2): Map the jobs-to-be-done, identify metrics that matter, and align stakeholders on what success looks like.
- Design (Weeks 3-4): Co-create workflows with teams and employees, establish risk guardrails with Legal and IT, and build change enablement mechanisms.
- Deliver (Weeks 5-10): Launch tightly scoped pilots with instrumented feedback loops, adjust based on usage patterns, and manage the tension between adoption and governance.
- Decide (Weeks 11-13): Evaluate outcomes against predetermined KPIs, make evidence-based decisions to scale or pivot, and capture organizational learning regardless of the outcome.
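To ground the Decide phase, here is a minimal sketch of an evidence-based scale-or-pivot check. The KPI names and thresholds are hypothetical placeholders; the real ones are whatever the team predetermined back in Discover:

```python
# Hypothetical KPI targets, agreed with stakeholders in the Discover phase.
KPI_TARGETS = {
    "weekly_active_users": 50,   # floor
    "answer_accuracy_pct": 90,   # floor
    "escalation_rate_pct": 15,   # ceiling: above this, pivot
}

def decide(results: dict[str, float]) -> str:
    """Scale only if every predetermined KPI is met; otherwise pivot."""
    met = (
        results["weekly_active_users"] >= KPI_TARGETS["weekly_active_users"]
        and results["answer_accuracy_pct"] >= KPI_TARGETS["answer_accuracy_pct"]
        and results["escalation_rate_pct"] <= KPI_TARGETS["escalation_rate_pct"]
    )
    return "scale" if met else "pivot"

print(decide({"weekly_active_users": 62,
              "answer_accuracy_pct": 93,
              "escalation_rate_pct": 11}))  # -> scale
```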
This approach is deliberately lean—generating meaningful insights in 90 days without massive investment or disruption. You don’t need a steering committee or a 12-month plan. You need a container for learning, iterating, and demonstrating responsible momentum.
Trust Is Infrastructure
Perhaps the most overlooked element in pilot implementations is how we build and maintain trust with stakeholders. In most organizations, trust isn’t just a nice-to-have: it’s infrastructure. Without it, even the most elegant AI solutions will struggle to gain adoption.
The most effective GenAI initiatives treat trust as a design requirement, not a communication challenge. This means:
- Involving stakeholders in problem definition, not just solution testing
- Making human oversight visible and accessible, not just present
- Creating feedback loops that demonstrably shape the system
- Designing for transparency in language, not just policy
When trust is built into the design process, resistance decreases and adoption accelerates. It’s the difference between pushing technology onto users and pulling them into the experience.
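To show what “feedback loops that demonstrably shape the system” can mean in practice, here is a minimal sketch (my own illustration; the fields and workflow are assumptions, not a prescribed design) of logging each piece of feedback with a visible reviewer and outcome:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Feedback:
    """One piece of user feedback on a GenAI answer, with a visible outcome."""
    answer_id: str
    comment: str
    reviewed_by: str | None = None   # human oversight, made explicit
    action_taken: str = "pending"    # e.g. "prompt updated", "doc re-indexed"
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

log: list[Feedback] = []
log.append(Feedback("ans-1042", "Cited last year's parental leave policy"))

# Closing the loop visibly: the reviewer and the fix are recorded,
# then published back to the person who raised the issue.
log[0].reviewed_by = "HR business partner on duty"
log[0].action_taken = "Policy doc re-indexed; answer re-verified"
```

Publishing the reviewer and the action taken back to users is what makes the oversight visible rather than merely present.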
From Literacy to Lift-Off
This post extends the thinking behind my AI Learning Stack into a practical playbook for action. If literacy is about understanding what’s possible, this is about designing what’s next.
Start with one function. One job-to-be-done. One measurable pilot.
Then learn, refine, and build the muscle to scale.
If you’re on this path—or thinking about it—I’d love to hear how you’re moving from learning to leading.
Want the comprehensive playbook for implementing GenAI in HR? I’ve developed a detailed guide with scoring matrices, stakeholder strategies, and ROI templates designed for practical application. Contact me for a copy.