Ignis AI: De-Risking a Pre-Revenue AI Product
Context
Ignis was building an AI-powered behavioral assessment platform to help companies improve hiring decisions through its proprietary PowerSkills assessments, which measured the key soft skills that matter in any workplace.
My Role
Where we started
The Challenge
Validate real business demand for, and trust in, AI-assisted hiring tools, especially our in-development PowerSkills assessment. We wanted to make sure we wouldn’t build an impressive but unadopted product.
Core Strategic Question
How do you drive adoption of an AI-driven hiring assessment in a market that distrusts AI, where the validation loop for judging whether a hire was successful is slow?
Three explicit business risks emerged
My role was to architect a validation system to systematically de-risk all three and lead my team to execute.
Phase I: Validating Real Pain (8 Interviews)
Participants
Director/VP-level HR and TA leaders across tech, finance, retail, and healthcare.
What We Learned
1. Time + Volume Overwhelm
Recruiters were buried in applications, a problem compounded by the rise of AI-generated resumes.
2. Trust & Bias Concerns
AI was seen as promising but risky:
3. Values Alignment > Technical Skill
Across interviews, hiring leaders prioritized:
It’s not that technical skills weren’t important; rather, they felt easier and more efficient to assess. Determining soft-skills fit was often challenging and time-consuming.
Phase II: Quantifying Opportunity Gaps (49 Survey Responses)
To avoid building from anecdotes alone, we ran an opportunity-gap survey and mapped the results as follows:

For each pain point: High importance + High dissatisfaction = MVP priority.
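To make that mapping concrete, here is a minimal sketch in Python, assuming the common opportunity-score formula (importance plus the unmet gap between importance and satisfaction). The exact formula, the pain points, and the ratings below are illustrative assumptions, not our actual survey instrument or data.

```python
# Minimal sketch: ranking pain points from importance/satisfaction ratings.
# Assumes each respondent rated both dimensions on a 1-5 scale.
# Opportunity score (assumed formula): importance + max(importance - satisfaction, 0)
from statistics import mean

# Hypothetical (importance, satisfaction) ratings per respondent, per pain point
responses = {
    "screening high application volume": [(5, 2), (4, 1), (5, 2)],
    "assessing soft-skills fit":         [(5, 2), (5, 3), (4, 2)],
    "scheduling interviews":             [(3, 4), (4, 4), (3, 3)],
}

def opportunity_score(ratings):
    imp = mean(r[0] for r in ratings)  # average importance
    sat = mean(r[1] for r in ratings)  # average satisfaction
    return imp + max(imp - sat, 0)     # big unmet gaps raise the score

# High importance + high dissatisfaction floats to the top of the MVP list
for pain in sorted(responses, key=lambda p: opportunity_score(responses[p]), reverse=True):
    print(f"{pain}: {opportunity_score(responses[pain]):.1f}")
```

Pain points with high average importance and a large dissatisfaction gap rise to the top of the ranking, which is exactly the quadrant we treated as MVP priority.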
Outcome
This significantly narrowed MVP scope and reduced distraction.
Trust as the Central Adoption Constraint
The most important insight: recruiters were open to AI, but only as decision support, not as the decision maker. Trust would need to be built over time, so we ran structured concept testing to understand whether our applicant-scoring system gave recruiters actionable results.
Recruiter Results View Concept Testing (16 Sessions)
Research Goals
What We Found
Product Changes Driven by Research
These changes aligned the product with recruiters’ mental models rather than academic assessment logic.


Design Efficiency
To increase the velocity of iteration and research under time and runway pressure, I:
Built a Pre-Recruiting Pipeline
Instead of recruiting from scratch for each study, we:
This removed the typical 1–2 week recruiting delay per study and let us start building the relationships that ultimately turned into design partnerships.

Compressed Prototype Cycles


Established Cross-Functional Cadence
Adoption Signal
While long-term hiring-outcome data would take 3+ months to collect, early signals showed strong interest once transparency and workflow fit were addressed.
What I’d Do Differently
If starting over, or as CEO