
How Stacked SP Hit 85%+ Candidate Match Accuracy with AI-Driven Sourcing
We built Stacked SP a concurrent AI matching engine that delivers 85%+ accurate candidate shortlists in hours instead of weeks, replacing manual LinkedIn sourcing with an automated pipeline that hits 2x the industry match rate.
“You put in a job and a handful of hours later you got a list of a few hundred great candidates ready to go into campaigns. Huge savings, huge.”
- Client: Stacked SP
- Industry: Venture-backed tech recruiting
- Founded: 2012
- Project: Concurrent AI-driven candidate matching engine
85%+ candidate match accuracy
| Metric | Before | After |
|---|---|---|
| Match accuracy | 40% to 50% (industry typical) | 85%+ |
| Shortlist turnaround | Weeks of manual sourcing | Hours after job posted |
| Candidates per shortlist | Manual selection, low volume | Few hundred pre-scored |
| Sourcing review time | High (noise filtering) | Low (ready for outreach) |
| Engineer hours required | Ongoing, bottlenecked | Minimal, pipeline automated |
Key Takeaways
- We helped Stacked SP achieve 85%+ match accuracy vs a typical 40% to 50% industry baseline by running enrichment in parallel with the initial matching pass, not as a downstream step.
- Shortlist turnaround dropped from weeks of manual work to hours of automated output. The same engine runs continuously across new job orders.
- We built the system on top of Stacked SP's existing sourcing workflow rather than replacing it. That protected every dollar of prior investment in their operations.
- Accuracy depends heavily on job description quality. A vague brief ("5+ years, team player") drops accuracy to roughly 60%. A structured brief holds the 85%+ line.
- We deployed the engine on Clay for data enrichment and Supabase for the candidate store, with custom matching logic we wrote on top of those layers.
Is There a Catch? What This Does Not Solve
A word of caution before the rest of the case study: this system does not work well when the inputs are vague. If a client brief reads "5+ years of experience, team player, self-starter," we have seen accuracy drop to roughly 60%. The engine has no way to score fit against criteria the brief did not specify.
The fix is upstream. Every engagement we run with Stacked SP starts with a structured requirements call before the matching engine fires. That call is where the 85%+ accuracy is actually earned. Skip this step, and no amount of AI will save the shortlist.
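To make the "structured brief" idea concrete, here is a minimal sketch of what the requirements call produces versus a vague one. The schema and field names are illustrative assumptions, not Stacked SP's actual intake format:

```python
from dataclasses import dataclass

@dataclass
class JobBrief:
    # Illustrative schema; these field names are ours, not Stacked SP's.
    title: str
    must_have_stack: list       # concrete technologies, e.g. ["Go", "Postgres"]
    min_years: int
    company_stage: str          # e.g. "seed", "series-b"
    seniority_signal: str       # e.g. "led a team", "staff+ IC"

def is_scoreable(brief: JobBrief) -> bool:
    # The engine can only score fit against criteria the brief actually names.
    return bool(brief.must_have_stack) and bool(brief.company_stage)

# "5+ years, team player" gives the engine nothing concrete to score against.
vague = JobBrief("Engineer", [], 5, "", "")
structured = JobBrief("Backend Engineer", ["Go", "Postgres"], 5, "series-b", "led a team")
print(is_scoreable(vague), is_scoreable(structured))  # False True
```

The point of the sketch: accuracy is gated by how many concrete, scoreable criteria survive the intake call, which is why the requirements call happens before the engine fires.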
This also does not replace human judgment at the top of the funnel. Ilan's team still defines what "champion engineer" means per client. The engine handles volume. Humans handle meaning. That split is the point.
How We Solved It for Stacked SP
Can concurrent matching beat a traditional pipeline?
Most sourcing tools run as pipelines. You search. You get a list. You enrich. You score. You deliver. Each stage waits for the previous one to finish, and the whole thing takes hours or days for a serious job.
We built Stacked SP's engine as a concurrent system instead. When a job brief comes in, the search layer and the enrichment layer run at the same time across a large candidate pool. That is the single architectural choice that compresses the timeline from weeks to hours. We deployed it on Clay for the data layer and Supabase for the candidate store, with custom matching logic written on top of those services.
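The concurrent pass can be pictured roughly like this. All function names and the streaming shape are illustrative assumptions, not Stacked SP's production code:

```python
import asyncio

async def search_candidates(brief):
    # Illustrative: yield candidates as the search layer surfaces them.
    for i in range(3):
        await asyncio.sleep(0)          # stands in for real search I/O
        yield {"id": i}

async def enrich(candidate):
    # Illustrative: pull deeper signals (stack, seniority, trajectory).
    await asyncio.sleep(0)              # stands in for real enrichment I/O
    return {**candidate, "enriched": True}

async def build_shortlist(brief):
    # Enrichment starts per candidate as soon as search surfaces them,
    # instead of waiting for the full search pass to finish.
    tasks = []
    async for candidate in search_candidates(brief):
        tasks.append(asyncio.create_task(enrich(candidate)))
    return await asyncio.gather(*tasks)

shortlist = asyncio.run(build_shortlist({"role": "senior backend engineer"}))
print(len(shortlist))  # 3
```

The design choice to note is that scoring still waits on enrichment completing; the concurrency is between search and enrichment, which is where the weeks-to-hours compression comes from.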
"You put in a job, and a handful of hours later you got a list of a few hundred great candidates ready to go into campaigns. Huge savings, huge."
Ilan Saks, CEO, Stacked SP
Why hours and not minutes? Because the scoring logic waits for the enrichment data to land before it scores. Enriching a few hundred candidates with real depth (commit history, company moves, seniority signals) takes time. We chose to let the system wait for real data rather than score on thin signals. The tradeoff was worth it at the 85% accuracy bar.
Why does deep enrichment close the 40% to 85% gap?
The accuracy gap between 40% and 85% is not a model problem. It is a data depth problem.
Most automated sourcing tools match on surface signals: title, keywords, current company, maybe a years-of-experience field. That approach produces a lot of "software engineer with the right keyword" matches who have never touched the actual stack the client needs. Noise.
We enrich candidates in parallel with the initial match pass. For every candidate surfaced, the engine pulls deeper context: tech stack signals from their public work, company stage fit, seniority indicators from their trajectory, behavioral signals around when they last moved roles. That context flows back into the scoring layer before the shortlist is delivered.
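A weighted scorer over those enriched signals might look like the sketch below. The weights and signal names are assumptions for illustration, not Stacked SP's production logic:

```python
# Illustrative weights; the real system's signals and weighting are not public.
WEIGHTS = {
    "stack_overlap": 0.40,       # tech stack signals from public work
    "seniority_fit": 0.25,       # trajectory-based seniority indicators
    "company_stage_fit": 0.20,   # e.g. seed vs. series-B experience
    "recency_of_move": 0.15,     # behavioral signal: when they last moved roles
}

def score(signals):
    # Each signal is normalized to [0, 1]. Missing signals score 0 rather
    # than being guessed, so thin data cannot inflate a match.
    return sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())

candidate = {
    "stack_overlap": 0.9,
    "seniority_fit": 0.8,
    "company_stage_fit": 1.0,
    "recency_of_move": 0.5,
}
print(round(score(candidate), 3))  # 0.835
```

Treating missing signals as zero rather than guessing is the same tradeoff described above: the system waits for real data instead of scoring on thin signals.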
"We were able to do that with 85% plus accuracy, which is pretty amazing. There's a lot of tools out there doing it with 40%, 50% accuracy. This was 85%, which is incredible."
Ilan Saks, CEO, Stacked SP
The practical downstream effect matters more than the accuracy number. When every candidate on the shortlist is actually a fit, Stacked SP's recruiters spend their calls closing great people instead of filtering out noise.
Sourcing cost per placement drops. Client happiness goes up. The team takes on more business without breaking their capacity model. That is the real ROI, and it tracks with what Gartner recruiting research has shown about match quality being a bigger lever than raw sourcing volume as of March 2026.
Why does a consultative build beat a vendor handoff?
Ilan reached out to us after talking to a handful of AI vendors who all promised transformation and none of whom could explain the tradeoffs. He did not need a pitch. He needed a thought partner who had lived in the trenches of recruiting long enough to know which levers actually move the business and which ones are shiny distractions.
Our engagement started with a diagnostic. What is working in the current sourcing process? What is broken? Where does the team burn hours on low-ROI work?
That diagnostic shaped the engine's architecture before we wrote a single line of code. Ilan's team was never handed a black box. They understood what we were building, why, and how it would compound over time.
"It's great that you're not a yes man. You come with your own set of expertise and your own experiences, and you were able to educate me and guide us on the right path. AI and automation is not a magic bullet. It's not going to transform your business overnight. It's a work in progress."
Ilan Saks, CEO, Stacked SP
The consultative framing also shaped the definition of ROI. Instead of chasing vanity metrics, we focused on match accuracy, shortlist turnaround, and downstream placement rate. The numbers that actually drive Stacked SP's P&L. Ilan wanted short-term ROI, but he also wanted to build something that would hold up for the next 15 years. Our job was to make sure those two goals did not conflict.
What Is Next for Stacked SP?
With the matching engine validated on live jobs and delivering 85%+ accuracy, Stacked SP is scaling the system across its full VC-backed client portfolio. The next phase focuses on three things: expanding the concurrent architecture to handle higher search volume without accuracy loss, wiring the engine into Stacked SP's outreach layer so qualified candidates move into campaigns automatically, and tightening the feedback loop from client placements back into the scoring model. Ilan's long-term thesis is that the agencies that reinvent themselves around AI and automation over the next 15 years will look nothing like the agencies that did not. Stacked SP intends to be in the first group, and we are building toward that.
Why This Matters for You
If you run a recruiting agency, the tradeoff between sourcing speed and match quality is probably the biggest lever in your P&L. Every hour your recruiters spend filtering noise out of a keyword-matched list is an hour they are not closing great candidates. Every noisy shortlist you send to a client is a dent in your retention.
The Stacked SP build shows three things worth stealing. First, match accuracy is a data depth problem, not a model problem. Running enrichment in parallel with the initial match pass closes the 40% to 85% gap. Second, you do not need to rebuild your stack. The engine layered on top of Stacked SP's existing sourcing workflow, so prior investment compounded instead of getting written off. Third, a consultative approach beats a transactional vendor handoff because the hardest part of the build is not the code, it is getting crystal clear on what "great candidate" means for each client.
We have deployed variations of this architecture across 110+ agency engagements since 2022. If you are a venture-backed tech recruiting agency looking to reinvent sourcing without losing your quality signal, we would love to talk.
Learn More
- How AI-native agencies operate differently (Effi Flo blog)
- Our candidate matching architecture in depth
- More Effi Flo case studies
- Behind the Flo: how Effi Flo was built
- Clay (the data enrichment platform we use)
- Supabase (the candidate store layer)
- Bullhorn Grid 2026 staffing industry report
- Gartner research on recruiting automation trends
Sources
- Stacked SP client interview transcript, February 2026
- Effi Flo internal deployment data, 110+ agency engagements since 2022
- Bullhorn Grid 2026 staffing report
- Gartner recruiting technology research, Q1 2026
- Ilan Saks quotes verified from recorded client interview
Frequently Asked Questions
How long did the Stacked SP matching engine take to build?
How is 85% match accuracy actually measured?
How is this different from standard AI sourcing tools like LinkedIn Recruiter or HireEZ?
Can a smaller recruiting agency use the same approach?
What tools does the Effi Flo matching engine use under the hood?
When does this engine NOT work?
Last updated: April 10, 2026
Want results like Stacked SP's?
We only take engagements where we can name the client and publish the result after launch.
Talk to Effi Flo
Other case studies

