
AI Intelligence Platform for Talent Marketplace

With Lens, an AI tool developed by HashRoot, recruiters cut screening time to under one minute per query and deliver faster, smarter hiring for clients.


Client Overview


RapidBrains, our group company specializing in global talent deployment, faced a significant challenge in managing the exponential growth of applications received through emails, website submissions, and direct database uploads, a volume that had surpassed 500,000 resumes. Despite the capability of the in-house recruitment team, the sheer volume of data proved unmanageable. Recruiters were compelled to spend extensive hours on manual tasks such as opening, reviewing, and shortlisting profiles. The situation became even more complex when clients required niche technical positions demanding highly specific skills. Ultimately, the manual process was no longer sustainable in meeting client expectations.

To address this challenge, RapidBrains collaborated with HashRoot to develop Lens, an AI-driven intelligence platform designed to streamline candidate screening, enhance precision, and enable recruiters to focus on strategic hiring priorities.

Problem Statement


Recruiters at RapidBrains described their daily workflow as “a constant race against time.”

  • Manual Screening – Recruiters spent 4–6 hours daily simply opening resumes and screening them against client requirements.
  • Low Precision – Several of the shortlisted candidates turned out not to meet the technical requirements, so the process had to be repeated.
  • Scalability Issues – With thousands of resumes arriving each week, the team could not scale this process without exhausting itself.

As one of the recruiters explained:


"By the time I had found a suitable one, I had wasted half a day already. And if the client was looking for some special skill, it was like searching for a needle in a haystack."

Solution Approach


The solution wasn’t about deploying random AI tools; it required thoughtful decisions, strategic trade-offs, and continuous iteration.

Why LangChain + FastAPI?

We compared Django and n8n for orchestration but chose LangChain + FastAPI. LangChain offered modular integrations for LLM workflows, while FastAPI kept the API lightweight and fast.

Why ChromaDB + PostgreSQL?

We needed both semantic search (to understand recruiter prompts) and structured search (to filter by fields like location, years of experience). Using ChromaDB for vector embeddings alongside PostgreSQL for structured queries gave us the best of both worlds.
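To make the hybrid idea concrete, here is a minimal, self-contained sketch of "structured filter first, semantic ranking second." It stands in for the real ChromaDB + PostgreSQL split with in-memory dictionaries and a plain cosine similarity; the field names and candidate data are illustrative, not taken from the production system.

```python
import math

def cosine(a, b):
    # Plain cosine similarity, standing in for a ChromaDB vector query.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query_vec, filters, candidates, top_k=3):
    """Filter structurally first (the cheap SQL-style step),
    then rank the survivors semantically."""
    survivors = [
        c for c in candidates
        if all(c["fields"].get(k) == v for k, v in filters.items())
    ]
    survivors.sort(key=lambda c: cosine(query_vec, c["embedding"]), reverse=True)
    return [c["name"] for c in survivors[:top_k]]

candidates = [
    {"name": "A", "fields": {"location": "Dubai"}, "embedding": [1.0, 0.0]},
    {"name": "B", "fields": {"location": "Dubai"}, "embedding": [0.6, 0.8]},
    {"name": "C", "fields": {"location": "Kochi"}, "embedding": [1.0, 0.0]},
]
print(hybrid_search([1.0, 0.0], {"location": "Dubai"}, candidates))  # ['A', 'B']
```

Doing the structured filter before the vector ranking keeps the expensive similarity computation off candidates that would be excluded anyway, which is the same reason the production system pairs PostgreSQL with ChromaDB rather than relying on embeddings alone.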

Why LLaMA (on Groq) over OpenAI?

OpenAI was accurate but costly at scale. LLaMA 3.2, served through Groq, provided cost-efficient inference. We still kept OpenAI as a fallback for complex queries. This hybrid approach balanced cost and quality.
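The routing decision can be sketched as a small pure function. The complexity heuristic and model labels below are hypothetical stand-ins for the production logic, which is not described in detail here; the point is only the shape of the primary/fallback split.

```python
# Hypothetical router sketch: heuristic and model names are illustrative.
PRIMARY = "llama-3.2 (Groq)"
FALLBACK = "gpt (OpenAI)"

def looks_complex(prompt: str) -> bool:
    # Crude proxy for "complex query": long prompts or many clauses.
    return len(prompt.split()) > 40 or prompt.count(",") > 5

def pick_model(prompt: str, primary_failed: bool = False) -> str:
    """Route cheap/simple queries to Groq-hosted LLaMA; escalate the rest."""
    if primary_failed or looks_complex(prompt):
        return FALLBACK
    return PRIMARY

print(pick_model("Find senior React developers in Dubai"))  # llama-3.2 (Groq)
```

A retry wrapper would call `pick_model(prompt, primary_failed=True)` when the primary backend errors or times out, so a single bad response never blocks a recruiter's search.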

Execution & Milestones


Milestone timeline:

  • Week 1 – Requirement Analysis & Stack Finalization: Compared multiple frameworks; finalized LangChain, FastAPI, ChromaDB, and LLaMA.
  • Week 2 – Parsing & Embedding Pipeline: Integrated PDF/DOCX/email ingestion and added OCR for scanned resumes. Early tests showed parsing large resumes (>20 pages) was slow, requiring pipeline optimization.
  • Week 3 – Search & Ranking Engine: Rolled out hybrid retrieval (semantic + keyword). Initial recruiter tests flagged irrelevant matches, which led us to refine chunking and embeddings per resume section.
  • Week 4 – Frontend & Recruiter Interface: Built a Next.js dashboard and added a feedback loop so recruiters could “thumbs up/down” search results for continuous improvement.
  • Week 5 – Security & Deployment: GDPR-compliant encryption, RBAC with JWT, and Kubernetes scaling for production.
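The Week 3 refinement, chunking embeddings per resume section rather than by fixed-size windows, can be sketched as a heading-based splitter. The heading list below is a guess at common resume sections, not the production taxonomy.

```python
import re

# Illustrative section list; the real pipeline's taxonomy may differ.
SECTION_HEADINGS = ("experience", "education", "skills", "projects", "summary")

def chunk_by_section(text: str) -> dict:
    """Split resume text into sections keyed by heading, so each chunk is
    embedded with coherent context instead of an arbitrary window."""
    pattern = re.compile(rf"^({'|'.join(SECTION_HEADINGS)})\s*$",
                        re.IGNORECASE | re.MULTILINE)
    parts = pattern.split(text)
    # parts = [preamble, heading1, body1, heading2, body2, ...]
    chunks = {"header": parts[0].strip()}
    for heading, body in zip(parts[1::2], parts[2::2]):
        chunks[heading.lower()] = body.strip()
    return chunks

resume = "Jane Doe\nSkills\nPython, SQL\nExperience\n5 years at Acme"
print(chunk_by_section(resume))
```

Embedding each section separately means a query like "5 years of Python" matches against the experience chunk on its own, rather than being diluted by unrelated text elsewhere in the resume.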

Delays: Parsing mixed-format resumes (scanned PDFs especially) caused a week's delay until OCR + fallback handling stabilized.
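The fallback handling that resolved the delay amounts to a per-page decision: keep the native text layer when it exists, and switch to OCR output when a page is effectively image-only. The sketch below illustrates that decision with stand-in inputs; the real pipeline's PyMuPDF and Tesseract calls are not reproduced here, and the threshold is a made-up value.

```python
MIN_CHARS_PER_PAGE = 50  # illustrative threshold for "scanned/image-only" pages

def merge_text_layers(native_pages: list, ocr_pages: list) -> list:
    """Prefer each page's native text layer; fall back to OCR when sparse.

    native_pages: text extracted per page (e.g. via PyMuPDF)
    ocr_pages:    OCR output for the same pages (e.g. via Tesseract)
    """
    merged = []
    for native, ocr in zip(native_pages, ocr_pages):
        merged.append(native if len(native.strip()) >= MIN_CHARS_PER_PAGE else ocr)
    return merged

native = ["John Smith, Senior DevOps Engineer with eight years of experience.", "   "]
ocr = ["(ocr) page 1", "(ocr) Skills: AWS, Terraform, Kubernetes"]
print(merge_text_layers(native, ocr))
```

Checking page by page, rather than per document, handles the common mixed case where a typed resume has one scanned certificate page appended.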

Results


The transformation was visible within weeks of rollout:

  • Resume Screening Time → Cut from 4–6 hours per recruiter per day to under 1 minute per query.
  • Recruiter Productivity → Recruiters reported a 70% reduction in time spent manually filtering.
  • Precision of Matches → Recruiter feedback showed ~40% improvement in relevance of shortlisted candidates.
  • Cost Optimization → Running LLaMA on Groq + caching common recruiter queries reduced inference costs by 40%.
  • Scalability → The system can now comfortably handle 10x the current resume volume without slowing down.
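Part of the 40% cost reduction came from caching common recruiter queries, which can be sketched as a normalized-prompt cache. The normalization rule and the stand-in LLM call below are illustrative assumptions, not the production implementation.

```python
import hashlib

def cache_key(prompt: str) -> str:
    # Normalizing case and whitespace lets trivially different phrasings
    # of the same common query hit the same cache entry.
    normalized = " ".join(prompt.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

_cache = {}
calls = {"llm": 0}

def fake_llm(prompt: str) -> str:  # stand-in for a Groq/OpenAI call
    calls["llm"] += 1
    return f"results for: {prompt}"

def answer(prompt: str) -> str:
    key = cache_key(prompt)
    if key not in _cache:
        _cache[key] = fake_llm(prompt)
    return _cache[key]

answer("Senior Python developer in Dubai")
answer("  senior PYTHON developer in Dubai ")  # same key after normalization
print(calls["llm"])  # 1
```

Since popular roles generate many near-identical prompts, even this simple exact-match-after-normalization cache avoids a large share of repeat inference calls before any semantic deduplication is attempted.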

Technologies Used


Languages: Python, JavaScript

Frameworks: LangChain, FastAPI, Next.js, Tailwind CSS, Celery

Databases: PostgreSQL, ChromaDB

LLMs: LLaMA 3.2 (Groq console), OpenAI GPT (fallback)

Parsing Tools: Tika, PyMuPDF, Tesseract OCR

Infra & Monitoring: Kubernetes, Docker, Prometheus, Grafana, ELK Stack

Conclusion


Building Lens for RapidBrains wasn’t just a tech exercise—it was about reclaiming recruiter time and improving match quality for clients.

  • Hybrid retrieval (semantic + keyword) worked better than relying on embeddings alone.
  • Recruiter feedback loops were critical. AI matching improved only after multiple iterations.
  • Cost efficiency came from balancing Groq-served open models (LLaMA) with cloud-based fallbacks (OpenAI).
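The feedback-loop lesson above can be sketched as a score adjustment: thumbs up/down nudges a per-candidate boost that is blended into future rankings. The boost step and blending rule here are hypothetical; the production loop may aggregate feedback differently.

```python
BOOST_STEP = 0.1  # illustrative weight per thumbs up/down

def record_feedback(boosts: dict, candidate_id: str, thumbs_up: bool) -> None:
    delta = BOOST_STEP if thumbs_up else -BOOST_STEP
    boosts[candidate_id] = boosts.get(candidate_id, 0.0) + delta

def rerank(scored: list, boosts: dict) -> list:
    """Blend model similarity scores with accumulated recruiter feedback."""
    return [cid for cid, _ in sorted(
        scored, key=lambda p: p[1] + boosts.get(p[0], 0.0), reverse=True)]

boosts = {}
record_feedback(boosts, "cand-2", thumbs_up=True)
record_feedback(boosts, "cand-2", thumbs_up=True)
print(rerank([("cand-1", 0.80), ("cand-2", 0.65)], boosts))  # ['cand-2', 'cand-1']
```

Keeping the feedback signal as an additive boost, rather than retraining anything, is the cheapest way to let recruiter judgment override near-miss similarity scores between iterations.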

Next steps: Fine-tuning the ranking engine further and exploring RAG (retrieval-augmented generation) to allow recruiters to “chat” directly with candidate databases.
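The RAG direction amounts to retrieving the most relevant resume chunks and grounding the LLM prompt in them. The sketch below uses naive keyword-overlap retrieval for self-containment; the real system would reuse the ChromaDB vector index, and the candidate data is invented for illustration.

```python
def retrieve(question: str, chunks: list, top_k: int = 2) -> list:
    # Naive keyword-overlap retrieval, standing in for a vector search.
    q = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, chunks: list) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "Candidate A: 6 years Kubernetes and AWS experience",
    "Candidate B: frontend specialist, React and Next.js",
    "Candidate C: PostgreSQL DBA, some Kubernetes exposure",
]
print(build_prompt("Who has Kubernetes experience?", chunks))
```

Because the model is instructed to answer only from the retrieved context, recruiters "chatting" with the database get responses tied to actual candidate records rather than free-form generation.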
