Published: 2026-04-01
10 min read

I Got Tired of Tailoring Resumes Manually — So I Open-Sourced a Solution


The Breaking Point

The advice is everywhere: personalize your resume for every job you apply to. A single generic resume limits your chances during screening. The problem is that doing this manually across hundreds of applications is exhausting.

I was on my 40th application when it hit me: I had spent more time tweaking my resume than actually preparing for interviews. I had a folder of ChatGPT and Claude conversations, each one asking the same thing — rate my fit for this role, rewrite my bullets, suggest keywords. It worked, but it was manual. Copy-paste the JD, wait for a response, copy-paste the result back into the resume generator (LaTeX, Word). Repeat for every new role.

Every job description was slightly different. Slightly different keywords, slightly different priorities, slightly different wording for the same role. And I was manually editing my resume each time — reordering bullet points, swapping skill emphasis, rewriting the summary to sound like I actually cared about this specific company's mission.

At some point I stopped asking "does this application need a tailored resume" and started asking "why am I doing this by hand."

I'm a developer. This is a data problem. My career history is structured data. A job description is structured requirements. Matching them is an algorithm, not a creative exercise.

So I built one. And I open-sourced it.

What I Actually Needed

I didn't need another resume builder with drag-and-drop templates and gradient backgrounds. I needed:

A single source of truth for my career data. One place where my experience, skills, and projects live. When I learn something new or finish a project, I update it once — not across five different resume versions. (Even manually updating can be automated — see the bonus tip at the end.) This is the same JSON data layer that powers my automated portfolio system — one set of files feeds both the website and all resume variants.

ATS-friendly output. Most resume builders produce PDFs with columns, icons, and fancy layouts that ATS parsers choke on. I wanted clean, single-column LaTeX output that actually gets read by the software on the other end.

Tailoring without starting over. The ability to paste a job description and get back a version of my resume that emphasizes the right things — without me manually rewriting each bullet.

Automation, not friction. If I have to install 12 dependencies and learn a new UI, I won't use it. The tool needs to work with how I already work: edit JSON, push to git, get a PDF. The simplest possible workflow. I cover the rest of my development tooling in Tools I Use as a Backend Engineer.

How the Tool Works

The architecture is simple:

JSON data → Jinja2 template → LaTeX → PDF

Your career data lives in plain JSON files:

data/
├── profile.json      # Name, title, bio, social links
├── experience.json   # Work history with bullet points
├── education.json    # Academic background
├── projects.json     # Highlighted projects
└── contact.json      # Email, location, phone

That's it. No database. No ORM. No migration scripts. Just JSON files you can read, edit, and version control.

The LaTeX template (templates/resume.tex.j2) is a Jinja2 template that injects your JSON data into a professional, single-column layout. Clean lines, no columns, no graphics — the kind of resume that looks good printed and parses correctly in every ATS.
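
The render step can be sketched in a few lines of Python. The file names come from the repo; the function itself and the custom Jinja2 delimiters are my assumptions — LaTeX-targeting templates usually swap out Jinja's default `{{ }}` syntax because it collides with LaTeX braces:

```python
import json
from pathlib import Path

from jinja2 import Environment, FileSystemLoader


def render_resume(data_dir: str = "data", template_dir: str = "templates") -> str:
    # Load each JSON file into the template context, keyed by its stem:
    # data/profile.json -> context["profile"], and so on.
    context = {
        p.stem: json.loads(p.read_text())
        for p in Path(data_dir).glob("*.json")
    }
    env = Environment(
        loader=FileSystemLoader(template_dir),
        # Assumed LaTeX-friendly delimiters (\VAR{...} instead of {{ ... }})
        # so Jinja markup doesn't clash with LaTeX's own braces and percent signs.
        block_start_string=r"\BLOCK{", block_end_string="}",
        variable_start_string=r"\VAR{", variable_end_string="}",
        comment_start_string=r"\#{", comment_end_string="}",
    )
    return env.get_template("resume.tex.j2").render(**context)
```

The returned string is plain LaTeX source; compiling it to PDF is a separate `pdflatex`/`latexmk` step.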

Two Ways to Use It

Option 1: CI/CD (zero setup)

Fork the repo, edit the JSON files, push to main. GitHub Actions picks up the change, renders the LaTeX, compiles the PDF, and attaches it to a release. You download it from the Releases page. No local installation. No dependencies. The same workflow can also be triggered manually from the repo's Actions tab. If you can edit a JSON file and push to git, you're done. JSON editing works on mobile too — handy for quick updates on the go.

Option 2: Local web UI (power users)

Run python customizer/server.py, open localhost:8000, and you get a live editor with real-time PDF preview. Edit JSON on the left, see the PDF update on the right. Hit "Save to Backend" to persist changes. This is what I use for nuanced edits when tailoring for specific roles.

The AI Tailoring Pipeline

This is the part that actually saves time.

The naive approach would be to dump the entire resume JSON and job description into one prompt and hope the LLM figures out what to change. It works sometimes. It also hallucinates metrics, drops experience entries, and occasionally changes company names. Not great. You can catch and fix those mistakes when you run the process manually in a chat UI, but they're unacceptable in an automated system.

The tailoring pipeline uses a 4-stage architecture that decomposes the problem:

Stage 1: JD Analysis       — Extract structured requirements from the JD
Stage 2: Match & Score     — Deterministic keyword matching, relevance scoring
Stage 3: Section Tailoring — Parallel LLM calls (profile, experience, projects)
Stage 4: Validate & Assemble — Schema validation, immutable field checks, eval metrics

Why Multi-Stage Beats Single Prompt

This isn't an architectural preference. It's a reliability problem.

A single mega-prompt asks the LLM to do six things at once: parse the JD, identify requirements, rate relevance, rephrase bullets, reorder sections, and return valid JSON. The more tasks you stack, the more each one degrades.

The multi-stage approach gives each step one job:

Stage 1 (JD Analysis): Converts raw job description text into structured requirements — skills, responsibilities, qualifications, each tagged with priority (must-have, nice-to-have, bonus). Uses instructor for Pydantic validation so the output is always well-formed. Temperature is 0.1 because this is extraction, not creative writing.
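
The Stage 1 contract can be sketched with a Pydantic schema. The class and field names below are my guesses, not the repo's actual models, but the shape matches what the post describes: instructor retries the LLM call until the response validates against the schema.

```python
from enum import Enum

from pydantic import BaseModel


class Priority(str, Enum):
    MUST_HAVE = "must-have"
    NICE_TO_HAVE = "nice-to-have"
    BONUS = "bonus"


class Requirement(BaseModel):
    text: str
    priority: Priority


class JDAnalysis(BaseModel):
    skills: list[Requirement]
    responsibilities: list[Requirement]
    qualifications: list[Requirement]


def analyze_jd(client, jd_text: str) -> JDAnalysis:
    # `client` is an instructor-patched, OpenAI-compatible client, e.g.
    #   client = instructor.from_openai(openai.OpenAI())
    # instructor validates the response against JDAnalysis and retries
    # on schema violations, so the output is always well-formed.
    return client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any BYOK provider's model works
        response_model=JDAnalysis,
        temperature=0.1,  # extraction, not creative writing
        messages=[{
            "role": "user",
            "content": f"Extract the structured requirements from this JD:\n{jd_text}",
        }],
    )
```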

Stage 2 (Match & Score): No LLM involved. Deterministic keyword matching between your resume and the extracted JD requirements. Produces a relevance score (1-10) with gap analysis — what you have, what you're missing. If relevance is below 2, the pipeline exits early. No point tailoring a resume for a role you're not qualified for.
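
A minimal sketch of what deterministic matching looks like. The weights and the mapping onto a 1-10 scale are illustrative; the repo's scorer presumably weighs sections and priorities with more nuance, but the point is that this stage is plain code, not a model call:

```python
def match_and_score(resume_keywords: set[str], requirements: dict[str, str]) -> dict:
    """requirements maps keyword -> priority ('must-have'/'nice-to-have'/'bonus')."""
    weights = {"must-have": 3, "nice-to-have": 2, "bonus": 1}
    matched, missing = {}, {}
    for kw, prio in requirements.items():
        # Case-insensitive exact match; no LLM, fully reproducible.
        (matched if kw.lower() in resume_keywords else missing)[kw] = prio
    total = sum(weights[p] for p in requirements.values()) or 1
    hit = sum(weights[p] for p in matched.values())
    score = round(1 + 9 * hit / total)  # map weighted coverage onto 1-10
    return {"score": score, "matched": matched, "missing": missing}
```

The `missing` dict is the gap analysis, and a score below the threshold triggers the early exit.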

Stage 3 (Section Tailoring): Three parallel LLM calls — one for profile, one for experience, one for projects. Each call gets only the data it needs and only the JD requirements relevant to that section. Temperature is 0.3. Rules are explicit: rephrase wording, don't fabricate achievements, preserve company names and dates exactly.
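
The fan-out in Stage 3 is straightforward concurrency. This sketch stubs out the actual LLM call so it stays runnable; in the real pipeline each `tailor_section` is a model call scoped to one section's data and requirements:

```python
import asyncio


async def tailor_section(name: str, section_data: dict, relevant_reqs: list) -> dict:
    # Placeholder for the per-section LLM call (temperature 0.3, with the
    # explicit rephrase-only rules). Returning the input unchanged keeps
    # the sketch runnable without an API key.
    return section_data


async def tailor_all(resume: dict, reqs_by_section: dict) -> dict:
    sections = ("profile", "experience", "projects")
    # Three independent calls run concurrently; each sees only its own
    # section and the JD requirements relevant to it.
    results = await asyncio.gather(*(
        tailor_section(name, resume[name], reqs_by_section.get(name, []))
        for name in sections
    ))
    return dict(zip(sections, results))
```

Because the calls share no state, a failure or retry in one section doesn't disturb the other two.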

Stage 4 (Validate & Assemble): Catches LLM mistakes before they reach you. Checks that immutable fields (company names, dates, locations, URLs) weren't mutated. Runs evaluation metrics — job alignment score, content preservation rate, hallucination detection. Auto-fixes any violations by restoring original values.
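
The immutable-field guard is the easiest part to illustrate. Field names here are assumptions based on the post, and the pairwise `zip` assumes entries stay in order; the real validator likely matches entries more carefully:

```python
IMMUTABLE_FIELDS = ("company", "start_date", "end_date", "location", "url")


def restore_immutables(original: list[dict], tailored: list[dict]) -> list[str]:
    """Compare entries pairwise; restore any protected field the LLM mutated.

    Returns a list of violations that were auto-fixed, for the eval report.
    """
    violations = []
    for orig, new in zip(original, tailored):
        for field in IMMUTABLE_FIELDS:
            if field in orig and new.get(field) != orig[field]:
                violations.append(f"{field}: {new.get(field)!r} -> {orig[field]!r}")
                new[field] = orig[field]  # auto-fix by restoring the original
    return violations
```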

The whole thing streams progress via Server-Sent Events. You see each stage complete in real-time instead of watching a spinner and guessing.
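
The SSE wire format itself is trivial, which is part of the appeal. Each event is a `data:` line followed by a blank line, and the browser's built-in EventSource API parses it with no extra libraries. A hypothetical event helper (the actual payload shape in the repo may differ):

```python
import json


def sse_event(stage: str, status: str) -> str:
    # Server-Sent Events framing: "data: <payload>" terminated by a blank
    # line. The client subscribes once and receives one event per stage.
    return f"data: {json.dumps({'stage': stage, 'status': status})}\n\n"
```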

BYOK — Bring Your Own Key

The tool doesn't lock you into any provider. You pick: OpenAI, Gemini, Cerebras, OpenRouter, Nvidia, or any other OpenAI-compatible API. You provide the API key. The pipeline works with any of them through the instructor library's OpenAI-compatible interface.

What You Actually Get

One JSON update, everywhere reflected. Update your experience, add a project, change your title — it reflects in the PDF instantly. No updating five different Word documents.

ATS-compatible PDFs every time. Single-column LaTeX output. No tables for layout. No images. No columns. The kind of resume that parses correctly in Workday, Greenhouse, Lever, and whatever ATS the company is using this week.

Tailored resumes in minutes, not hours. Paste a JD, pick your LLM provider, hit tailor. You get back a version of your resume that emphasizes the right keywords, reorders bullets by relevance, and rewrites your summary to match the role. Takes 30-60 seconds depending on the model.

Visual diff so you see exactly what changed. After tailoring, the UI shows a side-by-side diff of your original and tailored JSON. You can review every change before saving. Nothing gets applied without your approval.

Evaluation metrics on every run. Each tailoring run produces scores for job alignment, content preservation, and hallucination detection. You can see whether the tailoring actually improved things or just shuffled words around.

How to Use It

The fastest path:

  1. Fork the repo — github.com/jangwanAnkit/resume-builder
  2. Edit the JSON — Replace the sample data with your own in data/
  3. Push to main — GitHub Actions generates the PDF automatically
  4. Download from Releases — Your resume PDF is attached to the "latest" release

For AI tailoring:

  1. Run the local server: python customizer/server.py
  2. Open localhost:8000
  3. Paste a job description into the tailoring panel
  4. Select your LLM provider and enter your API key
  5. Hit "Tailor" and watch the pipeline run stage by stage
  6. Review the diff, save if you're happy

What's Next

This is the first piece of a larger system I'm building to automate the job search pipeline.

The next piece is a job tracker — a separate tool that will let you track every application from a web interface (and possibly a Telegram bot). Log applications, track status changes, set reminders for follow-ups, and eventually tie it back to tailored resumes so each application has its own version attached.

The end goal: automate the hell out of the job search workflow. The parts that can be automated should be. The parts that require human judgment — interview prep, actual conversations with people — should get more of your time.

Key Takeaways

  1. Your career data is structured data. JSON beats Word documents for version control, diffing, and programmatic access. Treat it like code.

  2. ATS-friendly means boring. Single column, no graphics, standard fonts. The resume that looks impressive to a human often fails the parser before anyone reads it.

  3. Multi-stage pipelines beat mega-prompts. Give each LLM call one job. Validate the output. Measure the results. The single-prompt approach works until it doesn't, and when it fails it fails silently.

  4. Never let the LLM touch your dates. Company names, employment dates, locations — immutable. The pipeline auto-restores them if the model mutates them, which it will.

  5. Tailoring is keyword alignment, not fabrication. The tool rephrases and reorders what you already have. If a JD requires something you don't have, the gap analysis tells you. It doesn't pretend you have it.

  6. Automate the friction, not the judgment. Resume tailoring is friction. Interview prep is judgment. Spend your time where it matters.

Conclusion

If you're manually editing your resume for every application, you're spending hours on something a script can do in seconds.

Fork the repo. Put your data in JSON. Push. Download the PDF.

This resume builder shares its data layer with my portfolio website — both systems generate output from the same JSON files, so updating one updates everything.

Repository: github.com/jangwanAnkit/resume-builder

Sample PDF: View the generated resume

Bonus: If you have an existing resume as a PDF or Word doc, you can use the resume-builder-tailor AI skill with your preferred LLM to extract your data into the JSON format. Upload your existing resume, paste the skill instructions, and it will generate the JSON files for you. No manual data entry.

All my projects — including architecture diagrams, tradeoff analysis, and failure mode documentation — are at ankitjang.one/projects.

About me: I'm Ankit Jangwan, a Senior Software Engineer building backend systems with Django, PostgreSQL, Celery, and Go. See my case studies at ankitjang.one/case-studies.