You Hit Apply. Now What?
Your resume disappears into a portal and you wait. What's actually happening on the other side of that button is a multi-step automated process that runs in seconds — and that process determines whether a recruiter ever sees your name.
Applicant Tracking Systems are used by 98% of Fortune 500 companies and the majority of mid-sized employers. The most common platforms — Workday, Greenhouse, Lever, iCIMS, Taleo — each have slightly different parsing engines, but they all follow roughly the same pipeline. Understanding that pipeline is the difference between a resume that clears the filter and one that doesn't.
Step 1 — File Parsing: The Format Problem
Before any content is read, the ATS has to extract text from your file. This is where a surprising number of resumes fail silently — not because of what they say, but because of how they're built.
What ATS Can and Can't Read
A plain .docx file is the most universally parseable format. The text is structured, the reading order is unambiguous, and virtually every ATS handles it correctly. Clean, text-based PDFs work well too — when the file was created digitally (exported from Word or Google Docs), the text layer is intact and readable.
The problems start with design-heavy formats. Multi-column layouts look clean to a human eye, but most ATS parsers read documents linearly — left to right, top to bottom — treating the entire page as a single text stream. A two-column resume where your job title is on the left and your dates are on the right often gets read as a scrambled string of text with no coherent structure. Tables and text boxes are frequently skipped entirely. Graphics, icons, and skill bars register as nothing — just whitespace in the extracted text.
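To make the reading-order problem concrete, here is a minimal Python sketch of what a linear extractor does to a two-column page. This is an illustration, not any real ATS parser, and the resume content is invented:

```python
# Visual layout: left column holds titles/employers, right column holds
# dates/locations. A human reads down each column separately.
page_rows = [
    ("Senior Analyst", "Jan 2021 - Present"),
    ("Acme Corp",      "New York, NY"),
    ("Data Analyst",   "Jun 2018 - Dec 2020"),
]

# A linear parser reads each row left to right, interleaving the two
# columns into a single text stream with no column boundaries.
extracted = " ".join(left + " " + right for left, right in page_rows)

print(extracted)
# "Senior Analyst Jan 2021 - Present Acme Corp New York, NY ..."
```

The extracted stream mixes titles, dates, employers, and locations together, which is exactly the scrambled structure described above.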
The Scanned PDF Problem
A scanned PDF — a physical document photographed or photocopied and saved as a PDF — is essentially an image file. There's no text layer to extract. Some ATS systems run OCR (optical character recognition) on these, but the results are inconsistent and error-prone. If your resume was designed in Canva, exported as an image-heavy PDF, or passed through a scanner at any point, the parsed version may be unrecognizable. When in doubt: export as .docx, or generate a clean PDF from a text-based source.
Step 2 — Data Extraction: Sections and Fields
Once text is extracted, the ATS tries to sort it into structured fields: name, contact information, work experience, education, skills. This is where resume formatting choices have outsized consequences.
How ATS Systems Identify Sections
Section detection relies on header recognition. The parser looks for known keywords — "Experience," "Work History," "Education," "Skills," "Certifications" — and uses them as anchors to categorize everything that follows. Standard headers work reliably. Creative alternatives often don't.
"Where I've Been" instead of "Experience." "My Toolkit" instead of "Skills." "Academic Background" where most parsers expect "Education." These feel distinctive and human on the page. To a parser, they frequently cause the content beneath them to be miscategorized or dropped into an unstructured overflow field that recruiters rarely see.
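A toy version of header-anchored section detection shows why creative headers fail. The header list and function here are illustrative stand-ins, not taken from any real system:

```python
# A small subset of headers a parser might recognize as section anchors.
KNOWN_HEADERS = {"experience", "work history", "education", "skills",
                 "certifications"}

def split_sections(lines):
    # Content before (or under) an unrecognized header stays attached
    # to whatever section came before it.
    sections = {"_overflow": []}
    current = "_overflow"
    for line in lines:
        if line.strip().lower() in KNOWN_HEADERS:
            current = line.strip().lower()
            sections.setdefault(current, [])
        else:
            sections[current].append(line)
    return sections

resume = ["Experience", "Analyst at Acme", "My Toolkit", "Python, SQL"]
parsed = split_sections(resume)
# "My Toolkit" is not a known header, so "Python, SQL" never lands in a
# "skills" section -- it stays buried under "experience".
```

The skills never become searchable as skills, even though they are right there in the document.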
What Gets Lost
Text in document headers and footers — a common place to put contact information or page numbers — is often ignored entirely. Information inside tables gets extracted in reading order, which may bear no resemblance to the visual layout. Bullet points using special characters or custom symbols sometimes parse as garbled text instead of clean list items. None of this is visible in your PDF preview. It only surfaces when a recruiter looks at what the ATS actually captured — which is sometimes a garbled shell of what you submitted.
Step 3 — Keyword Matching and Scoring
After extraction, the ATS compares your parsed content to the job description and generates a relevance score. This is the step most people have heard of — but the mechanics matter more than the general concept.
Exact Match vs. Semantic Match
Older systems — Taleo, legacy iCIMS configurations — rely heavily on exact keyword matching. "Project management" and "managing projects" are different strings; only one might score. This is why mirroring the job description's exact language has always been the standard advice, and why it still holds.
More modern platforms, particularly Greenhouse and newer Workday configurations, use natural language processing that understands semantic similarity. "Led cross-functional initiatives" and "managed interdepartmental projects" would score similarly. But exact matches still outperform near-matches in every system — NLP reduces the penalty for synonyms, it doesn't eliminate the advantage of precision.
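The contrast between exact and semantic matching can be sketched in a few lines. The synonym table below is a hypothetical stand-in for what an NLP model actually does; real systems are far more sophisticated:

```python
# Hypothetical synonym map standing in for real semantic similarity.
SYNONYMS = {"managed": "led", "interdepartmental": "cross-functional",
            "projects": "initiatives"}

def exact_match(jd_phrase, resume_text):
    # Older systems: the literal string must appear.
    return jd_phrase.lower() in resume_text.lower()

def loose_match(jd_phrase, resume_text):
    # Newer systems: normalize synonyms before comparing.
    words = resume_text.lower().split()
    normalized = " ".join(SYNONYMS.get(w, w) for w in words)
    return jd_phrase.lower() in normalized

resume = "Managed interdepartmental projects"
print(exact_match("led cross-functional initiatives", resume))  # False
print(loose_match("led cross-functional initiatives", resume))  # True
```

An exact-match system scores this zero; a semantic system gives credit. Mirroring the job description's wording satisfies both.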
How Scores Are Calculated
Most systems weight keywords by their prominence in the job description. A skill mentioned once in a list of preferred qualifications matters less than one repeated in the responsibilities section, the required qualifications, and the job title itself. Placement on your resume also matters: keywords in your summary and job titles carry more weight than identical terms buried in a bullet point halfway down the page.
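A rough sketch of prominence weighting follows. The multipliers and the scoring formula are made up for illustration; no vendor publishes its real formula:

```python
def keyword_weights(jd_text, keywords):
    # A keyword's weight rises with how often the job description repeats it.
    text = jd_text.lower()
    return {kw: text.count(kw.lower()) for kw in keywords}

# Hypothetical multipliers: terms in the summary or title count double.
SECTION_MULTIPLIER = {"summary": 2.0, "title": 2.0, "bullets": 1.0}

def resume_score(resume_sections, weights):
    score = 0.0
    for section, text in resume_sections.items():
        mult = SECTION_MULTIPLIER.get(section, 1.0)
        for kw, weight in weights.items():
            if kw.lower() in text.lower():
                score += weight * mult
    return score

jd = ("Seeking project management lead. Project management experience "
      "required. SQL a plus.")
weights = keyword_weights(jd, ["project management", "SQL"])
# "project management" appears twice, "SQL" once.

sections = {"summary": "Project management professional",
            "bullets": "Wrote SQL queries"}
print(resume_score(sections, weights))  # 5.0: (2 * 2.0) + (1 * 1.0)
```

The same two keywords score differently depending on where they sit, which is why a skills-list-only resume underperforms one that works key terms into the summary.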
Some systems go beyond keywords. They flag employment gaps above a certain threshold, calculate tenure averages, cross-reference job title progression, and even check whether your listed employers appear in their database of known companies. These secondary signals influence how your profile surfaces to recruiters, separate from the keyword score.
Step 4 — Ranking and the Threshold Problem
Scored resumes get ranked. Recruiters typically set a minimum score threshold — often without fully understanding how that number was generated — and only open files above it. In a competitive posting that receives 400 applications, a recruiter might only review the top 40. If your score is 61 and the cutoff is 65, the content of your resume is irrelevant. It wasn't seen.
This is why incremental optimization matters more than most people expect. The difference between a resume that clears the filter and one that doesn't is often a handful of missing keywords, a section header that didn't parse, or a skill buried in a format the system couldn't read.
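The ranking step itself is trivial to sketch, which is part of the point: the cutoff is a blunt instrument. The applicant names, scores, and threshold below are invented:

```python
# Scored applicants; the recruiter reviews only those above a threshold.
applicants = {"A": 72, "B": 65, "C": 61, "D": 88}
THRESHOLD = 65

reviewed = sorted(
    (name for name, score in applicants.items() if score >= THRESHOLD),
    key=lambda name: -applicants[name],
)
print(reviewed)  # ['D', 'A', 'B'] -- applicant C, at 61, is never opened.
```

Applicant C's resume content is never evaluated by a human; four missing points decided the outcome.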
The Most Common Ways Resumes Fail ATS Parsing
Formatting Failures
- Two-column layouts — parsed in reading order, producing scrambled text
- Text boxes and tables — frequently skipped or extracted out of sequence
- Contact info in the document header — often not captured
- Graphics, icons, and skill bar charts — invisible to parsers
- Unusual fonts or special characters — can produce garbled extraction
- Creative section names — break category detection
- Scanned or image-based PDFs — may have no readable text layer at all
Keyword Failures
- Using synonyms instead of the job description's exact language — loses exact-match weight
- Listing acronyms without spelling them out — "PMP" without "Project Management Professional" misses one search pattern
- Burying skills in low-weight sections — skills mentioned only in a footer or afterthought section score less
- Sending the same resume to every job — keyword priorities differ significantly between postings
- Missing required qualifications entirely — some ATS use knockout filters: if the required keyword isn't present, the application is auto-rejected before scoring
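The knockout logic in that last point can be sketched in a few lines. The required keywords here are invented examples:

```python
# Hypothetical required keywords for a nursing role. If any is missing,
# the application is rejected before scoring even runs.
REQUIRED = ["registered nurse", "bls certification"]

def passes_knockout(resume_text):
    text = resume_text.lower()
    return all(kw in text for kw in REQUIRED)

print(passes_knockout("Registered Nurse with 5 years experience"))
# False: "bls certification" never appears, so the resume is auto-rejected
# regardless of how strong the rest of it is.
```

This is a binary gate, not a weighting: a resume that matches every other keyword perfectly still fails if one required term is absent.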
Is Your Resume ATS-Ready? A Checklist
- Is your file a .docx or clean, text-based PDF — not a scanned image or Canva export?
- Is your layout single-column with no tables or text boxes?
- Are your section headers standard — Experience, Education, Skills, not creative alternatives?
- Is your contact information in the body of the document, not the header or footer?
- Have you identified the most-repeated keywords in the job description and used them verbatim?
- Have you written out both the full term and acronym for certifications and technical tools?
- Are your most important keywords placed in your summary and job title descriptions — not just a skills list?
- Have you tailored this version of your resume specifically to this posting?
How Rejectly Reads Your Resume the Way an ATS Does
The problem with manually checking all of this is that you can't see your own parsing errors — they're invisible in the PDF you're looking at. Rejectly analyzes your resume the same way an ATS does: extracting text, identifying what was captured and what wasn't, comparing your content to the target job description, and showing you exactly where your match score breaks down.
You see the gaps before the ATS does. Which keywords are missing, which sections may have parsed incorrectly, and what specific changes would push your score past the threshold that gets your resume in front of a recruiter.
Check how your resume parses →
Conclusion
An ATS isn't making judgments about you. It's running a structured extraction and matching process that has known failure modes — and most of those failure modes are avoidable once you know what they are. Clean formatting, standard section headers, exact keyword matching, and a tailored version for each application. That's the checklist. The candidates who clear the filter consistently aren't doing anything extraordinary. They're just not making the preventable mistakes that knock most resumes out before a human ever opens them.
Get Your Resume ATS-Ready
Upload your resume and get instant AI-powered analysis. See your ATS score, find missing keywords, and get actionable suggestions to land more interviews.
- ATS Score Check
- Keyword Analysis
- Instant Results