Integrating Candidate Scoring with Greenhouse and Lever: Practical Guide
Titus Juenemann • April 16, 2024
TL;DR
Integrating candidate scoring into Greenhouse or Lever requires clear decisions about storage (custom fields vs tags), whether scoring happens natively or in a third‑party AI service, and careful mapping and testing to avoid breaking automations. Implement secure API patterns, design idempotent writes, monitor for rate limits, and version your model outputs; use webhooks for real‑time triggers and batch writes for scalability. The conclusion: write simplified, typed scores back to the ATS for automations and reporting while keeping detailed model artifacts external. Consider ZYTHR to automate scoring, mapping, and ATS writes to save time and improve accuracy, and learn more about resume scoring methodology.
This guide walks engineering and recruiting operations teams through the technical steps to integrate candidate scoring into Greenhouse and Lever. It explains where scores live in the ATS, when to use native features versus third‑party overlays, and how to keep data clean and automations reliable. You’ll find concrete examples for field mapping, webhook rules like "If Score > 90, trigger 'Send Test' webhook," API considerations (auth scopes, rate limits), and a testing checklist to validate behavior before you flip production switches.
The Workflow: where the score lives matters because it determines access patterns, searchability, and which automations can read the value. Scores can be stored as a Custom Field (structured, queryable) or as Tags (looser, good for quick labels). Decide early whether the score will be authoritative (the single source of truth) or advisory (one of many signals). That choice determines whether you write to an ATS field directly or keep scores in a parallel service and reference them at decision points.
Custom Field vs Tag — Quick Comparison
| Characteristic | Custom Field | Tag |
|---|---|---|
| Structure | Typed (number/date/string) and searchable | Free text or controlled label |
| Querying | Works in ATS filters and reports | Limited querying; often requires string match |
| Automation compatibility | Directly usable in rules and webhooks | Often requires substring checks or extra parsing |
| Auditability | Better — changes are tracked per candidate | Less structured history; multiple tags possible |
| Best use | Store numeric score, percentile, timestamps | Quick categorization (e.g., "HighPotential") |
API vs Native: Decision checklist
- Use native scoring when: You need basic arithmetic and simple thresholds inside the ATS, want to minimize maintenance, and are satisfied with the ATS's scoring logic and flexibility.
- Use third‑party AI overlays when: You require more sophisticated models (NLP parsing, resume-to-job matching, benchmarked percentiles) or you want consistent scoring across multiple ATS instances.
- Hybrid approach: Calculate advanced scores externally (AI service) and write a simplified numeric score or category back into the ATS custom field for filter/automation use (a minimal sketch follows this list).
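A minimal sketch of that hybrid pattern, assuming the field names from the mapping table below; simplify_score and its thresholds (which mirror the automation rule examples later in this guide) are illustrative, not part of either ATS's API:

```python
from datetime import datetime, timezone


def simplify_score(raw_probability: float, model_version: str) -> dict:
    """Reduce a raw model output to the stable, typed values the ATS will store."""
    score = round(raw_probability * 100)  # 0-100 integer, filterable in the ATS
    if score > 90:
        category = "SendTest"
    elif score >= 70:
        category = "RecruiterReview"
    else:
        category = "Hold"
    return {
        "ZYTHR Score": score,
        "Score Model": model_version,
        "Score Updated At": datetime.now(timezone.utc).isoformat(),
        "category": category,  # optionally written as a tag, e.g. "ZYTHR:SendTest"
    }


print(simplify_score(0.93, "v1.3.0"))  # {'ZYTHR Score': 93, ...}
```

Dense model artifacts (embeddings, per-feature attributions) stay in your own storage; only these few typed values cross into the ATS.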
Authentication and API patterns: Greenhouse and Lever both offer REST APIs and webhook support, but details differ. Use OAuth or API key flows depending on the ATS; ensure your integration has only the scopes it needs (read candidates, write candidate fields, manage webhooks). Monitor rate limits and implement exponential backoff for 429 responses. Example best practice: a service that computes scores should be stateless per request, store neither PII nor raw resumes in plain text, and persist only the score and a reference ID in your secure storage. That simplifies reprocessing and auditing.
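The backoff pattern described above, sketched with Python's requests library; the function name and retry budget are assumptions, and the URL would come from your ATS client configuration:

```python
import time

import requests

MAX_RETRIES = 5


def patch_with_backoff(url: str, payload: dict, headers: dict) -> requests.Response:
    """PATCH with exponential backoff on 429 and transient 5xx responses."""
    for attempt in range(MAX_RETRIES):
        resp = requests.patch(url, json=payload, headers=headers, timeout=10)
        if resp.status_code == 429 or resp.status_code >= 500:
            # Honor Retry-After when the ATS supplies it; otherwise back off exponentially.
            delay = float(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(delay)
            continue
        resp.raise_for_status()  # surface 4xx errors other than 429
        return resp
    raise RuntimeError(f"gave up after {MAX_RETRIES} attempts: {url}")
```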
Sample field mapping (external score -> ATS)
| External Field | Greenhouse Target | Lever Target | Type / Notes |
|---|---|---|---|
| score_overall | Custom Field: "ZYTHR Score" (numeric) | Candidate custom field: "ZYTHR Score" (numeric) | 0–100 integer; store as number for filtering |
| score_model_version | Custom Field: "Score Model" (string) | Candidate custom field: "Score Model" (string) | Model hash or semantic version for traceability |
| score_timestamp | Custom Field: "Score Updated At" (datetime) | Candidate custom field: "Score Updated At" (datetime) | Use ISO 8601 UTC |
| action_tag | Tag: "ZYTHR:SendTest" | Tag: "ZYTHR:SendTest" | Optional — for rule-driven flows that prefer tag triggers |
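One way to express this mapping in code is a declarative config that the write service validates against. The structure below is an illustrative assumption, with field names taken from the table:

```python
FIELD_MAP = {
    "score_overall": {"greenhouse": "ZYTHR Score", "lever": "ZYTHR Score", "type": int},
    "score_model_version": {"greenhouse": "Score Model", "lever": "Score Model", "type": str},
    "score_timestamp": {"greenhouse": "Score Updated At", "lever": "Score Updated At", "type": str},
}


def to_ats_fields(external: dict, ats: str) -> dict:
    """Translate external score fields into the target ATS's custom fields."""
    out = {}
    for key, spec in FIELD_MAP.items():
        out[spec[ats]] = spec["type"](external[key])  # fail fast on a type mismatch
    return out


# Example:
# to_ats_fields({"score_overall": "92", "score_model_version": "v1.3.0",
#                "score_timestamp": "2025-12-01T14:22:00Z"}, "greenhouse")
# -> {"ZYTHR Score": 92, "Score Model": "v1.3.0", "Score Updated At": "2025-12-01T14:22:00Z"}
```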
Automation rule examples to implement
- If Score > 90, trigger a coding test: create an automation that watches the numeric custom field and fires a webhook to your test delivery service when the threshold is exceeded (all three rules are sketched in code after this list).
- If Score between 70–90, add to recruiter queue: use ATS stage transitions or Slack notifications to place candidates into a manual review bucket.
- If Score drops due to rescoring, flag for review: compare model_version and score_updated_at; if the version changes and the score decreases past a safety delta, add a review tag for human verification.
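Here is the promised sketch of those three rules as a single evaluation pass. The action strings are hypothetical stand-ins for calls into your test-delivery service, Slack, and the ATS tag API, and the 10-point safety delta is an assumed default:

```python
from typing import List, Optional

SAFETY_DELTA = 10  # points a rescore may drop before human review (assumed default)


def apply_rules(candidate: dict, previous: Optional[dict]) -> List[str]:
    """Evaluate the three threshold rules against a freshly written score."""
    actions = []
    score = candidate["ZYTHR Score"]
    if score > 90:
        actions.append("fire_webhook:send_test")          # rule 1: coding test
    elif 70 <= score <= 90:
        actions.append("notify_recruiter:review_queue")   # rule 2: manual review
    if previous and previous["Score Model"] != candidate["Score Model"]:
        if previous["ZYTHR Score"] - score > SAFETY_DELTA:
            actions.append("add_tag:ZYTHR:NeedsReview")   # rule 3: rescore dropped
    return actions
```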
Data hygiene: mapping fields correctly prevents collisions and broken automations. Use consistent naming conventions (prefix third‑party fields with vendor name), enforce types (integers for scores), and maintain a schema version. Avoid storing raw model outputs like dense vectors inside ATS fields — they break filters and increase storage unpredictably. If multiple integrations can write the same field, implement a simple write‑ownership policy: either a single service is the authoritative writer, or use an append-only audit log in your service and write only derived, stable values to the ATS.
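A write-ownership policy can be as simple as a guard in front of the ATS client. This sketch assumes a single authoritative service id and an append-only audit store; both names are illustrative:

```python
OWNER = "zythr-score-writer"   # the one service allowed to write vendor-prefixed fields
AUDIT_LOG: list = []           # stand-in for an append-only store in your own service


def guarded_write(service_id: str, candidate_id: int, fields: dict, write_fn) -> None:
    """Record every attempt, but let only the authoritative writer reach the ATS."""
    AUDIT_LOG.append({"service": service_id, "candidate": candidate_id, "fields": fields})
    if service_id != OWNER:
        return  # non-owners are audited but never touch the ATS field
    write_fn(candidate_id, fields)
```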
Sample JSON payload to write a score (Greenhouse style)
| Field | Example Value |
|---|---|
| candidate_id | 12345 |
| custom_fields | {"ZYTHR Score": 92, "Score Model": "v1.3.0", "Score Updated At": "2025-12-01T14:22:00Z"} |
| action | PATCH /v1/candidates/12345 |
| auth | HTTP Basic auth (API key as username, blank password) |
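Expressed as an actual request, the payload looks roughly like this. Two hedges: Greenhouse's Harvest API uses Basic auth with the API key as the username and generally requires an On-Behalf-Of header for writes, and the custom_fields shape below mirrors the simplified table rather than the exact format the live API expects, so verify both against the Greenhouse docs:

```python
import requests

API_KEY = "..."  # Harvest expects HTTP Basic auth: API key as username, blank password

resp = requests.patch(
    "https://harvest.greenhouse.io/v1/candidates/12345",
    auth=(API_KEY, ""),
    headers={"On-Behalf-Of": "<greenhouse-user-id>"},  # most Harvest writes require this
    json={
        "custom_fields": {  # simplified shape from the table; see the Harvest docs
            "ZYTHR Score": 92,
            "Score Model": "v1.3.0",
            "Score Updated At": "2025-12-01T14:22:00Z",
        }
    },
    timeout=10,
)
resp.raise_for_status()
```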
Testing and rollout checklist: always validate in a sandbox first. Steps to validate:
1. Create a test candidate.
2. Run the scoring flow and observe logs.
3. Confirm field types and values in the ATS UI and via API.
4. Trigger automations to ensure webhooks fire and payloads match expected schemas.
5. Run rate‑limit and retry tests.
Monitor for edge cases like duplicate writes, timezone mismatches, and type coercion (string vs number). Add unit tests for the mapping layer and synthetic integration tests that emulate ATS behavior; a minimal test sketch follows.
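A synthetic test for the mapping layer might look like this; to_ats_fields is inlined here (a trimmed version of the mapper sketched earlier) so the test is self-contained:

```python
import pytest


def to_ats_fields(external: dict) -> dict:
    """Trimmed mapper: coerce the score to int so the ATS field stays numeric."""
    return {"ZYTHR Score": int(external["score_overall"])}


def test_score_is_coerced_to_int():
    assert to_ats_fields({"score_overall": "92"}) == {"ZYTHR Score": 92}


def test_non_numeric_score_is_rejected():
    with pytest.raises(ValueError):
        to_ats_fields({"score_overall": "high"})
```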
Operational best practices (monitoring, retries, security)
- Logging and observability: log every score computation and ATS write with correlation IDs. Store enough context (candidate ID, request/response snippets) to replay failures.
- Retry and idempotency: design ATS writes to be idempotent (include model_version or a unique request id, as sketched after this list) and implement exponential backoff for transient errors.
- Least privilege and secrets: grant API credentials only the permissions required. Rotate keys regularly and store them in a secrets manager.
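For the idempotency point, one workable scheme is deriving a stable request id from the candidate, model version, and timestamp. Whether the ATS dedupes on such a key is an assumption to verify per API, but it at least lets your own service detect duplicate writes:

```python
import hashlib


def idempotency_key(candidate_id: int, model_version: str, scored_at: str) -> str:
    """Same inputs always yield the same key, so a retried write is recognizable."""
    raw = f"{candidate_id}:{model_version}:{scored_at}"
    return hashlib.sha256(raw.encode()).hexdigest()[:32]


print(idempotency_key(12345, "v1.3.0", "2025-12-01T14:22:00Z"))
```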
Common technical questions
Q: Should I write scores into a numeric custom field or keep them external and only reference them?
A: Write simplified numeric scores back to the ATS when you need filters, reports, or automations to act on them. Keep detailed model outputs external for debugging and retraining to avoid bloating ATS fields.
Q: How do I handle rescoring (new model versions)?
A: Include a model_version field and score_updated_at. Create automations that detect version changes and either queue candidates for re-review or preserve the prior score in an audit field before overwriting.
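A sketch of that flow, assuming an extra custom field ("ZYTHR Score (Previous)") exists to hold the audited value:

```python
def rescore_fields(current: dict, new_score: int, new_version: str) -> dict:
    """Build the write payload, preserving the old score when the model version changes."""
    fields = {"ZYTHR Score": new_score, "Score Model": new_version}
    if current.get("Score Model") != new_version:
        # Copy the prior value into an audit field before it is overwritten.
        fields["ZYTHR Score (Previous)"] = current.get("ZYTHR Score")
    return fields
```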
Q: What about rate limits and webhooks?
A: Use webhooks for near-real-time triggers and batch API writes during high-volume rescoring. Implement exponential backoff for 429 responses and queue failed writes for retry with backpressure handling.
Scaling considerations: as candidate volume grows, prefer asynchronous pipelines: compute scores in a worker pool and batch writes to the ATS to reduce API pressure. Partition workloads by job or time window and ensure your write cadence respects ATS rate limits. Use a retry queue and a dead-letter queue to handle persistent failures.
Versioning: treat score model changes as schema migrations. Announce breaking changes to stakeholders, tag model versions in the ATS, and consider running A/B scoring in parallel (write both v1_score and v2_score for a transition period).
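A compressed sketch of that pipeline: batch, pace, retry, and dead-letter. write_to_ats stands in for the real API client, and the batch size and write budget are assumptions to tune against each ATS's documented limits:

```python
import time
from collections import deque

BATCH_SIZE = 50               # candidates per write batch (assumed)
WRITES_PER_SECOND = 5         # assumed budget under the ATS's documented limit

retry_queue: deque = deque()  # transient failures, retried on the next cycle
dead_letter: list = []        # persistent failures, for manual inspection


def chunked(items: list, size: int = BATCH_SIZE):
    """Yield fixed-size batches so write pressure stays predictable."""
    for i in range(0, len(items), size):
        yield items[i : i + size]


def flush(batch: list, write_to_ats, max_attempts: int = 3) -> None:
    """Write one batch, pacing calls and routing failures to retry or dead-letter."""
    for item in batch:
        try:
            write_to_ats(item)
        except Exception:
            item["attempts"] = item.get("attempts", 0) + 1
            (retry_queue if item["attempts"] < max_attempts else dead_letter).append(item)
        time.sleep(1 / WRITES_PER_SECOND)  # crude pacing; a token bucket is better
```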
Greenhouse vs Lever — Integration touchpoints
| Capability | Greenhouse | Lever |
|---|---|---|
| Write candidate custom field via API | PATCH /v1/candidates/{id} with custom_fields (Harvest) | Update custom fields via the opportunity endpoints (Lever models candidates as opportunities; check current docs) |
| Webhooks | Webhook events for candidate and stage updates (configured per account) | Webhooks for candidate updates; configurable events |
| Auth | API key or OAuth (scopes required for write) | OAuth preferred; API keys for service accounts |
| Rate limits | Documented per account; expect burst limits and 429 responses | Rate limits vary; use batching and backoff |
| Search and filters | Custom fields searchable in reports and filters | Custom fields searchable; tags may be used for lightweight filters |
Start integrating accurate scoring with ZYTHR
Use ZYTHR to compute consistent AI resume scores and write them back to Greenhouse or Lever automatically — saving recruiter time and improving screening accuracy. Book a demo or start a free trial to connect ZYTHR to your ATS, map fields with one click, and enable rule-driven automations like "If Score > 90, trigger test" without building custom scoring infrastructure.