We’ve organized every stage and persona in the AI supply chain, informed by real recruiting at frontier companies. Click any row to see matching profiles from our talent graph.

Summary
Known as: Trust & Safety Lead, Content Policy Manager, AI Safety Operations, Abuse & Misuse Analyst
Post-deployment monitoring and response for AI systems in production. Content moderation, abuse detection, incident triage, usage policy enforcement, and real-time intervention when models produce harmful outputs at scale. Defines acceptable use policies, coordinates rapid response across engineering, legal, and policy when novel abuse patterns emerge, and feeds production incidents back into evals and mitigations.
Where the Work Lives
Content moderation systems, abuse detection, and incident response for deployed AI.
Protects users at scale through policy enforcement and real-time intervention.
Candidate Archetypes
Writes enforceable rules and adjudicates gray zones under real incident pressure.
Builds detection pipelines, investigation workflows, and feedback loops that feed into mitigations and training filters.
Runs the human review machine and the escalation/on-call layer for safety incidents.
Company Scale
Any company shipping user-facing LLM products needs basic content policy and abuse monitoring from launch. Dedicated T&S orgs emerge at growth stage and beyond; the largest teams sit at frontier labs and big tech.
Featured Roles
If you’re hiring at the AI frontier, let’s talk.