Security Engineer (AI & Agentic Systems)
Company: Uber
Location: New York, NY
Salary: $171,000 - $190,000 a year
Type: Full-time
Posted: 2026-03-19
About this role
About the Role
As AI systems-especially agentic and autonomous AI-become deeply embedded in our products and internal platforms, the security model must evolve. Traditional application security alone is no longer sufficient. We are looking for an AI Red Team Engineer to help us proactively identify, understand, and mitigate AI-native and agent-specific security risks before they reach production.
In this role, you will build and execute adversarial red-teaming exercises against AI models and AI agents, focusing on how they can be manipulated into unsafe, unintended, or harmful behavior. You will work closely with AI platform teams, product engineers, and security partners to stress-test agent logic, tool usage, memory, and autonomy-and translate findings into concrete guardrails and defenses.
This role is ideal for someone who enjoys thinking like an attacker, understands modern AI systems, and wants to work at the intersection of security, AI, and real-world impact.
- What the Candidate Will Do -
This role sits at the intersection of offensive security and AI engineering. You will not be limited to traditional penetration testing; instead, you will focus on behavioral, logical, and contextual attacks that cause AI systems to fail in subtle but dangerous ways-often without exploiting classic vulnerabilities. Success in this role means uncovering unknown unknowns," clearly articulating risk, and helping teams build safer AI systems by design.
Design and execute AI red-teaming exercises against LLMs and AI agents, including:
- prompt injection (direct & indirect)
- jailbreaking and policy bypass
- model and tool poisoning
- memory and context poisoning
- behavioral drift and unsafe autonomy
- tool misuse and emergent privilege escalation
Analyze agent workflows, logic, and tool graphs to identify systemic security weaknesses beyond prompt-level attacks.
**Develop reusable adversarial test cases, attack libraries,...