Uber

Security Engineer (AI & Agentic Systems)

UberLocation Not Available

Stellenbeschreibung:

About the Role

As AI systems—especially agentic and autonomous AI—become deeply embedded in our products and internal platforms, the security model must evolve. Traditional application security alone is no longer sufficient. We are looking for an AI Red Team Engineer to help us proactively identify, understand, and mitigate AI-native and agent-specific security risks before they reach production.

In this role, you will build and execute adversarial red‑teaming exercises against AI models and AI agents, focusing on how they can be manipulated into unsafe, unintended, or harmful behavior. You will work closely with AI platform teams, product engineers, and security partners to stress‑test agent logic, tool usage, memory, and autonomy—and translate findings into concrete guardrails and defenses.

This role is ideal for someone who enjoys thinking like an attacker, understands modern AI systems, and wants to work at the intersection of security, AI, and real‑world impact.

What the Candidate Will Do

This role sits at the intersection of offensive security and AI engineering. You will not be limited to traditional penetration testing; instead, you will focus on behavioral, logical, and contextual attacks that cause AI systems to fail in subtle but dangerous ways—often without exploiting classic vulnerabilities. Success in this role means uncovering “unknown unknowns,” clearly articulating risk, and helping teams build safer AI systems by design.

Design and execute AI red‑teaming exercises against LLMs and AI agents

  1. prompt injection (direct & indirect)
  2. jailbreaking and policy bypass
  3. model and tool poisoning
  4. memory and context poisoning
  5. behavioral drift and unsafe autonomy
  6. tool misuse and emergent privilege escalation

Analyze agent workflows, logic, and tool graphs

to identify systemic security weaknesses beyond prompt‑level attacks.

Develop reusable adversarial test cases, attack libraries, and red‑team playbooks

for AI systems.

Collaborate with AI platform and product teams

to translate red‑team findings into actionable mitigations, guardrails, and design changes.

Basic Qualifications

  1. 3+ years of experience in security engineering, offensive security, red teaming, or AI security.
  2. Hands‑on experience red‑team­ing AI models or AI agents, including testing for prompt injection, jailbreaks, unsafe behavior, excessive agency, model DoS.
  3. Strong understanding of security fundamentals (threat modeling, secure design, least privilege, defense in depth).
  4. Ability to clearly document findings and communicate risk to both technical and non‑technical stakeholders.
  5. Proficiency in at least one programming language (e.g., Python, Go, Java, or similar).

Preferred Qualifications

  1. Familiarity with AI security tools and frameworks (e.g., PyRIT, AgentDojo, Promptfoo, custom harnesses).
  2. Good understanding of GenAI and LLM architectures, including embeddings, RAG, or agent frameworks.
  3. Hands‑on experience executing AI Red Teaming exercises, including prompt injection/jailbreaking, unsafe behavior/behavioral drift, model/tool poisoning.
  4. Offensive security / penetration testing background (e.g., red team, bug bounty, exploit development).

For New York, NY-based roles: The base salary range for this role is USD$171,000 per year - USD$190,000 per year.

For San Francisco, CA-based roles: The base salary range for this role is USD$171,000 per year - USD$190,000 per year.

For Seattle, WA-based roles: The base salary range for this role is USD$171,000 per year - USD$190,000 per year.

For Sunnyvale, CA-based roles: The base salary range for this role is USD$171,000 per year - USD$190,000 per year.

For all US locations, you will be eligible to participate in Uber's bonus program, and may be offered an equity award & other types of comp. All full‑time employees are eligible to participate in a 401(k) plan. You will also be eligible for various benefits.

Equal Opportunity Employer

Uber is proud to be an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please let us know by completing this form.

#J-18808-Ljbffr
NOTE / HINWEIS:
EnglishEN: Please refer to Fuchsjobs for the source of your application
DeutschDE: Bitte erwähne Fuchsjobs, als Quelle Deiner Bewerbung

Stelleninformationen

  • Veröffentlichungsdatum:

    24 Apr 2026
  • Standort:

    Einsatzort:

    Berlin, Germany
  • Typ:

    Vollzeit
  • Arbeitsmodell:

    Vor Ort
  • Kategorie:

  • Erfahrung:

    2+ years
  • Arbeitsverhältnis:

    Angestellt

KI Suchagent

AI job search

Möchtest über ähnliche Jobs informiert werden? Dann beauftrage jetzt den Fuchsjobs KI Suchagenten!