Sr. Responsible AI Researcher, AI.x

  • San Francisco, CA

Overview

On Site
USD 210,000.00 - 290,000.00 per year
Full Time

Skills

Creative Problem Solving
Product Engineering
Innovation
Regulatory Compliance
Decision-making
Collaboration
Evaluation
Computer Science
Science
Generative Artificial Intelligence (AI)
Analytical Skill
Conflict Resolution
Problem Solving
Communication
Machine Learning (ML)
Research and Development
Publishing
Research
Testing
Risk Assessment
Artificial Intelligence
Data Science
SQL
Python
Pandas
Data Visualization
Writing
Finance

Job Details

Your Opportunity

At Schwab, you're empowered to make an impact on your career and the financial industry. Here, innovative thinking meets creative problem solving as we challenge the status quo together. We believe in the power of collaboration and value being together in the office, which is why this role is based on-site in our San Francisco office. Joining Schwab means joining a company committed to transforming the financial industry and putting clients at the center of everything we do.

Schwab's AI Strategy & Transformation team, known as AI.x, is the central hub for Artificial Intelligence at Schwab. We are an integrated product, engineering, strategy and risk team, all based in San Francisco. We help set the enterprise vision for AI, invest in the most promising opportunities, and accelerate delivery across the company. We also build the research platform that powers AI at scale and explore next-generation GenAI efforts that will redefine how we serve our clients.

As a Sr. Responsible AI Researcher, you will bridge regulation, ethical principles, and technical innovation, shaping how AI is safely and ethically deployed across our products and services. You'll collaborate with research, engineering, product, legal, compliance, and risk teams to design technical guardrails, evaluation frameworks, and monitoring systems that set industry benchmarks for fairness, transparency, and trust.

In this highly visible, cross-functional role, you will tackle complex and emerging challenges in responsible AI and ensure Schwab's AI systems meet those benchmarks for safety, fairness, and transparency. The role offers significant room to innovate: you will design and implement solutions for problems where established best practices may not yet exist, and help shape the technical and organizational standards that guide Schwab's AI systems. Your work will directly inform executive decision-making, regulatory engagement, and the development of trusted AI products at scale.

What You'll Do
  • Design and implement innovative methods for bias detection and develop technical guardrails aligned with ethical AI principles (a minimal sketch of one such check follows this list).
  • Collaborate with cross-functional teams to ensure Schwab's AI systems meet regulatory and ethical standards.
  • Build and maintain systems for ongoing monitoring and evaluation of AI models, integrating human-in-the-loop and automated metrics.
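
To make the first bullet concrete, below is a minimal sketch of one common automated fairness check, a demographic parity gap computed with pandas (a library named in the skills list above). The DataFrame, column names, and alert threshold are all hypothetical illustrations, not Schwab's actual tooling.

    import pandas as pd

    def demographic_parity_gap(df, group_col="group", outcome_col="approved"):
        """Max difference in positive-outcome rates across groups.

        Values near 0 suggest similar outcome rates across groups; larger
        gaps flag the model for closer human review. All names here are
        illustrative, not production identifiers.
        """
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Hypothetical model decisions labeled by demographic group.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })

    gap = demographic_parity_gap(decisions)
    ALERT_THRESHOLD = 0.2  # illustrative tolerance, set per risk policy
    if gap > ALERT_THRESHOLD:
        print(f"Fairness alert: demographic parity gap = {gap:.2f}")

In production, a check like this would run continuously over live model traffic and feed both automated alerting and human-in-the-loop review, per the monitoring bullet above.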

If you're passionate about responsible AI and eager to innovate in a dynamic environment, this role is for you.

What You Have

We're looking for someone who thrives in ambiguity, is passionate about AI ethics, and enjoys solving complex challenges where no playbook exists.

Required Qualifications

  • Master's degree in Computer Science, Engineering, Data Science, Social/Applied Sciences, or a related field, or equivalent experience.
  • 10+ years of relevant experience in AI ethics, AI research, Security, Trust & Safety, or similar roles (academic doctoral experience counts).
  • Expertise in fairness, alignment, adversarial robustness, or interpretability/explainability.
  • Experience with responsible generative AI challenges and risk mitigations.
  • Strong analytical and problem-solving skills, with the ability to communicate clearly to technical and non-technical audiences.
  • Curiosity and passion for AI policy and governance.

Preferred Qualifications

  • 7+ years of experience in AI/ML research and development using Python.
  • Familiarity with regulatory frameworks (AI-specific or financial sector) and responsible AI standards.
  • Track record of publishing research in AI safety, alignment, or governance (e.g., FAccT, NeurIPS).
  • Experience working with LLMs and deploying LLM-powered applications to production.
  • Experience with adversarial testing, red-teaming, and risk assessment for AI deployments.
  • Strong data science fundamentals, including SQL, Python data frames (e.g., pandas), and data visualization.
  • Experience writing unit tests and building robust data pipelines.
  • Demonstrated business domain knowledge related to financial products.

We welcome applicants from diverse backgrounds and encourage you to apply even if you don't meet every requirement.

In addition to the salary range, this role is also eligible for bonus or incentive opportunities.