June 16, 2025

Imagine applying for 100 jobs over seven years, only to be rejected within hours, every time. That’s what Derek Mobley alleges happened to him. Mobley, a Black applicant who is over 40 and has disabilities, claims the rejections were driven by Workday’s AI hiring tools. On May 16, 2025, a federal judge greenlit a landmark collective action against Workday, allowing applicants over 40 who have been rejected since 2020 to join the suit under the Age Discrimination in Employment Act (ADEA). This case is a wake-up call: your vendor’s AI could expose you to multimillion-dollar lawsuits, regulatory fines, or brand damage.

Why Your Vendor’s AI Matters

This is one of the first major legal tests of federal anti-discrimination laws applied to automated decision-making systems. The implications are broad:

  • Algorithmic Bias: AI models may replicate or even amplify biases present in training data.
  • Lack of Transparency: Many AI systems are “black boxes,” making it difficult to audit outcomes or detect discrimination.
  • Legal Liability: Employers can be held accountable for the effects of third-party AI tools—even if they didn’t develop the algorithm.

This isn’t just an HR issue. In healthcare, biased AI diagnostic tools have been shown to misdiagnose minority patients at higher rates. The EU’s AI Act (whose obligations for high-risk systems take effect in 2026) and New York City’s Local Law 144 (enforced since 2023) already require bias audits and risk assessments, signaling global scrutiny.

The Case for AI Governance & Third-Party Risk Management (TPRM)

The Workday lawsuit underscores the urgent need for organizations to implement robust AI governance frameworks that ensure responsible, explainable, and compliant use of artificial intelligence. Yet traditional third-party risk management (TPRM) focuses on data security and compliance, and often misses AI-specific risks. A biased vendor AI model can trigger the same fallout as a data breach. Integrating AI governance into TPRM is critical to protecting your reputation and bottom line.

Best Practices for AI Governance in TPRM

As organizations increasingly adopt AI-powered tools, from chatbots and fraud detection engines to automated underwriting and cybersecurity analytics, many are doing so through third-party vendors. But while outsourcing accelerates innovation, it also introduces new dimensions of risk, particularly when AI is involved. Traditional TPRM programs were built to evaluate financial viability, data security, and regulatory compliance. Today, those same programs must evolve to assess vendors’ AI-related risks, including bias, data misuse, explainability, and model governance.

Below are steps organizations can take to integrate AI considerations into their TPRM processes and stay ahead of emerging risks:

1. Update Your Vendor Classification Criteria

Not all AI tools carry the same level of risk. TPRM teams should identify AI vendors and classify them by impact:

    • Does the vendor use AI to make or inform decisions that affect your customers or operations?
    • Is the AI model trained on proprietary or client data?
    • Does the tool access PII, PHI, or other sensitive data?
    • Is the AI system autonomous or human-in-the-loop?

Vendors with high-risk AI use cases should be subject to enhanced due diligence, just like critical IT or financial service providers.
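
To make the classification concrete, the four screening questions above can be turned into a simple scoring rubric. The Python sketch below is a minimal, hypothetical illustration; the weights, cutoffs, and tier names are assumptions, not an industry standard, and should be calibrated to your own risk appetite.

    from dataclasses import dataclass

    @dataclass
    class AIVendorProfile:
        """Answers to the four AI screening questions for one vendor."""
        informs_decisions: bool       # AI makes or informs decisions affecting customers/operations
        trained_on_client_data: bool  # model trained on proprietary or client data
        touches_sensitive_data: bool  # tool accesses PII, PHI, or other sensitive data
        fully_autonomous: bool        # no human-in-the-loop review of outputs

    def classify_ai_risk(v: AIVendorProfile) -> str:
        """Map screening answers to a risk tier (illustrative weights and cutoffs)."""
        score = (3 * v.informs_decisions
                 + 2 * v.trained_on_client_data
                 + 2 * v.touches_sensitive_data
                 + 3 * v.fully_autonomous)
        if score >= 6:
            return "high"      # enhanced due diligence, like critical IT providers
        if score >= 3:
            return "moderate"  # standard review plus AI questionnaire
        return "low"           # baseline TPRM review

    # Example: an autonomous screening tool that touches PII scores as high risk
    print(classify_ai_risk(AIVendorProfile(True, False, True, True)))  # -> high

A rubric like this keeps tiering consistent across assessors; the value is in the repeatability, not in the particular weights.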

2. Add AI-Specific Questions to Your Due Diligence Questionnaires

Go beyond SOC 2 and ISO 27001. For vendors leveraging AI, ask:

    • Are you compliant with ISO/IEC 42001 (AI Management Systems)?
    • How do you monitor for bias, fairness, or explainability?
    • What training data is used, and how is it sourced?
    • Can you describe the AI’s decision logic or provide documentation (e.g., model cards)?
    • Is AI output auditable and subject to human review?

These questions help determine whether the vendor has mature AI governance—or is exposing you to reputational and compliance risk.
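
One lightweight way to operationalize the questionnaire is to store it as structured data so unanswered items surface automatically during review. A minimal sketch, assuming a simple key-to-response mapping (the keys and the helper function are hypothetical):

    # Hypothetical AI due-diligence checklist; wording mirrors the questions above.
    AI_DDQ = {
        "iso_42001": "Are you compliant with ISO/IEC 42001 (AI Management Systems)?",
        "bias_monitoring": "How do you monitor for bias, fairness, or explainability?",
        "training_data": "What training data is used, and how is it sourced?",
        "decision_logic": "Can you describe the AI's decision logic (e.g., model cards)?",
        "human_review": "Is AI output auditable and subject to human review?",
    }

    def unanswered(responses: dict) -> list:
        """Return the questions a vendor has not yet answered."""
        return [q for key, q in AI_DDQ.items()
                if not responses.get(key, "").strip()]

    vendor_responses = {"iso_42001": "Certified in 2024", "human_review": "Yes, quarterly"}
    for question in unanswered(vendor_responses):
        print("MISSING:", question)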

3. Establish Ongoing AI Monitoring for High-Risk Vendors

TPRM doesn’t stop after onboarding. For vendors using AI in critical processes:

    • Require annual attestations of compliance with AI standards.
    • Monitor for drift, new model versions, or updates that may introduce new risks.
    • Watch for headlines involving AI failures, lawsuits, or ethical violations—just as you would data breaches or regulatory actions.

Encourage vendors to notify you proactively if they change their AI models, retrain them with new data, or expand their functionality.
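
What does "monitor for drift" look like in practice? One widely used statistic in model risk monitoring is the Population Stability Index (PSI), which compares the distribution of a model's current outputs against a baseline period. The sketch below is a generic illustration, not tied to any vendor's API; the bucket count and the common 0.10/0.25 rule-of-thumb thresholds are conventions, not regulatory requirements.

    import numpy as np

    def psi(baseline: np.ndarray, current: np.ndarray, buckets: int = 10) -> float:
        """Population Stability Index between two score distributions.
        Rule of thumb: < 0.10 stable, 0.10-0.25 moderate shift, > 0.25 major shift."""
        # Bucket edges come from the baseline distribution's quantiles
        edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))
        edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
        base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        curr_pct = np.histogram(current, bins=edges)[0] / len(current)
        # Clipping avoids log(0) when a bucket is empty in one period
        base_pct = np.clip(base_pct, 1e-6, None)
        curr_pct = np.clip(curr_pct, 1e-6, None)
        return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

    # Example: compare last quarter's vendor model scores with this quarter's
    rng = np.random.default_rng(0)
    last_q = rng.normal(0.50, 0.10, 5_000)  # baseline scores
    this_q = rng.normal(0.60, 0.10, 5_000)  # scores after a model update
    if psi(last_q, this_q) > 0.25:
        print("ALERT: major score drift; escalate to the vendor for explanation")

If a vendor will not share raw model scores, the same check can run on whatever aggregate outputs you do receive, such as monthly pass-through or selection rates.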

4. Align Contract Language with AI Expectations

Contractual protections should now include AI-specific provisions, such as:

    • No use of client data for training external models (e.g., public LLMs)
    • Clear obligations around bias mitigation, auditability, and explainability
    • Right-to-audit or obtain model documentation if material risks emerge
    • Termination clauses in the event of irresponsible AI behavior

TPRM and legal teams must work together to modernize standard contract templates for the AI era.

5. Collaborate Internally on AI Risk Ownership

AI risk doesn’t sit in one department. TPRM leaders should collaborate with:

    • IT/Security: to evaluate infrastructure and data handling
    • Legal/Compliance: to understand regulatory exposure (e.g., EEOC, GDPR, FTC AI transparency rules)
    • Business Owners: to confirm alignment with use case goals
    • Internal Audit: to assess control design and monitoring

Bringing AI under the same governance umbrella as other operational and compliance risks allows for centralized risk visibility and smarter decision-making.

AI Governance Starts with the Vendors You Trust

Your organization is only as secure—and as ethical—as the third parties you rely on. As artificial intelligence reshapes industries, AI governance must be a core pillar of your TPRM strategy. Now is the time to update your frameworks, train your staff, and hold your vendors to a higher standard. Because responsible innovation doesn’t just protect your data—it protects your reputation.

GRF’s Risk Advisory Services practice helps organizations proactively manage emerging risks through tailored solutions in internal audit, cybersecurity, enterprise risk management, and third-party risk management (TPRM). As AI technologies continue to evolve, our team is uniquely positioned to help clients integrate AI governance into their existing TPRM frameworks—ensuring compliance, mitigating risk, and safeguarding reputation. To learn more or to begin designing an AI-compliant TPRM policy for your organization, contact us online, or reach out to Melissa Musser at the contact info below.

Melissa Musser, CPA, CIA, CITP, CISA

Partner and Director, Risk & Advisory Services