AI & Employment Law Update: Harper v. Sirius XM Radio, LLC


Technology is re-engineering every aspect of the workplace, including how we find and hire talent. Businesses of all sizes are increasingly turning to Artificial Intelligence (AI) to streamline hiring, from sifting through thousands of resumes to conducting screenings. Ostensibly, these tools are a win-win, delivering efficiency for the employer and a faster process for the applicant. But what happens when these high-tech tools carry the same biases that have plagued traditional hiring for decades? Embracing innovation is a key trait of successful businesses, but it is important to do so responsibly. For those following the news, we are seeing what happens in real time with a recent headline-grabbing AI discrimination lawsuit against Sirius XM Radio, LLC, that is helping to shape the law in this fast-evolving area and could very well result in an artificial intelligence law update.

As an employment law expert, lawyer, and mediator, I offer expert commentary and analysis on how AI continues to shape and transform the modern workplace. Harper v. Sirius XM Radio is a perfect case study for navigating this new, complex environment, offering vital lessons for both employers and employees whose careers are being shaped by these algorithms. Here’s what to know.

Harper v. Sirius XM Radio, LLC: Artificial Intelligence Law Update

Harper v. Sirius XM Radio, LLC, is a perfect example of the importance of strategically using and implementing this transformative technology. Let’s begin with an overview of the lawsuit in question, which could very well result in a significant artificial intelligence law update.

Related Article: Implications of the Trump AI Executive Order: What Employers and Employees Need to Know

What is the Sirius XM AI Discrimination Lawsuit?

Filed on August 4, 2025, in the U.S. District Court, Eastern District of Michigan, this lawsuit brings a powerful new challenge for employers who use AI-powered hiring tools. 

The lawsuit, brought by an unsuccessful job applicant named Arshon Harper (who is representing himself), alleges that Sirius XM Radio’s AI-powered hiring tool, the iCIMS Applicant Tracking System, discriminated against him based on his race. 

The core of the complaint isn’t that the AI system was intentionally programmed to be biased. Instead, the lawsuit claims that the system used historical hiring data that perpetuated past discrimination. In short, the AI learned to be biased by studying biased human behavior.

According to Harper, the AI system analyzed his application materials and assigned scores based on data points that can serve as proxies for race, such as educational institutions, home zip codes, and even employment history, factors that have historically and disproportionately impacted African-American applicants.

As a result, his applications for approximately 150 IT positions were allegedly downgraded and eliminated before a human reviewer could even consider them, despite his qualifications. Harper’s complaint brings forth two key legal arguments:

  • Disparate Treatment: He claims intentional discrimination in the AI tool’s design.
  • Disparate Impact: He claims that the outcomes had an illegal, discriminatory impact, even if the bias was not intentionally built into the tool in the first place. 

Harper has brought race discrimination claims under Title VII of the Civil Rights Act and Section 1981, and he also seeks to expand his lawsuit into a class action on behalf of all similarly situated applicants. The fact that he is seeking class certification should itself be a warning sign for employers.

Harper is not only seeking compensatory and punitive damages but also injunctive relief to force Sirius XM to stop or significantly modify its use of the AI tool. Plaintiffs and their attorneys are increasingly seeing a legal avenue to hold employers accountable for what their AI tools are doing.

As of right now, we are still waiting to see how this lawsuit will play out and how its ruling could influence the use of AI in the workplace. We will be watching this case carefully. 

Related Article: What to Know About AI Ethics in the Workplace in 2025

What Can Employers Learn from the Harper v. Sirius XM Radio, LLC case? 

The Sirius XM Radio AI discrimination lawsuit should serve as a critical wake-up call. Employers should regularly and proactively audit their use of AI in the workplace and consider some of the following takeaways from this case:

  1. Create an AI Governance Program: This is the foundational step. You need a dedicated program to develop clear systems and guardrails for AI use. This includes defining who is responsible for AI oversight, setting up review processes, and ensuring accountability.
  2. Verify and Audit Vendors: You cannot simply trust a vendor’s claims. Require your AI tool vendors to document their bias testing and ensure your contracts include strong assurances for non-discrimination. Being able to prove that you have done your due diligence is absolutely essential should any problems arise. 
  3. Be Transparent With Candidates: Communicate clearly with applicants about when and how AI tools are used in the hiring process. Transparency builds trust and bolsters your reputation, and it can also keep you ahead of the compliance curve, as disclosure may already be required, or soon will be, under emerging state and local laws and AI anti-discrimination policies.
  4. Provide and Explicitly Publicize Accommodation Options: Just as you would with any other step in the hiring process, provide a pathway for applicants to request an alternative assessment or a human review if they feel disadvantaged by an AI tool. Publicize this option widely, document any pathway you offer, and ensure it is explicitly communicated in job postings and application portals.
  5. Align AI-Driven Questions With Job Role Requirements: Ensure any questions or metrics used by your AI tool relate directly to the essential functions of the role. The AI should not be making judgments based on irrelevant data points.
  6. ALWAYS Maintain Human Oversight: This is, without a doubt, the most crucial point to emphasize. Never let an AI tool be the sole decision-maker. A qualified and well-trained human resources professional or hiring manager should always review and have the ability to override AI recommendations before any final action is taken. Also, revisit hiring outcomes continuously to ensure fairness and compliance. This provides a crucial check and balance.
  7. Record Your Process and Rationale: Maintain detailed records of how hiring decisions are made, including a justification for why an AI tool was used and how its recommendations were considered. This documentation will be invaluable if you ever face litigation.
  8. Run Regular Accessibility Audits: Just like any other technology, your AI systems must comply with disability accommodation requirements. Periodically test your systems to ensure they are accessible to all applicants.
  9. Stay Vigilant for Disparate Impact and Adjust: Regularly analyze your data to identify any disparities across protected classes. If you see that your AI tool is disproportionately affecting a certain group, you must take immediate action to investigate and correct the issue.
  10. Stay Up-to-Date on Legal Developments: The legal landscape around AI is evolving rapidly. Stay informed about new legislation, court rulings, and agency guidance from bodies like the EEOC. A proactive approach is the best way to stay ahead of the curve. 
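The disparate-impact analysis described in takeaway 9 can start with something as simple as comparing selection rates across groups. One widely used benchmark is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact. The sketch below is purely illustrative; the group names and numbers are hypothetical, and a real audit should be conducted with counsel and statistical expertise.

```python
# Illustrative sketch only: a basic adverse-impact check using the
# EEOC "four-fifths rule." Group labels and figures are hypothetical.

def four_fifths_check(groups):
    """groups: dict mapping group name -> (selected, applicants).

    Flags any group whose selection rate is below 80% of the
    highest group's rate, a common indicator of adverse impact.
    """
    rates = {g: selected / applicants
             for g, (selected, applicants) in groups.items()}
    benchmark = max(rates.values())  # highest selection rate
    return {g: {"rate": round(r, 3),
                "impact_ratio": round(r / benchmark, 3),
                "flagged": r / benchmark < 0.8}
            for g, r in rates.items()}

# Hypothetical outcomes from an AI resume-screening tool
results = four_fifths_check({
    "Group A": (48, 100),  # 48% pass the automated screen
    "Group B": (30, 100),  # 30% pass -> impact ratio 0.625, flagged
})
```

A flagged group does not prove discrimination on its own, but under the Uniform Guidelines it is the kind of disparity that should trigger investigation and, if confirmed, correction of the tool.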

What Insights Can Employees Gain from the Sirius XM Radio AI Discrimination Lawsuit?

As AI becomes the digital gatekeeper for career opportunities, employees must become fluent in their rights and aware of how these systems function. If you are an employee, here are insights you can take away to navigate this new AI-driven landscape, especially if Harper v. Sirius XM Radio, LLC, results in an artificial intelligence law update:

  1. Transparency and Accommodation: You have a right to know when an AI tool is being used to evaluate your job application or manage your performance. If you suspect an AI tool is disadvantaging you, use any pathway provided to request an alternative assessment or a human review.
  2. Be Aware of Proxy Data: Understand that an AI’s score is based on data points. Be aware that the information you provide, such as educational institutions, past employers, or even home addresses, could inadvertently be used as proxies for protected characteristics that introduce bias.
  3. Align AI-Driven Questions With Job Role Requirements: Pay attention to whether the questions or metrics used by the AI tool actually relate directly to the essential functions of the job. If the AI seems to be making judgments based on irrelevant data points, this is a red flag.
  4. Ensure Accessibility: If you require a disability accommodation, your employer’s AI systems must comply. If the application or evaluation process seems inaccessible, immediately communicate this issue to the employer.
  5. Document and Report: If you believe you were qualified for a position but were eliminated by an automated system without sufficient human review, document the entire process. If you suspect discrimination based on race, gender, age, or disability, file a complaint with Human Resources or the relevant regulatory body (like the EEOC).

Related Article: Best Practices for Using AI in the Workplace in 2025: What Employers Should Know

Book Me for a Segment to Bring Law to Light as an Expert on the Most Pressing Employment Law Issues for Your Audience, Including Cases That Could Result in Artificial Intelligence Law Updates

The Harper v. Sirius XM Radio, LLC AI discrimination lawsuit is a powerful example of what can go wrong when companies adopt new technologies without a proper legal strategy for implementation. Make no mistake – AI can be a powerful asset, and it is not going anywhere, but it is not a silver bullet for hiring. As we wait to see whether this case will result in a landmark artificial intelligence law update, it is clear that AI cannot replace the empathy, judgment, and oversight of human beings. By taking proactive steps to audit their systems, update policies, and maintain human oversight, employers can leverage the power of technology while mitigating their legal risk. On the other side of the coin, employees should ensure they are aware of their rights.

To learn more about my work as a mediator and neutral, including my focus on employment, Title IX, sex abuse, class action, and mass torts mediated cases, please reach out to me on LinkedIn @Angela J. Reddock-Wright, Esq., AWI-CH, or click here. 

You may also reach me at Signature Resolution.

For media inquiries, please reach out to josh@kwsmdigital.com.

This communication is not legal advice. It is educational only. For legal advice, consult with an experienced employment law attorney in your state or city.
