AI in the Workplace Series – NYC’s Automated Employment Decision Tool Law & How It’s Addressing AI Bias


AI has transformed a number of industries, and now it's beginning to change the workplace in significant ways. The recent WGA and SAG-AFTRA strikes are two examples of AI emerging as a potential threat, as both groups continue to fight for higher wages and assurances that AI tools won't replace writers and actors in the near future. 

With this increased focus on how AI will play out in the workplace, it’s important for employers and employees alike to stay informed. For the next several weeks, I’ll be following these stories with an AI in the Workplace Series – starting with New York City’s newest restrictions on AI hiring tools.

New York City’s new Automated Employment Decision Tool (AEDT) Law went into effect last month, requiring employers in the area to adhere to a new set of AI regulations during the hiring process. 

The new legislation went into effect on July 5th, 2023 and carries a set of fines for companies that fail to comply. At the same time, similar AI bias laws have cropped up in select states across the country. 

Now, annual bias audits are mandatory for any AI hiring tools used by businesses located in New York City. It’s the first regulation of its kind in the U.S., and a potential sign of more to come if AI continues to pose a risk for unfair bias against certain candidates. 

Read on to learn more about NYC’s newest regulation on AI hiring tools, similar efforts across the country, and how employers can start complying and preparing for a possible wave of AI bias legislation.


NYC’s Automated Employment Decision Tool Law

As of July 5th, employers in New York City that use AI as a part of their hiring processes are required to perform annual audits of their automated employment decision tools (AEDTs). 

A third party must carry out the audits, checking whether intentional or unintentional bias has been built into hiring systems. The requirement applies to any company operating within New York City, and failure to comply can bring fines of $500 for a first violation and up to $1,500 for each subsequent violation. 

Although the law itself is limited to NYC, attempts to regulate AI bias aren't. Across the country, numerous state legislatures have looked closely at whether certain AI tools need checks and balances to ensure potential bias isn't rendering hiring practices unfair. 


AI Bias Laws Across the U.S.

New York City isn't the only jurisdiction vetting the fairness of AI. The New Jersey Assembly, for example, has considered restrictions on certain AI hiring tools unless employers can provide a bias audit. 

Illinois and Maryland have taken similar steps, each restricting the use of facial recognition and video analysis tools during job interviews without candidate consent. 

These types of restrictions are on the rise in the Golden State, too. Here in California, the Fair Employment and Housing Council is considering mandates on certain AI tools, including AI-powered tests that screen applicants based on gender, race, and other protected characteristics. 

The fact of the matter is that most companies use AI at some point in the hiring process – 90%, as of early 2023.

It’s an invaluable asset when it comes to filtering through waves of applications, since a human can’t skim through thousands of resumes at the same rate. 

Still, AI can spiral into a biased model and skew your list of qualified applicants as a result – as happened with Amazon's automated recruitment system, which developed a bias against female candidates.

There are a few steps employers can take to get ahead of the AI bias curve, remain compliant, and avoid potential fines. 


How Employers Can Prevent AI Bias

Considering most companies use AI in some way to process and filter applicants, these laws could make their way to your area sooner rather than later. For now, though, they're most relevant to businesses operating out of New York City:

Seek Out an Expert: With new laws come new best practices, and it's crucial to ensure you're compliant with these unfamiliar standards. A third party with deep knowledge of AI policy can help employers adapt to these regulations more quickly. 

Make Audit Results Public: The recent NYC regulation states that employers must make their bias audit results publicly available on their corporate websites. They also must inform candidates and current employees in NYC of their use of AI during hiring either through email, traditional mail, job postings, or their website. 

Review AI Hiring Practices: The line between proper use and misuse of AI in the hiring process is thin, and your current practices may be breaching current laws if you're in NYC. Using AI during the initial hiring stages to widen a candidate pool is allowed, while using it at the end of the process to make hiring decisions runs afoul of the new NYC regulations. Knowing this, it's crucial that companies understand when AI is allowable and whether it could be generating bias that harms their selections.

Along with hiring processes, AI is impacting certain industries in historic ways – especially entertainment. 

Get my insights on how Hollywood writers and actors are responding to the threat of AI in this recent Fox News appearance, or in this blog post outlining the driving forces behind the SAG-AFTRA (Screen Actors Guild – American Federation of Television and Radio Artists) and WGA (Writers Guild of America) strikes.


Experienced Employment & Title IX Mediator & ADR Professional

Twice named a U.S. News Best Lawyer in America for employment and labor law, Angela Reddock-Wright is an employment, labor law & Title IX mediator and alternative dispute resolution professional. Known as the "Workplace Guru," Angela is an influencer and leading authority on employment, workplace/HR, Title IX, hazing, and bullying issues. She has been named a "Top 50 Woman Attorney" in California by Super Lawyers, a "Top California Employment Lawyer" by the Daily Journal, and one of Los Angeles' "Most Influential Minority and Women Attorneys" by the Los Angeles Business Journal.

Angela is a regular legal and media commentator and analyst and has appeared on such media outlets as Good Morning America, Entertainment Tonight, Law and Crime with Brian Ross, Court TV, CNN, NewsNation, ABC News, CBS News, Fox 11 News, KTLA-5, the Black News Channel, Fox Soul – The Black Report, NPR, KPCC, Airtalk-89.3, KJLH Front Page with Dominique DiPrima, the New York Times, the Washington Post, the LA Times, Yahoo! Entertainment, People Magazine, Essence Magazine, the Los Angeles Sentinel, LA Focus, Daily Journal, Our Weekly and the Wave Newspapers.

Angela is a member of the panel of distinguished mediators and arbitrators with Signature Resolution, a California dispute resolution company. She also owns her own dispute resolution law firm, the Reddock Law Group of Los Angeles, specializing in the mediation and resolution of employment discrimination, harassment, retaliation, and other workplace claims; Title IX, sexual harassment, assault, and misconduct cases; and hazing and bullying cases in K-12 schools, colleges and universities, fraternities and sororities, and fire, police, and other public safety agencies and departments.

Be sure to follow me on Facebook and Instagram @iamangelareddockwright and on LinkedIn, and tune in to my weekly radio show, KBLA Talk 1580's Legal Lens with Angela Reddock-Wright, each Saturday and Sunday at 11 am PST.




Also, learn more about my book, The Workplace Transformed: 7 Crucial Lessons from the Global Pandemic.

For media inquiries, please reach out to

For more information regarding mediation and dispute resolution resources for both employees and employers, connect with Angela on LinkedIn for new updates or contact her here. You may also follow her on Instagram.

This communication is not legal advice. It is educational only. For legal advice, consult with an experienced employment law attorney in your state or city.
