Dec 18, 2022

NYC AI Bias Law - What You Need to Know to Avoid Fines

The Bryq Team

HR Experts

Bryq is composed of a diverse team of HR experts, including I-O psychologists, data scientists, and seasoned HR professionals, all united by a shared passion for soft skills.

Artificial intelligence (AI) has taken the world of work by storm. Whether companies are looking to improve diversity or speed up their hiring process, it’s clear that organizations are looking toward the future of work through the use of AI. Research shows that over 99% of Fortune 500 companies and nearly 1 in 4 of all US employers currently use AI tools. With this in mind, New York City (NYC) has stepped in to make sure that the AI tools being used are free from bias.

As more AI tools become available in the marketplace, we are also seeing more AI misused in the hiring process. While some companies are misusing AI outright, others are unknowingly relying on AI tools that introduce bias and discrimination into the very HR processes they were designed to automate. Noticing this, cities like NYC have decided to take action. The city has already implemented a new AI Bias Law to ensure this does not happen – and it’s likely going to affect your company in a significant way.

For more information on the NYC law and how your company can ensure its compliance, continue reading the blog below. 

What is the NYC AI Bias Law?

The New York City AI Bias Law refers to Local Law 144, which the NYC Council voted to enact. Under the law, companies must have the AI tools they use in hiring and promotion audited for bias by an independent third party no more than one year before the tool is used, and they must notify candidates and employees in advance that these tools will be part of their hiring and promotional processes. The law applies to any business using AI technology in employment matters within New York City, regardless of its location: if its AI systems impact employment for NYC residents, compliance is required, even for companies based elsewhere.

The NYC AI Bias Law was passed by the NYC Council in November 2021. The enforcement of the law, originally scheduled for January 1, 2023, was delayed to April 15, 2023, due to a large number of comments received during the first public hearing on the Department of Consumer and Worker Protection's (DCWP's) proposed rules. After a second version of the proposed rules was introduced and a second public hearing was held, the DCWP set a final enforcement date of July 5, 2023. 

Companies found misusing AI tools or using unaudited tools in their talent management processes can be fined through proceedings brought by the NYC Office of the Corporation Counsel. They can also be reported to the city, and affected candidates can still pursue employment discrimination claims under laws such as Title VII. Companies caught using unaudited tools face a fine of up to $500 for the first violation and up to $1,500 for each subsequent violation. Each day a tool is used without a valid audit counts as a separate violation, and so does each candidate or employee who is not notified in advance. Added up, non-compliance can quickly cost a company many thousands of dollars in fines.
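To get a rough sense of how quickly these separate violations accumulate, here is a minimal Python sketch that estimates worst-case exposure. It simply applies the penalty structure described above to hypothetical numbers; it is an illustration, not legal advice, and the inputs are assumptions rather than figures from the law itself.

```python
# Illustrative sketch only - not legal advice. It applies the penalty structure
# described above: up to $500 for the first violation and up to $1,500 for each
# subsequent one, with each day of unaudited use and each un-notified candidate
# counted as a separate violation. All inputs below are hypothetical.

FIRST_VIOLATION_FINE = 500         # assumed upper bound for the first violation
SUBSEQUENT_VIOLATION_FINE = 1_500  # assumed upper bound for each later violation


def estimate_exposure(days_of_unaudited_use: int, candidates_not_notified: int) -> int:
    """Rough worst-case fine estimate for a period of non-compliance."""
    violations = days_of_unaudited_use + candidates_not_notified
    if violations == 0:
        return 0
    return FIRST_VIOLATION_FINE + (violations - 1) * SUBSEQUENT_VIOLATION_FINE


if __name__ == "__main__":
    # Example: 30 days of unaudited use plus 100 candidates screened without
    # advance notice -> 130 violations -> $500 + 129 * $1,500 = $194,000.
    print(f"${estimate_exposure(30, 100):,}")
```

Even under modest assumptions, the worst-case figure climbs fast, which is why it pays to sort out audits and notices before a tool goes live.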

AI Bias Laws = The War on AI?

While this law might seem like an attack on artificial intelligence as a whole, the truth is that cities like NYC are implementing it to make sure that only fair, well-vetted AI tools are used to help employers find the best possible talent.

We know that AI isn’t perfect. There have been plenty of accounts of artificial intelligence tools being used improperly or discriminating against minorities. Why does this happen? Because the output of AI tools is only as good as the data we provide when building them. Think of ourselves as the teachers and our AI tools as the students: if we feed them biased or inaccurate data, that is the behavior they will learn, and their output will reflect it.

New York City doesn’t want to hinder AI from changing our lives for the better. On the contrary, NYC wants to monitor these AI tools so that artificial intelligence can continue to be put to good use. Without audits, things like bias or discrimination creep in and begin hindering people from reaching greater heights.

The Future of AI in the Workplace

Whatever your thoughts on AI may be, one thing is certain: artificial intelligence is here to stay. These tools have transformed the way we hire and manage talent, and there’s no going back. Society will simply have to adjust to the ‘new normal’ of AI and smart tools. The more these tools are incorporated into our daily workflows, the more we will need to audit them for compliance.

New York City is just the first of what is likely to be many cities to introduce a law like this, and other jurisdictions around the world are already following its lead. The state of California is considering similar anti-bias laws and AI audit mandates. Likewise, the EU has proposed the Artificial Intelligence Act, which would subject AI tools to highly specific legal requirements.

While this might look like an attempt to ban AI outright, audits and regulations exist to keep us safe. NYC and other governments around the world are putting these laws in place to protect society while holding companies accountable for making their AI tools more effective and bias-free.

To comply with the new law, you need to make sure that the AI tools you use have been evaluated by an independent bias auditor within the past year. For instance, Bryq was audited by Holistic AI, a London-based AI risk management company, and has been verified to comply with the requirements of the law.

Would you like to learn more about how Bryq can assist you in making data-driven, unbiased talent decisions? Schedule a free demo with our team!
