IBM will no longer develop or research facial recognition tech

IBM CEO Arvind Krishna said on Monday, June 8th that the company will no longer develop or offer general purpose facial recognition or analysis software. In a letter addressed to Congress, written in support of the Justice in Policing Act of 2020, Krishna advocated for new reforms to promote the responsible use of technology and to combat systemic racial injustice and police misconduct.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” wrote Krishna in the letter.

Krishna, who took over the chief executive role earlier this year in April, added that it’s time for Congress to begin a national dialogue on the implications of facial recognition technology and how it “should be employed by domestic law enforcement agencies.”

The CEO also voiced his concerns about the racial bias often found in artificial intelligence systems today. Krishna called for more oversight to audit artificial intelligence tools, especially when they’re used in law enforcement, and for national policies that “bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.”

People familiar with the matter told CNBC that the death of George Floyd, and the resulting spotlight on police reform and racial inequity, convinced IBM to shut down its facial recognition products.

Over the last few years, facial recognition systems have advanced dramatically thanks to developments in fields such as machine learning. However, without any official oversight in place, they have largely been allowed to operate unregulated, often at the expense of user privacy. Most notably, facial recognition tech was brought to the forefront of the national conversation by a startup called Clearview AI, which built a database of more than 3 billion images mostly by scraping social media sites. Clearview has since faced backlash from companies such as Twitter and is currently dealing with a myriad of privacy lawsuits.

Clearview AI is also reportedly being employed by law enforcement agencies during the ongoing Black Lives Matter protests across the United States. Experts have argued that these systems can misidentify people because they’re largely trained on datasets dominated by white male faces.

Krishna didn’t say whether the company would reconsider its decision if and when Congress introduces new laws to bring more scrutiny to technology such as facial recognition. We’ve reached out to IBM for comment and will update the story when we hear back.
