Massachusetts lawmakers this week voted to ban the use of facial recognition by law enforcement and public agencies in a sweeping police reform bill that received significant bipartisan support. If signed into law, Massachusetts would become the first state to fully ban the technology, going beyond existing restrictions on facial recognition in police body cameras and more limited city-specific bans.

The bill, S.2963, marks yet another state government tackling the thorny ethical issue of unregulated facial recognition use in the absence of any federal guidance from Congress. It also includes bans on chokeholds and rubber bullets, in addition to restrictions on tear gas and other crowd-control weapons, as reported by TechCrunch. It isn’t a blanket ban on facial recognition: police will still be able to run searches against the state’s driver’s license database, but only with a warrant, and law enforcement agencies will be required to publish annual transparency reports on those searches.

Massachusetts joins cities like Portland, Maine, and Portland, Oregon, as well as San Francisco and Oakland in Northern California, that have banned police use of facial recognition. Earlier this year, Boston became the first major East Coast city to bar police from purchasing and using facial recognition services, but the Massachusetts bill goes a step further in making the ban statewide. S.2963 passed 28-12 in the state Senate and 92-67 in the Massachusetts House of Representatives on Tuesday, and it now heads to Massachusetts Gov. Charlie Baker for his signature.

MASSACHUSETTS’ BILL INSTITUTES A STATEWIDE BAN ON POLICE USE OF FACIAL RECOGNITION

Use of facial recognition has become a controversial topic in the artificial intelligence industry and the broader tech policy sphere because of a lack of federal regulation governing its use. That vacuum has allowed a number of companies — most prominently the controversial firm Clearview AI — to step in and offer services to governments, law enforcement agencies, private companies, and even individuals, often without any oversight or records of how the technology is used or whether it’s even accurate.

In August, Clearview AI — which has sold access to its software and its database of billions of images, scraped in part from social media sites, to numerous government agencies and private companies — signed a contract with Immigration and Customs Enforcement. (In May, Clearview said it would stop selling its tech to private companies following a lawsuit brought against it for violating the Illinois Biometric Information Privacy Act, which, prior to these more recent city bans, was the only piece of US legislation regulating facial recognition use.)

A number of researchers have been sounding the alarm for years that modern facial recognition, even when aided by advanced AI, can be deeply flawed. Systems like Amazon’s Rekognition have been shown to struggle to identify the gender of darker-skinned individuals and to suffer from other racial bias rooted in how the underlying databases are constructed and how the models are trained on that data. In June, Amazon banned police from using its facial recognition platform for one year, with the company saying it wanted to give Congress “enough time to implement appropriate rules” governing the sale and use of the technology.

Amazon was following the lead of IBM, which announced that same month it would stop developing the technology entirely, acknowledging criticism from researchers and activists over its potential use in racial profiling, mass surveillance, and other civil rights abuses.