Opinion | Are These States About to Make a Big Mistake on AI?

AI is potentially transformative. Whether that’s a good or bad thing depends on whether we set the right rules.

May 2, 2024

If you’ve looked for a job recently, AI may well have a hidden hand in the process. An AI recruiter might have scored your LinkedIn profile or resume. HR professionals might have used a different AI app to scrape your social media profiles and spit out scores for your cheerfulness and ability to work with others. They also might have used AI to analyze your word choice or facial expressions in a video interview, and used yet another app to run an AI background check on you.

It’s not at all clear how well these programs work. AI background check companies have been sued for making mistakes that cost people job opportunities and income. Some resume screening tools have been found to perpetuate bias. One resume screening program identified two factors as the best predictors of future job performance: having played high school lacrosse and being named Jared. Another assessment tool awarded high English-proficiency scores even when candidates answered questions exclusively in German.
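
To make the failure mode concrete, here is a minimal, hypothetical sketch of how this happens. Every feature and number below is invented (this is not any vendor’s actual system): a screening model trained on biased historical hiring decisions will faithfully learn to reward a spurious proxy like lacrosse over actual skill.

```python
# Hypothetical sketch: a resume screener trained on biased historical hires.
# All features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)                 # genuine job-related signal
played_lacrosse = rng.binomial(1, 0.2, n)  # spurious proxy feature

# Simulated past hiring decisions that favored the proxy over skill.
hired = (0.5 * skill + 2.0 * played_lacrosse + rng.normal(size=n)) > 1.0

X = np.column_stack([skill, played_lacrosse])
model = LogisticRegression().fit(X, hired)

# The model dutifully reproduces the bias baked into its training data:
# the proxy's learned weight dwarfs the weight on actual skill.
print(dict(zip(["skill", "played_lacrosse"], model.coef_[0].round(2))))
```

The model isn’t malfunctioning; it’s doing exactly what it was trained to do, which is why testing matters.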

To make matters worse, companies don’t have to tell you when they use AI or algorithmic software to set your car insurance premiums or rent, or make major decisions about your employment, medical care or housing, much less test those technologies for accuracy or bias.
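
For a sense of what such testing could involve, one long-standing benchmark regulators cite in hiring is the EEOC’s “four-fifths rule,” which flags a selection tool when any group’s selection rate falls below 80 percent of the top group’s rate. A minimal sketch, with invented applicant counts:

```python
# A minimal sketch of the EEOC "four-fifths rule" adverse-impact check.
# The applicant counts below are invented for illustration.
def adverse_impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group's selection rate to its ratio against the top group.

    selections: group -> (number selected, number of applicants).
    """
    rates = {group: sel / total for group, (sel, total) in selections.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

ratios = adverse_impact_ratios({"group_a": (48, 100), "group_b": (30, 100)})
for group, ratio in ratios.items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

The check itself is simple arithmetic; the point is that nothing currently obligates companies to run it, let alone disclose the results.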

To consumer and worker advocates like us, this situation practically screams out for regulation. Unfortunately, Congress has struggled for years to pass meaningful new regulations on technology, and there’s no guarantee that will change soon. State legislatures are picking up the slack, with a number of AI bills currently in the works — but they won’t solve the problem either, unless they’re reformed.

When the main AI bias bills currently under consideration appeared in state capitols across the country earlier this year, we were disappointed to see that they contained industry-friendly loopholes and weak enforcement. Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.

In recent months, legislators in at least 10 states have introduced a spate of closely related bills that address AI and algorithmic decision-making in a wide range of settings, including hiring, education, insurance, housing, lending, government services and criminal sentencing. Key sections of many of these bills are strikingly similar to each other, and to model legislative text prepared by a major HR tech company.

Some of these bills have a real chance of passing this year. The Connecticut Senate passed a sweeping bill covering automated decision-making along with many other AI-related topics last week. The chair of the California Assembly’s Privacy Committee is sponsoring a bill that would establish similar rules for AI decision-making, and the Colorado Senate’s majority leader is sponsoring yet another version. If state lawmakers aren’t careful, we may see laws spread across the country that don’t do enough to protect consumers or workers.

The Devil’s in the Details

At first glance, these bills seem to lay out a solid foundation of transparency requirements and bias testing for AI-driven decision systems. Unfortunately, all of the bills contain loopholes that would make it too easy for companies to avoid accountability.

For example, many of the bills would cover only AI systems that are “specifically developed” to be a “controlling” or “substantial” factor in a high-stakes decision. Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision.

Sound policy would also address the fact that we often have no idea if a company is using AI to make key decisions about our lives, much less what personal information and other factors the program considers.

Solid regulation would require businesses to clearly and directly tell you what decision an AI program is being used to make, and what information it will employ to do it. It would also require companies to provide an explanation if their AI system decides you aren’t a good fit for a job, a college, a home loan or other important benefits. But under most of these bills, the most a company would have to do is post a vague notice in a hidden corner of their website.

The Connecticut and Colorado bills also let companies withhold anything they consider “confidential” or a “trade secret” from disclosures — but companies often claim lots of ordinary information is confidential or proprietary. Trade secret protections can be misused: Theranos, the famously fraudulent blood testing startup, reportedly threatened to sue former employees under trade secret laws for disclosing their concerns that the technology simply didn’t work.

Most of the bills also contain weak enforcement mechanisms, preventing consumers and workers from filing lawsuits, and instead putting enforcement in the hands of overstretched and understaffed agencies. Even if a bill requires all companies to tell consumers about their AI-driven decisions, it won’t do much good if companies know it’s unlikely they will be caught or face meaningful penalties. Similar flaws have allowed companies to almost completely ignore a 2021 AI hiring law adopted in New York City.

The upshot is that these bills would essentially let companies keep doing what they already do: Use AI to make key decisions about us without accountability.

[Photo caption: Democratic state Sen. James Maroney of Connecticut explains a far-reaching bill that attempts to regulate artificial intelligence during a debate in the state Senate in Hartford, Connecticut, on April 24.]

What Should Happen Next

The first AI bias bill that becomes law will have ripple effects across the country.

People like to talk about states as “laboratories of democracy,” experimenting with different solutions and figuring out what works best. But that’s not how it always works in practice. If you look at the data privacy laws passed by 15 states, many of them are remarkably similar, down to the sentence level. That’s because state lawmakers often start by copying a privacy law that another state passed and then make minor tweaks. It’s not an inherently bad process, but it does mean that if one state sets a poor standard, it can spread to many others.

We’ve seen industry-friendly loopholes pop up in one privacy bill — such as an exemption for companies covered by a broad federal financial law — and then get copied into about a dozen others.

With that in mind, here is what legislators working on AI bias and transparency bills should do to ensure their legislation actually helps workers and consumers:

  • Tighten definitions of AI decision systems. If an AI program could alter the outcome of an employment, housing, insurance or other important decision about our lives, the law should cover it — full stop.
  • Require transparency, early. Companies should tell us how AI systems make decisions and recommendations before a key decision is made, so that we understand how we will be evaluated and so that disabled workers and consumers can seek accommodation if necessary.
  • Require explanations. When an AI-driven decision denies someone a job, housing or another major life opportunity, companies shouldn’t be allowed to rely on uninterpretable AI systems or keep us in the dark. A law should require explanations of how and why an AI decided the way it did.
  • Eliminate the loophole for “confidential information” and “trade secrets.” Companies have frequently used claims of confidentiality and trade secret protection to cover up illegal activity or flaws in their products.
  • Sync the bills with existing civil rights laws. Definitions of algorithmic discrimination shouldn’t include exceptions or loopholes that suggest discrimination by a machine somehow deserves special treatment, as compared to discrimination through other means.
  • Make enforcement strong enough to ensure companies will take the law seriously. Allow consumers and workers to bring their own lawsuits or, at a minimum, give enforcement agencies the resources and expertise to enforce the law vigorously and impose stiff fines on companies that violate the law.

This isn’t just the responsibility of state lawmakers. At the federal level, Congress and agencies in Washington may be able to keep this from becoming a national problem. Congress could pass a law providing transparency and consumer protections when companies use AI for high-stakes decisions. In the meantime, agencies can issue rules and guidance under their existing authority. Some have already made moves: The Consumer Financial Protection Bureau, for example, has clarified that creditors using complex algorithms still have to provide specific and accurate explanations when they deny people lines of credit. If an algorithm is so complex that the company can’t explain it, the CFPB says the company can’t use it.
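
For a simple enough model, producing those explanations is mechanical. Here is a hedged sketch, with an invented model, features and threshold, of how the “principal reasons” that must accompany a credit denial can be read directly off a linear scoring model; the CFPB’s requirement is a legal standard, not any particular piece of code.

```python
# A hypothetical sketch of turning a simple credit model's output into the
# "principal reasons" that accompany a denial. The model, features and
# threshold are all invented for illustration.
import numpy as np

FEATURES = ["credit_utilization", "late_payments", "account_age_years"]
WEIGHTS = np.array([-2.0, -1.5, 0.8])  # hypothetical trained coefficients
BIAS, APPROVAL_THRESHOLD = 1.0, 0.0

def decide_with_reasons(applicant: np.ndarray) -> tuple[bool, list[str]]:
    contributions = WEIGHTS * applicant
    approved = contributions.sum() + BIAS >= APPROVAL_THRESHOLD
    # Rank the features that pushed the score down the most; these become
    # the adverse-action reasons sent to a denied applicant.
    order = np.argsort(contributions)
    reasons = [FEATURES[i] for i in order if contributions[i] < 0][:2]
    return bool(approved), reasons

approved, reasons = decide_with_reasons(np.array([0.9, 3.0, 1.2]))
print("approved" if approved else f"denied; principal reasons: {reasons}")
```

A model too opaque to support even this kind of accounting is, on the CFPB’s view, a model the company shouldn’t be using.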

Right now, however, we are warily watching the bills moving through the statehouses in Hartford, Sacramento and elsewhere. The problems in this current crop of AI bills can be fixed: Loopholes can be closed, transparency provisions improved and accountability sections strengthened. We’ve seen some promising changes in Connecticut; in response to consumer and labor advocates’ demands, the pending version of Connecticut’s AI bill ditches the loophole-filled definition of covered AI systems and would require businesses to provide some explanation when they use AI to make a high-stakes decision about you.

But whether those changes, and others still needed to protect consumers, will make it into Connecticut’s final law remains to be seen. Connecticut Gov. Ned Lamont has expressed skepticism of his state’s bill — not because the bill doesn’t do enough to protect consumers, but because he fears it will hurt Connecticut’s standing with the business community.

There’s far more work to be done — in Connecticut, California and across the country.
