Better News Network

US Lawmakers Grapple With AI Regulation With No Clear Consensus on Governing Rules

US lawmakers are grappling with what guardrails to put around burgeoning artificial intelligence, but months after ChatGPT got Washington's attention, consensus is far from certain.

Interviews with a US senator, congressional staffers, AI companies and interest groups show there are a number of options under discussion.

Some proposals focus on AI that may put people's lives or livelihoods at risk, such as in medicine and finance. Other possibilities include rules to ensure AI isn't used to discriminate or violate someone's civil rights.

Another debate is whether to regulate the developer of AI or the company that uses it to interact with consumers. And OpenAI, the startup behind the chatbot sensation ChatGPT, has discussed a standalone AI regulator.

It's uncertain which approaches will win out, but some in the business community, including IBM and the US Chamber of Commerce, favour an approach that regulates only critical areas like medical diagnoses, which they call a risk-based approach.

If Congress decides new laws are necessary, the US Chamber's AI Commission advocates that "risk be determined by impact to individuals," said Jordan Crenshaw of the Chamber's Technology Engagement Center. "A video recommendation may not pose as high of a risk as decisions made about health or finances."

Surging popularity of so-called generative AI, which uses data to create new content like ChatGPT's human-sounding prose, has sparked concern the fast-evolving technology could encourage cheating on exams, fuel misinformation and lead to a new generation of scams.

The AI hype has led to a flurry of meetings, including a White House visit this month at which President Joe Biden met with the CEOs of OpenAI, its backer Microsoft, and Alphabet.

Congress is similarly engaged, say congressional aides and tech experts.

"Staff broadly across the House and the Senate have basically woken up and are all being asked to get their arms around this," said Jack Clark, co-founder of high-profile AI startup Anthropic, whose CEO also attended the White House meeting. "People want to get ahead of AI, partly because they feel like they didn't get ahead of social media."

As lawmakers get up to speed, Big Tech's main priority is to push against "premature overreaction," said Adam Kovacevich, head of the pro-tech Chamber of Progress.

And while lawmakers like Senate Majority Leader Chuck Schumer are determined to tackle AI issues in a bipartisan way, the fact is Congress is polarized, a presidential election is next year, and lawmakers are addressing other big issues, like raising the debt ceiling.

Schumer's proposed plan would require independent experts to test new AI technologies prior to their release. It also calls for transparency and for providing the government with the data it needs to avert harm.

Government micromanagement

The risk-based approach means AI used to diagnose cancer, for example, would be scrutinized by the Food and Drug Administration, while AI for entertainment would not be regulated. The European Union has moved toward passing similar rules.

But the focus on risks seems insufficient to Democratic Senator Michael Bennet, who introduced a bill calling for a government AI task force. He said he advocates for a "values-based approach" to prioritize privacy, civil liberties and rights.

Risk-based rules may be too rigid and fail to pick up dangers like AI's use to recommend videos that promote white supremacy, a Bennet aide added.

Legislators have also discussed how best to ensure AI is not used to racially discriminate, perhaps in deciding who gets a low-interest mortgage, according to a person following congressional discussions who is not authorized to speak to reporters.

At OpenAI, staff have contemplated broader oversight.

In an April talk at Stanford University, Cullen O'Keefe, an OpenAI research scientist, proposed creating an agency that would require companies to obtain licenses before training powerful AI models or operating the data centres that facilitate them. The agency, O'Keefe said, could be called the Office for AI Safety and Infrastructure Security, or OASIS.

Asked about the proposal, Mira Murati, OpenAI's chief technology officer, said a trustworthy body could "hold developers accountable" to safety standards. But more important than the mechanics was agreement "on what are the standards, what are the risks that you're trying to mitigate."

The last major regulator to be created was the Consumer Financial Protection Bureau, which was set up after the 2007-2008 financial crisis.

Some Republicans may balk at any AI regulation.

"We should be careful that AI regulatory proposals don't become the mechanism for government micromanagement of computer code like search engines and algorithms," a Senate Republican aide told Reuters.

Thomson Reuters 2023



Monday, May 15, 2023 at 11:21 am
