Harris to meet with CEOs about artificial intelligence risks

WASHINGTON (AP) — Vice President Kamala Harris will meet on Thursday with the CEOs of four major companies developing artificial intelligence as the Biden administration rolls out a set of initiatives meant to ensure the rapidly evolving technology improves lives without putting people’s rights and safety at risk.

The Democratic administration plans to announce an investment of $140 million to establish seven new AI research institutes, administration officials told reporters in previewing the effort.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There will also be an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

Harris and administration officials on Thursday plan to discuss the risks they see in current AI development with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI. The government leaders’ message to the companies is that they have a role to play in reducing the risks and that they can work together with the government.

Authorities in the United Kingdom also said Thursday they are looking at the risks associated with AI. Britain’s competition watchdog said it’s opening a review of the AI market, focusing on the technology underpinning chatbots like ChatGPT, which was developed by OpenAI.

President Joe Biden noted last month that AI can help to address disease and climate change but also could harm national security and disrupt the economy in destabilizing ways.

The release of the ChatGPT chatbot late last year has led to increased debate about AI and the government’s role in overseeing the technology. The latest technology’s ability to generate human-like writing and fake images has added to ethical and societal concerns about automated systems.

Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That’s made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it’s stealing from copyrighted works.

Companies worried about being liable for something in their training data might also not have incentives to rigorously track it, said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

“I think it might not be possible for OpenAI to actually detail all of its training data at a level of detail that would be really useful in terms of some of the concerns around consent and privacy and licensing,” Mitchell said in an interview Tuesday. “From what I know of tech culture, that just isn’t done.”

Theoretically, at least, some kind of disclosure law could force AI providers to open up their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won’t be easy for companies to provide greater transparency after the fact.

“I think it’s really going to be up to the governments to decide whether this means that you have to trash all the work you’ve done or not,” Mitchell said. “Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it’s already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over.”

While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.

The companies also face potentially tighter rules in the European Union, where negotiators are putting the finishing touches on AI regulations first proposed two years ago. The rules could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.

When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people’s safety or rights, such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.

But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

Foundation models are a sub-category of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of data.

A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.

Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and the European Data Protection Board set up an AI task force, in a possible initial step to draw up common AI privacy rules.

___

O’Brien reported from Cambridge, Massachusetts. AP Business Writer Kelvin Chan in London contributed to this report.