Can the Government Figure Out How to Regulate A.I.?



At a White House summit on July 21, the Biden administration brought together the heads of seven A.I. companies. A lot of the big names were there—Meta, Google, OpenAI—and they all signed “voluntary commitments” to safeguard artificial intelligence. In the Senate, Chuck Schumer is proposing a framework that legislators can use to tackle A.I. issues. But while the A.I. industry is moving at a breakneck pace, Washington is, as usual, slow to regulate.



On Friday’s episode of What Next: TBD, I spoke with Makena Kelly, who covers politics and policy for The Verge, about whether Washington can keep up with A.I. Our conversation has been edited and condensed for clarity.






Lizzie O’Leary: What does this summit indicate about the Biden administration’s approach to regulating A.I.?












Makena Kelly: The Biden administration has been pushing for some kind of regulation on artificial intelligence because that industry has grown tremendously, not just in size but also in influence. I think that the summit at the White House was the Biden administration saying, “We need to do something, quick.”



What has Congress been doing in regard to regulation?



Senate Majority Leader Chuck Schumer has proposed a framework for how lawmakers should approach A.I. regulation, but Congress moves very, very slowly. The executive branch, however, has tools that Congress doesn’t, including the presidential bully pulpit and executive orders. The goal of this summit was to get all the major stakeholders together and set these standards, which is important, but there’s not really any way to enforce them.






In a press briefing with reporters, the White House said that an executive order is coming down the line, but it declined to give any real description of what that order would contain, saying it would be an interagency effort. So, when we talk about why this meeting was important, it’s because the executive branch can move more quickly than the other branches of government.






Can you tell me a little bit about your conversations with these companies?



The White House wanted universal requirements, but these companies all want to do different things. One specific example is the watermarking requirement the White House proposed. Something that’s trending on TikTok right now is people using A.I. to turn their own photos into professional headshots so they don’t have to pay a photographer. If you or I were to create something like that, the White House’s watermark standard would require those photos to carry a watermark that says, “This is A.I.-generated content.” That’s about combating misinformation and disinformation, and just building trust with users.
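Here’s a minimal sketch, assuming Python and the Pillow imaging library, of what a visible disclosure label could look like in practice. The file names and label wording are illustrative; the White House commitments don’t prescribe any particular implementation.

    # A minimal sketch of a visible "A.I.-generated" disclosure label,
    # using the Pillow imaging library. File names and label wording are
    # illustrative, not part of any announced standard.
    from PIL import Image, ImageDraw

    img = Image.open("ai_headshot.png").convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Stamp a semi-transparent label in the bottom-left corner.
    draw.text(
        (10, img.height - 30),
        "This is A.I.-generated content",
        fill=(255, 255, 255, 180),
    )

    watermarked = Image.alpha_composite(img, overlay)
    watermarked.convert("RGB").save("ai_headshot_watermarked.png")

A visible stamp like this is easy to crop out, which is one reason the companies have also discussed machine-readable approaches like the metadata tags described below.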









When I surveyed some of these companies, Google had already announced at its I/O conference earlier this year that it was going to have some sort of watermark, but OpenAI went further and said that it was going to create APIs, or application programming interfaces, for social media and other platforms to integrate into their systems. That would allow for a disclosure on A.I.-generated content on social media, similar to the misinformation tags on Facebook.
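OpenAI hasn’t published details of any such API, so the following is purely hypothetical: a sketch of how a social platform might query an imagined provenance endpoint before deciding whether to label an upload. The URL, request fields, and response shape are all invented for illustration.

    # Purely hypothetical sketch of a platform-side provenance check.
    # The endpoint and JSON fields are invented; no real API is implied.
    import requests

    def needs_ai_label(image_url: str) -> bool:
        resp = requests.post(
            "https://provenance.example.com/v1/check",  # invented endpoint
            json={"image_url": image_url},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("ai_generated", False)

    if needs_ai_label("https://example.com/uploads/headshot.png"):
        print("Attach an 'A.I.-generated content' disclosure to the post")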



Another thing the companies weren’t in total agreement about was embedding information about A.I. generation in the metadata of A.I.-generated images, which would say that the image was generated by so-and-so platform.
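As a rough illustration of what such a metadata tag could look like, here’s a minimal sketch using Pillow to write and read a PNG text chunk. The key names and the generator value are hypothetical; real provenance efforts, such as the C2PA standard, define richer, cryptographically signed records.

    # A minimal sketch of embedding provenance in PNG metadata with Pillow.
    # The keys and generator name are hypothetical, not a published standard.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-image-model")  # hypothetical platform

    img = Image.open("ai_headshot.png")
    img.save("ai_headshot_tagged.png", pnginfo=meta)

    # A platform could read the tag back before deciding to label a post.
    print(Image.open("ai_headshot_tagged.png").text)

A tag like this only survives if platforms preserve image metadata, which many strip on upload today.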









OpenAI CEO Sam Altman told the Senate, “We believe the benefits of the tools we’ve deployed so far vastly outweigh the risks,” and “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” When members of Congress hear that, do they take those pleas into account and craft the regulations accordingly?







I wouldn’t say that they craft the regulations accordingly, but they do take these things into consideration. OpenAI and all the other A.I. companies have seen what has happened in other corners of the tech industry: the anger, the aggressiveness, the persistent push to regulate, and the hostility between the tech companies and lawmakers. It seems like A.I. companies are trying to put a shiny, happy face on A.I. and say, “We want to get this right.”



Are there distinct policy positions on the Democratic side and the Republican side?



I think most lawmakers are saying, “Look at this bad thing. We have to do something.” I don’t think it’s necessarily a party-line division as much as it is a division between lawmakers who are a bit more business-friendly and lawmakers who have more of a consumer advocacy focus.












We’ve talked about the executive side, we’ve talked about Congress, and then we have the agencies. At the FTC, Lina Khan has been very muscular about going after various companies, with a mixed record, but she has not been shy, and tech certainly knows that. How does the FTC regard A.I.?



Lina Khan, like you said, hasn’t been shy. She has said that under current law, she can go after these companies for discrimination, for false advertising, for fraudulent behavior. This is all stuff the FTC is authorized to do under the FTC Act. The Federal Election Commission, for its part, has said that under current law it can go after disclosures in A.I.-generated political ads. It has declined to create specific rules, though, and some petitions have deadlocked 3–3, meaning they don’t go anywhere. A lot of those petitions have been revised, and I think the commission is mulling over specific rules again. But when it comes to the regulatory agencies enforcing laws that are already on the books, I think there is some momentum there—not just to investigate, but to set standards at that level, as well.












While the White House may issue an executive order around A.I., it would likely only apply to the government’s use of it.





How much power does an executive order actually have? When it comes to regulating private industry, I don’t really know. The executive branch has far more power to regulate how federal agencies and government workers use A.I., so I imagine officials might be talking to the Pentagon or the Department of Education and saying, “Craft some rules around how the government can use A.I.” That’s probably where we’re going to end up.






How much does the ghost of Washington’s failure to regulate social media hang over this conversation about A.I.?






They still have to work on that, too. Biden has made it a priority to have some kind of child privacy and online protection rules in place before the election, and Congress has to do that. Earlier this year, the child privacy push coincided with the debt ceiling fight. The debt ceiling was really important and needed to get done, so child privacy got tossed to the side for other priorities.



We’ve talked about how these past failures loom over this conversation.

Well, now lawmakers have a really big bucket of stuff they have to do, and they have to reprioritize. So are they going to go ahead and do child privacy on the tech front, or will they prioritize A.I.?




Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.






