
The AI Impact

Presented by Google

Cybersecurity


As artificial intelligence gets more sophisticated, federal policymakers have to reckon with what that means for national security.

With AI, the weapon can also be the shield. We examine how the U.S. is embracing AI as a tool to bolster its cyber defenses and what Congress is doing on the issue.

We also preview what’s on the horizon for AI policy and cybersecurity as part of our larger project on the ever-evolving technology.


How AI Can Strengthen Digital Security

 

To tip the cybersecurity scales in favor of the cyber defenders, rather than the attackers, Google has launched the AI Cyber Defense Initiative. Through this initiative, Google is awarding new research grants to leading academic institutions and training a new cohort of cybersecurity startups – commitments designed to help AI secure, empower and advance our collective digital future.

 

Learn More

Current State of Play in Washington

Hackers targeting major U.S. entities — whether they’re acting on behalf of nation-states or independently — have used artificial intelligence to their advantage. The result is a broadening of the cybersecurity threat landscape.

The irony, of course, is that AI can also help defend against cyberattacks.

Chief among America’s adversaries are China and Russia, which have the intelligence community concerned about the use of AI to conduct cyber espionage, according to a recent assessment of national security threats.

Beijing is mulling “aggressive cyber operations” targeting the United States if China’s leaders believe a major conflict is imminent, the report says. And Moscow can target critical infrastructure, including “underwater cables and industrial control systems.”

Meanwhile, the U.S. government is looking to use the technology to bolster cybersecurity at federal agencies and private companies.

Congressional considerations: Members of Congress concerned about cybersecurity are delving into the AI world by first working to mitigate the potential harms that inevitably come with the technology. 

The broad goal here is to ensure AI does more to bolster U.S. cybersecurity than it does to facilitate attacks on critical infrastructure, such as power grids, and on basic government operations.

Earlier this year, the House established a bipartisan task force on AI that has been meeting regularly for several weeks now. Over in the Senate, Majority Leader Chuck Schumer is spearheading a bipartisan effort that began with educating lawmakers about AI, including the benefits and risks associated with it. Both efforts are in their early stages, though, so we don’t expect legislative action in the near term.

We’ll delve later in this edition into the legislative options Congress is considering and where they stand.

The Spotlight Interview

Rep. Jay Obernolte (R-Calif.) is the chair of the House’s AI task force. He’s also the only member of Congress with a master’s degree in AI and is playing a leading role in developing cybersecurity policy.

We recently spoke with him about what Congress can do regarding AI and how government agencies and private companies can use the technology to harden their cyber defenses. 

Here are Obernolte’s remarks. They have been edited for brevity and clarity.

“The bad news is that, like other fields, AI is going to enhance the productivity of malicious actors. It is going to increase the prevalence of cybersecurity threats. The good news is that the best defense against the use of AI for cyberattacks is the use of AI to defend against cyberattacks. And AI turns out to be very, very good at that.

“I am cautiously optimistic that we will eventually end up in a space that’s safer than the space that we are now because I think that AI will asymmetrically bolster defense more than it enhances offense.”

“We need to make sure that all of our government agencies are hardened against the malicious use of AI in cyberattacks.

“In a larger sense, I think we’ve discovered through events — like the Colonial Pipeline hack two years ago — that even cyberattacks on what we would think of as private industry can have an incredibly deleterious effect on our national security even though they don’t touch any aspect of government.

“You think about how much worse it could be if it was a whole sector of our economy that was targeted. So I think that that’s broadened our thinking about what it means to be cyber-secure.”

“There’s nothing inherently partisan about the regulation of artificial intelligence. Inevitably there will be areas of disagreement, but I think what you’re hearing is an acknowledgment that whatever we do has to be durable.

“And the only way to make it durable is to make it so that it doesn’t change every time the winds of political fortune shift a little bit. And that requires broad buy-in from both sides of the aisle. That’s what we’re trying to achieve.

“This is an issue that’s going to span multiple Congresses and the political prognosticators are all over the map on what their prediction is. Given that fact, I think it really highlights the need for anything that we do to be broadly bipartisan and bicameral because it does us no good at all to come up with the perfect framework in the House if we then can’t get it past the Senate.

“So we have to engage the Senate, we have to engage the White House because the executive branch’s buy-in is going to be critical to implementing this as well.”

The Policy Pipeline

AI technology is evolving at a rapid pace, posing a new challenge for lawmakers who are accustomed to the slow churn of the legislative process.

Any policy Congress passes on AI would need to withstand both the technology’s rapid advancement and the political winds that shift every two years.

The challenge for lawmakers is to get both parties, both chambers and the industry on board if Congress is to produce an ironclad AI policy.

Working together: So far, the issue has proven a rare source of bipartisanship in a deeply divided Congress, with lawmakers from opposite parties teaming up on AI and cybersecurity legislation.

For instance, Sens. Eric Schmitt (R-Mo.) and Gary Peters (D-Mich.) recently introduced the AI and Critical Technology Workforce Framework Act. 

The goal of their bill is to grow the AI workforce and discourage foreign outsourcing by modernizing standards for the use of AI in cybersecurity and other fields. Schmitt said the bill has “endless” potential to boost defense capabilities.

Earlier in March, the House Oversight Committee approved the Federal AI Governance and Transparency Act. This legislation, co-authored by the panel’s top Republican and Democrat, would establish guidelines for the use of AI in the federal government.

Even as lawmakers sound the alarm about AI and its promises and perils, the path for legislation remains unclear.

In the meantime, the White House has been busy promoting various efforts on the issue, including a comprehensive executive order that, among other initiatives, promotes AI education in the federal government.

— Andrew Desiderio


Advertisement

Startups Using AI to Change the Future of Cybersecurity

 

The global market for AI-based cybersecurity products will surge to an estimated $135 billion by 2030. That’s why Google has launched the Google for Startups Growth Academy: AI for Cybersecurity. Over the next three months, startups across the U.S. and Europe will work one-on-one with Google AI and cybersecurity experts. Through personalized workshops and mentoring sessions, they will be equipped with Google’s security tools, best practices, and connections to help grow their businesses.

 

Learn More
