Helen Toner never expected a board vote to make her a household name in tech policy. But when OpenAI’s leadership crisis spilled into public view in 2023, her role as a director—and as one of the AI safety community’s prominent voices—put her at the heart of Silicon Valley’s most consequential fight over the future of artificial intelligence.
That experience vaulted her into a rare position: someone trusted in both Washington, D.C., and Silicon Valley to speak plainly about the risks of AI. Now, as the new director of Georgetown University’s Center for Security and Emerging Technology (CSET), the D.C. think tank she cofounded earlier in her decade-long policy career, she’s channeling that hard-earned credibility into shaping how the U.S. confronts the technology’s national security stakes.
Fast Company spoke with Toner about U.S.-China competition in AI, the growing influence of industry lobbying, and the challenges of safeguarding AI in a rapidly evolving landscape. The conversation has been edited for length and clarity.
Why did you take on this new job at Georgetown? What work can be done via a D.C. tech think tank to influence the future of AI?
CSET is a 50-person organization within Georgetown, and we focus on policy research. We’re not academics—we do analysis, write papers, and brief policymakers, all aimed at helping policymakers and other decision-makers understand the implications of emerging tech for their work, in particular on the national security side. In this new role, I will be leading the organization as a whole: our analysis team, which houses our main researchers; our data team, which is the core of CSET’s evidence-driven, data-driven model; our operations team; and our external affairs team—helping that whole organization work together and succeed.
Is all this coming through the lens of national security?
Yes, that’s our driving focus—implications of emerging tech for national security. Of course, there are different interpretations of that. Some of our work is very squarely in that bucket, in particular work on military applications of AI or the geopolitical dynamics of U.S.-China competition, which is a good chunk of our work, or AI and cybersecurity, AI and biosecurity. Different people set the boundaries of national security in different places. We have a big effort on talent and workforce, for example, where it’s easy to draw out the national security implications, but it’s a little less standard fare for the DoD or the intelligence community than some of our other work.
Is this mainly government-facing or is your organization going to have an influence on the wider AI industry, or on the way that the government works with the wider industry?
We definitely think of policymakers as our core audience. That includes the federal level, but also state legislators who are increasingly looking at AI as something they want to be active on. We also see a number of other audiences beyond just policymakers—decision-makers in industry for sure, the broader media to some extent, the broader public.
We’re always trying to estimate whether the U.S. still has a lead in AI models, and whether we are leading in robotics, automation, and so on. What’s your take on the state of play? Is it possible that this new “AI race” is being overblown?
This is something CSET has done a lot of work on, and we’re known for [offering] really grounded analysis of what is going on in China, [which] clearly wants to be competing with the U.S. Different people—different industry leaders, different policymakers—mean different things by that. On both sides, some people mean a genuine race to AGI or a race to superintelligence. Other people mean competition in the same sense that we talk about competitive markets—trying to win users, trying to win revenue.
The military side is one area where there’s just very clear zero-sum competition: the U.S. wants our military to be stronger and more effective than China’s, and the Chinese want the reverse. So I think it’s very important to be thinking about how to effectively adopt AI in the military. But that’s a different question than who has the newest, shiniest model release. So to try and sum that up: I think there is real competition, but what exactly that means depends on who you ask. Depending on which type of competition you zoom in on, you want to be looking at different indicators and considering different measures of success. The answer you get about who is “ahead” comes out differently depending on which area you choose to focus on.
Is the competition happening on the level of wanting the world’s AI to run on U.S.-made AI models in the same way that we want the world’s business to run on the dollar?
I think China sees itself as being a great civilization that, for various reasons, missed out on what they would call the first three industrial revolutions, and was really trailing behind in a way that wasn’t in keeping with their conception of themselves as a great power in the world. So they see AI broadly as an opportunity to reverse that trend and instead be a global leader. Within China-watching circles, there are big debates about what exactly that means. What does China leading in the world look like in general? Is that something that involves expansionism? Is it something that purely involves taking Taiwan back and then being satisfied with their sphere of influence there?
How would you describe the way the Chinese government involves itself in the Chinese AI industry, especially defense applications? By making grants to Chinese AI companies, can the government steer the focus of the research?
Typically, the way the Chinese government will work is less that they will directly meddle and directly go in and say, “Hey, you have to do this, you have to do that.” Typically, their preference is to set broad guidance or provide some priorities or some overarching areas of emphasis and then they’ll let companies, provincial governments, local governments kind of figure out their own way of hitting that.
So what we tend to see is less that they invest through a fund and then go in and tell the CEO what to do, and more that they’ll make central pronouncements, have party members on boards, or have party cells inside the companies that are more gently steering along the way, while also making sure there’s a channel for information between the Party and the companies, so that when things come up there’s an ability to exert influence.
What did you make of the Trump administration’s AI Action Plan?
There’s a lot to like in the substance of the AI Action Plan. The rhetoric is very different from under the Biden administration, but there’s continuity on many of the underlying points, like competing with China, facilitating infrastructure buildout, and building sensible guardrails to unlock innovation. I hope the relevant agencies have the resources and the AI expertise to implement the plan thoughtfully.
Regarding the way U.S. AI companies are working in Washington, D.C., my impression is that they’re adding staff and perhaps spending more money on lobbying. Do you perceive an overall strategy by those companies to, for example, make sure that no meaningful safety regulation starts to bubble up in Congress?
I think we’ve definitely seen a real ramp-up in the size and the sophistication of AI companies’ efforts in D.C. Some of them have had very sophisticated efforts for a long time. You know, Microsoft—this is not their first rodeo. But I think certainly as the companies are growing, as interest in D.C. is growing in their work, they’re staffing up to deal with that.
A big motivator for CSET in the work that we do is wanting to be able to bring a perspective that is really technically informed and technically accurate to these topics. Congressional staffers or other folks in government often get [this] from the [AI] companies. The companies will tell them here’s how the technology works, how the industry works, what’s realistic or not realistic. It’s important for policymakers to have that information, but you ideally want them to be getting it from a party that is operating in the interest of the public rather than the interest of the company. Our mission is to advance the public interest, not to advance our bottom line.
Do you believe U.S. AI companies are spending enough on safety research relative to their spend on regular model and application R&D? Is there even a way to measure that?
In general, I don’t know that there is a clean distinction between regular R&D spending and safety R&D spending. Often there are connections between the two. For example, if a model tends to fail on a certain kind of question, from one lens you could say that’s a safety problem; from another lens, you could say it’s a usability or capability problem.
I think the most relevant questions are about what happens when there are decisions that would be overall beneficial for the world but maybe not in a company’s short-term business interest: what structures and processes do they have in place to make those decisions, and do they actually follow through? Something you’ve seen, for example, is companies committing to do a certain amount of testing, and then, after the fact, off-the-record reporting that the testing was rushed or not completed because they were trying to launch before a competitor.
Is it your impression that the off-the-record reporting was true and that this might still be going on?
I don’t have any independent information. I just have what’s reported.
A lot of people on the West Coast are talking about whether or not there’s an AI bubble. Do you have any thoughts about that? Are AI companies focusing more on applying their models and generating revenue, and focusing less on loftier goals like AGI and superintelligence?
There’s definitely been chatter about whether we’re in a bubble here as well. The perspective that makes the most sense to me is that two things can be true at once. Some of the generative AI-focused, high-valuation VC investments in early-stage companies promising to build revolutionary products within a couple of years can be a bubble: there can be overinflated expectations there. And it can also be true that the underlying technological improvements in AI are continuing, and that the companies really investing in those underlying trends (the OpenAIs and Anthropics of the world) are on to something, and are likely to keep succeeding and to see their revenues continue to rise.
Another way to say a similar thing is to point to the dot-com bust in the early 2000s, where there were investors who had gotten out over their skis and lost a lot of money. But the underlying trends were real and the underlying impacts on society were significant and continued after that bubble burst.
Many people were disappointed in OpenAI’s GPT-5, feeling like the pace of advancement toward artificial general intelligence (AGI) and superintelligence is slowing, if not stalled. What’s your take?
Two things are true about GPT-5. First, it’s evidence that we’re not on track for the very fastest scenarios toward AGI or superintelligence—for example, AGI by 2027. But second, it still fits on a trend line of steady continued advancements over the last five years. So I disagree with the sentiment that GPT-5 shows that progress is slowing down.
It seems like running an AI company, whether it’s developing models or applications, is just a really expensive business. Do you think the industry needs to find some fundamental research breakthrough to make the cost of doing this business more viable?
No, I’m not sure that they do. I think it’s actually really common for new technologies—especially technologies that are very flexible and general purpose—to take years or even decades for industry and society to figure out [how] to get the most value out of that technology. If you look back at a wide range of general-purpose technologies—electricity or the computer or different communications revolutions—that’s been the pattern. I don’t know if we’ll see the investments keep increasing at the same rate that they have been, going up by 10X every however many years. But I do think that we’re going to keep seeing the returns on those investments keep going up as people figure out how to make use of the advances that we’ve already seen.