At this year’s AWS Summit in Washington, DC, I had the privilege of moderating a fireside chat with Steve Schmidt, Amazon’s Chief Security Officer, and Lakshmi Raman, the CIA’s Chief Artificial Intelligence Officer. Our discussion explored how AI is transforming cybersecurity, threat response, and innovation across the public and private sectors. The conversation highlighted several key themes: how organizations can leverage AI to improve security outcomes, the rise of agentic AI and its impact on security, the importance of maintaining human oversight in AI systems, workforce development strategies, and practical approaches to implementing AI securely in enterprise environments. Below are a few excerpts from our conversation.
On leveraging AI to improve security outcomes
Steve Schmidt: “We’ve applied AI internally at Amazon in a couple of places that led to some significant benefits, including in the application security review process. By training our large language models internally on prior security reviews that we’ve done, it has allowed us to apply the knowledge and learning that our more senior staff have embodied in the documents that the LLM was trained on and expose that to our more junior staff. It really raises the bar on the absolute level of security that we can offer.”
Lakshmi Raman: “In the cybersecurity realm, we’re thinking about how AI helps us in our accreditation and authorization process, helping us ensure that the process to get systems accredited is going as quickly as possible, because the industry is moving so fast. Another area where we’re applying AI and machine learning is triaging data. We have vast amounts of data that come in at an exponential rate, so we need to be able to go through it quickly so that we can surface insights. You can imagine a cybersecurity analyst who traditionally has gone through network data manually in order to think about blocking suspicious IP addresses or connections. Now there’s an opportunity to do all of that really efficiently and let the security analysts make the decision.”
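The triage pattern Lakshmi describes, where a model surfaces high-risk network events and the analyst retains the blocking decision, can be sketched roughly as follows. This is an illustrative sketch only: the `NetworkEvent` fields, the `suspicion_score` heuristic (a stand-in for a trained model), and the threshold are all assumptions, not any real agency or AWS system.

```python
# Hypothetical sketch of AI-assisted triage: a model-like scorer ranks
# network events, and only high-risk events are surfaced for a human
# analyst to review. The code never blocks anything on its own.
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    src_ip: str
    dest_port: int
    bytes_out: int

def suspicion_score(event: NetworkEvent) -> float:
    """Stand-in for a trained model's risk score in [0.0, 1.0]."""
    score = 0.0
    if event.dest_port in (23, 3389):   # telnet / RDP: rarely legitimate outbound
        score += 0.5
    if event.bytes_out > 10_000_000:    # unusually large outbound transfer
        score += 0.4
    return min(score, 1.0)

def triage(events, threshold=0.5):
    """Surface only high-risk events for human review; never auto-block."""
    return [e for e in events if suspicion_score(e) >= threshold]

events = [
    NetworkEvent("10.0.0.5", 443, 12_000),
    NetworkEvent("203.0.113.9", 3389, 25_000_000),
]
for e in triage(events):
    # The analyst, not the pipeline, decides whether to block.
    print(f"Review: {e.src_ip}:{e.dest_port}")
```

The key design choice, echoing the quote, is that the automated step only filters and ranks; the decision to block stays with the analyst.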
On the rise of agentic AI and its implications for security
Steve Schmidt: “The biggest change we’re seeing right now in AI is the rise of agentic AI. The reason agentic AI is particularly interesting is that it brings with it a set of challenges about ensuring the software is taking actions within the context of the person who’s asking it…Think about that in the context of a government organization, where you have sets of information that are restricted to certain populations, there are classification decisions, access control limitations, and reasons that you can access certain data that have to be present before you can do so. Agentic AI brings opportunities—you can take actions using software automatically—but also challenges: how do we make sure that the software is doing exactly the right thing every single time, and more importantly, that we can prove what it did to stakeholders and regulators?”
Lakshmi Raman: “AI agents definitely have an opportunity to transform enterprise automation. Leveraging them to do complex multi-step workflows—to do tool calling across a variety of databases and other foundational tools—has tremendous potential, with a human as a crucial step to review what’s going on.”
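The two requirements raised above, that an agent act only within the requesting user's permission context and that a human review state-changing actions, can be sketched as a simple gate around a tool call. This is a hypothetical sketch under stated assumptions: the `PERMISSIONS` map, action names, and `approve` callback are invented for illustration, not a real agent framework.

```python
# Hypothetical guardrail for an agentic tool call:
# 1) the action must be within the requesting user's permissions,
# 2) state-changing ("write:") actions require human approval,
# 3) every outcome is written to an audit log so actions can be proven later.
audit_log = []

PERMISSIONS = {
    "alice": {"read:tickets"},
    "bob": {"read:tickets", "write:firewall"},
}

def agent_act(user: str, action: str, approve) -> str:
    if action not in PERMISSIONS.get(user, set()):
        audit_log.append((user, action, "denied"))
        return "denied: outside caller's permissions"
    if action.startswith("write:") and not approve(user, action):
        audit_log.append((user, action, "rejected"))
        return "rejected by human reviewer"
    audit_log.append((user, action, "executed"))
    return "executed"

# A human reviewer sits in the loop for any state-changing action.
print(agent_act("alice", "write:firewall", approve=lambda u, a: True))
print(agent_act("bob", "write:firewall", approve=lambda u, a: False))
print(agent_act("bob", "write:firewall", approve=lambda u, a: True))
```

The audit log is the piece Steve points to: it is what lets you prove to stakeholders and regulators what the agent did, and under whose authority.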
On the importance of maintaining human oversight with AI
Lakshmi Raman: “In my world, I spend a lot of time thinking about how AI is impacting the workforce. One of the areas we’re looking at is the intersection between AI and our people. AI is able to speed up the processing and do automation, but at the end of the day, it’s really about who is taking on the risk, or deciding the intents and making the decisions. Whatever the machine output happens to be, really it’s about the human who’s deciding the level of oversight, the risk to take, and even whether to intervene.”
Steve Schmidt: “One thing that many people don’t realize about AI systems is that they’re nondeterministic. What nondeterminism means is you can ask an AI model the same question 100 times, and you will not get the same answer every time. So, having a human who can make a judgment about what the AI comes up with is critically important. We look at it this way: if you’re just asking a question and getting an answer, that may warrant one level of scrutiny. But if you’re going to take an action, you’ve got to be really sure the AI is correct. There has to be that skilled person that Lakshmi spoke about, at the end of the AI use process saying, ‘Yes, this is the right thing to do at this point in time with this context.’”
On building an AI-savvy security workforce
Steve Schmidt: “There’s a real problem in our industry: we don’t have enough security people. We simply can’t hire enough people with the right skills to do this job. What we’ve found is that AI allows us to do a lot of the heavy lifting for the security staff with tooling, work that used to have to be done by humans. Our staff is actually materially happier with their jobs if we remove a lot of that grunt work from them, which is super important. You want to keep the employees you have, so you give them tooling that helps them get the job done more efficiently, and they enjoy their job.”
Lakshmi Raman: “We’re looking for people who can work at the intersection of technology and social intelligence, people who can understand how those two areas can potentially interact around human behavior and how to think about future activities. When we’re thinking about analysts, for example, we’re thinking about people who have critical thinking skills, who can demonstrate analytic rigor, who can think multiple steps ahead with incomplete information. We’re also looking for people who have digital acumen with an understanding of cloud and cyber and AI, so that we have those technical skills in house. And finally, people who are interested in lifelong learning and curiosity, because threats change over the years. We need people who understand and are willing to learn about that.”
On advice for security leaders as AI accelerates
Steve Schmidt: “When you’re looking at making a decision, ask the person who’s bringing the information to you: ‘Why can’t AI do this?’ And if they don’t have an answer, ask ‘When will it be able to and under what condition?’ Move it into the now, the probable, the possible, and make it real for all of your staff all the time. If they’re not intentionally making that decision, they’re missing an opportunity.”
Lakshmi Raman: “You’ve got to get training out there for your users. We think of it at three different levels. First is our general workforce—which might be the most important user base—people who are sitting side by side with our AI practitioners and can help describe the workflows that need automation. Then we think about it for our practitioners, so they are keeping up with the latest. And then finally, our senior executives, who can think about how they can transform their organization with AI and generate that buy-in from the top level.”
AI is not just changing what we can do, but how we work. As Steve and Lakshmi emphasized, the most successful AI implementations will be those that thoughtfully balance automation with human oversight, focusing on use cases that deliver tangible value while managing risks appropriately. For security professionals, understanding both the technical and human dimensions of AI will be critical as we navigate this changing space.
Danielle Ruderman
Danielle is a Senior Manager for the AWS Worldwide Security Specialist Organization, where she leads a team that enables global CISOs and security leaders to better secure their cloud environments. Danielle is passionate about improving security by building company security culture that starts with employee engagement.