As the artificial intelligence phenomenon rolls on, the question emerges: What are the cybersecurity-attack implications of AI? Now Carnegie Mellon University’s Software Engineering Institute has formed a team called the Artificial Intelligence Security Incident Response Team. It’s working with sponsors in the Defense and Homeland Security Departments. For more, the Federal Drive with Tom Temin spoke with the Director of the CERT division of the Software Engineering Institute, Greg Touhill.
Greg Touhill All of the above. Tom, as we have been running the CERT now for 35 years, we've kind of developed the cyber incident response discipline. And arguably Carnegie Mellon and the Software Engineering Institute are the birthplace of cybersecurity. So we formed the original CERT, which I have the honor of leading right now. And we've evolved from just a computer emergency response team to a cybersecurity engineering and resilience team. We've been receiving, through our CERT Coordination Center responsibilities, numerous reports of incidents that involve supply chain attacks, data attacks, algorithmic attacks, and hardware and software attacks and defects. We're also seeing cyber reconnaissance, where folks are doing specific scans of AI systems, trying to derive information about the models, the architectures, the frameworks and such, based on the vulnerabilities that are being reported to us. At this point, all of them are embargoed; folks are sharing with us and saying, hey, we want you to help us protect our company and the victims. We're working with them through our responsible disclosure program to identify means of understanding what's happening, as well as working with those organizations on what they should do about it. I think it's really important for our audience to remember that a lot of the things being reported to us at the CERT mirror everything we've seen evolve over the last 30-plus years in the cyber realm. As we take a look at AI, AI is still doing things with software. The models are software-driven. The frameworks use the same types of techniques we're using in other places, just at a much grander scale. It leads us to believe that AI vulnerabilities are cyber vulnerabilities, and the things we're seeing through the CERT Coordination Center lead us to believe that we need to rethink how we do incident response when it's applied to an artificial intelligence or machine learning system.
Tom Temin Right. I guess my question is, what are the unique software or protection challenges of AI-powered systems versus regular old software applications, which are vulnerable enough as they are?
Greg Touhill The commonality is much greater than the differences, but the differences matter. The first is usually scale, where you will have AI systems taking advantage of distributed processing, often in multiple cloud environments, and frequently in cloud environments operated by multiple vendors. Second, the scale of the data that is ingested and the computing power are generally at magnitudes of complexity that dwarf normal IT and commercial types of systems. Third, as we take a look at the different types of models (and there are many different flavors of AI), there are often going to be multiple languages involved in the system itself, making it extremely difficult to tell the difference between an attack and a defect. And this is the last one: when you have something like generative AI, where the model is learning and the data is changing, you can never get back to that one-second instance where you had a problem. You can never really quite replicate it, which adds to the degree of difficulty. So that's why we formed this team: to take advantage of our experience, but also to grow the community in how to do incident response in an artificial intelligence environment.
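To make that last point concrete, here is a minimal sketch in Python (purely illustrative; the toy model and numbers are invented for this example, not CERT tooling) of why forensic replay fails against a continuously learning system: the state that produced the suspect output is overwritten by subsequent learning and never stored.

```python
import random

class OnlineModel:
    """Toy model whose single weight updates with every new observation."""
    def __init__(self, weight=1.0):
        self.weight = weight

    def predict(self, x):
        return self.weight * x

    def learn(self, x, y, lr=0.1):
        # One online gradient step; the state that produced any earlier
        # prediction is overwritten here and never stored.
        error = self.predict(x) - y
        self.weight -= lr * error * x

model = OnlineModel()
suspect_input = 3.0
incident_output = model.predict(suspect_input)   # output at incident time

# The deployed system keeps learning from a changing data stream...
for _ in range(100):
    x = random.uniform(-1.0, 1.0)
    model.learn(x, 2.0 * x)   # the stream drifts the weight toward 2.0

replay_output = model.predict(suspect_input)     # forensic replay, later
print(incident_output, replay_output)            # 3.0 vs. roughly 6.0
```

Under these assumptions, the only way to recover the incident-time behavior is to have snapshotted the model state at that exact moment, which continuously learning deployments rarely do.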
Tom Temin We're speaking with Greg Touhill, director of the CERT division of the Software Engineering Institute at Carnegie Mellon University. To maybe summarize what you've said, and you can tell me if this is a fair summary: AI vastly balloons your attack surface, in effect.
Greg Touhill Absolutely. You know, as you take a look at the models, the frameworks, the connectivity, the computing, the amount of data and the number of programming languages generally put into play in building out an AI and/or machine learning system, the scope and scale is such that determining whether you're being attacked or you have a defect becomes the acme of skill. We're treating this as part of our applied research activities, and we're looking to take advantage of the 35 years of CERT experience to bring the community together, both the constituency as well as the technical communities, to make AI as safe, assured and trustworthy as it can be.
Tom Temin And now you have the Artificial Intelligence Security Incident Response Team. What are the initial challenges? Are there specific projects it's going to work on? And will you bring in Homeland Security and the Defense Department to collaborate on those particular challenges?
Greg Touhill We've already been working with DHS and with the Defense Department through our sponsors in the research and engineering community. But we've also briefed folks such as General Skinner and the DISA folks who run the DODIN, the Department of Defense Information Network. And you know, for the audience, if you have a vulnerability or you have a concern over an AI system, we've already set up our website at kb.cert.org as part of the normal vulnerability management process. Contact us at the CERT and report what you're seeing. And what we do is bring together the experts, not only from here at Carnegie Mellon, but throughout our vast network of friends in academia, in the research community and in industry. We've got over 3,900 different companies that we do information sharing with as we take a look at this approach. Think of it as the AI Cyber Watch, you know, where we are trying to identify issues and solve them before they become problems.
Tom Temin So of the incidents that you've collected under, you know, safe disclosure, are some of those from federal situations? And do the collected reports of AI-related incidents from industry and government form the basis of the particular problems the team will work on?
Greg Touhill Well, you know, as we take a look at the impact that AI is having not only on national security, but on national prosperity, it's becoming increasingly difficult to segregate between the two. So as we take a look at AI and the application of this technology, we are taking vulnerability reports from government, from industry and from consumers as well. And we have that network in place to coordinate across all of the national security as well as the national economy systems.
Tom Temin Sounds like you really have to operate fast here, because the instances and the use cases of AI seem to roll out almost by the minute. And I would think that especially the federal organizations, or large financial institutions and places like that, would really want to get ahead of the cyber issue before they deploy all of this AI, or they, you know, could have a disastrous catch-up situation down the line.
Greg Touhill You kind of highlighted one of the issues, and that's something that the marketplace is confronting right now. As we take a look at the building of a lot of these AI models, we are seeing some evidence as part of our research that some folks, as they're building out some of these models and frameworks, aren't necessarily taking advantage of the lessons learned from DevSecOps and some of the best practices in software engineering that have been pioneered here. And as we take a look at a lot of the reports that are coming in, we're finding that some of them are self-inflicted wounds from not applying some of those software engineering principles in a race to get to market. As we take a look at whether this is an attack or a defect, all of those things come into play as we do the forensics and the engineering work to find the root of problems, but also a path to a solution.
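As one example of the kind of DevSecOps habit being skipped, here is a minimal sketch in Python (an assumed, generic practice, not a specific SEI recommendation; the pinned digest shown is a placeholder) of verifying a model artifact against a known hash before loading it, so that an integrity failure points toward tampering or corruption rather than a model defect.

```python
import hashlib
from pathlib import Path

# In a real pipeline this pin would come from a signed manifest produced
# at training time; the value here is a placeholder, not a real digest.
EXPECTED_SHA256 = "0" * 64

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the model file's bytes don't match the pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"Model artifact {path} failed integrity check: "
            f"got {digest}, expected {expected_sha256}"
        )

# Example usage (hypothetical file name):
# verify_artifact(Path("model.bin"), EXPECTED_SHA256)
```

A check like this does not prevent an attack, but it gives responders a clean fork in the forensics: a bad hash suggests the artifact changed after build, while a good hash pushes the investigation toward a defect in the model itself.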
Tom Temin And just a final, sort of double question. What will be the output work product of this team, and will it be publicly available? And are you also working with NIST, which is always updating its guidelines and has a special AI series of publications it's been working on as well?
Greg Touhill We always work with NIST. My teammates here across the SEI, regardless of our technical divisions, remain engaged with the standards bodies, not only here nationally but also as contributors in international fora. And that's one of the great things about being at Carnegie Mellon: the fact that we are in fact engaged domestically as well as internationally in identifying best practices, working with standards bodies and trying to find solutions to really tough problems.