The National Geospatial-Intelligence Agency is about, well, geography of course, and data. And now it’s about artificial intelligence. For a progress report, Federal Drive Host Tom Temin checked in with the NGA’s director of data and digital innovation, Mark Munsell.

Mark Munsell For us, the big picture with our agency is all about imagery and geospatial information. We obviously work with thousands and thousands of images a day, and that’s ever increasing. We have new sensors constantly coming on board. Of course, the problem we have is we don’t have more humans to look at and exploit all of those images. So it’s a linear problem for us right now that’s going to turn exponential. We have to employ automation to be able to tackle this problem. Artificial intelligence in particular, and actually a subdomain of it, machine learning, in the domain of computer vision. You might have heard that term before. This is where we have computers, I’ll say, emulate or simulate the cognitive recognition of things on an image that a human would do. And by having the computer do this more and more, and having it do it more and more accurately, we collect more data faster.

Tom Temin So this is really an area in which it does increase efficiency of operations and the ability to create the products that you need. But it’s really also, I think, a crossover in that it will actually enhance mission delivery of the NGA to your federal and [Department of Defense (DoD)] customers.

Mark Munsell Let’s be clear, this is all about increasing mission effectiveness, not decreasing the amount of humans that it takes to do this mission. I joke a little bit about it when we’re asked by oversight, when we’re asked by budgeteers that are essentially looking for efficiencies. This is the great thing about automation, it takes so many people to do it.

Tom Temin Yes, very true. And so what’s the approach? I did an interview some time ago, maybe a year or so ago. One of the challenges NGA had in computer vision is, is that really an actual circle down there on the ground? And if it is a true circle, that is, all the points are equidistant from the center point, then it must be man-made. And therefore, what is it?

Mark Munsell Yeah. So you can imagine, as a national security intelligence community agency, we are very interested in certain objects of interest that we want to track that are maybe indicators or warnings of things the country needs to be aware of. And so what we do is we have humans start the process by labeling images. So we will have a particular object of interest that we want to track and a human will identify examples of that object. And in sort of modern computer vision technology, it might take thousands, tens of thousands of examples of those objects to train a model. After you’ve trained a model with a good algorithm, we then test that model, just like you would any sort of software or any new technology. Test that model against a known set. We kind of judge its quality based on that test, then employ it, put it into operations. And for us, that means we run the models, we run inference on imagery. So we take a certain set of models that are looking for certain kinds of objects over certain targets, run that and it produces detections. We’re talking about, in this case, millions of detections. And we take those detections, again, another human operation here where we sort of sift through those detections and find result sets, or maybe we write code to find things. And then from those detections, we develop insights and write reports.
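The workflow Munsell describes, labeling examples, training a model, testing it against a known set, then running inference and sifting millions of detections, can be illustrated with a small sketch. This is not NGA tooling; the detection records, object IDs, and confidence threshold below are invented for illustration, and only the evaluation step (scoring detections against known labels before a model goes into operations) is shown.

```python
# Minimal sketch of the model-evaluation step described above: a trained
# model's detections over a labeled test scene are scored against ground
# truth before the model is put into operations. All data here is invented.

def evaluate_model(detections, ground_truth, confidence_threshold=0.5):
    """Score detections against known labels; return (precision, recall)."""
    accepted = {d["id"] for d in detections if d["score"] >= confidence_threshold}
    truth = set(ground_truth)
    true_pos = len(accepted & truth)
    precision = true_pos / len(accepted) if accepted else 0.0
    recall = true_pos / len(truth) if truth else 0.0
    return precision, recall

# Hypothetical inference output over a labeled test scene.
detections = [
    {"id": "obj-1", "score": 0.91},
    {"id": "obj-2", "score": 0.42},   # below threshold: dropped
    {"id": "obj-3", "score": 0.77},
    {"id": "obj-9", "score": 0.88},   # false positive: not in ground truth
]
ground_truth = ["obj-1", "obj-2", "obj-3"]

precision, recall = evaluate_model(detections, ground_truth)
```

In this toy case, three detections clear the threshold and two match the known set, so precision and recall both come out to 2/3; a real pipeline would "judge quality based on that test" across many scenes before deployment.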

Tom Temin We’re speaking with Mark Munsell. He’s director of data and digital innovation at the National Geospatial-Intelligence Agency. And a lot of agencies dealing with AI talk about the ethics, making sure that outcomes are fair and equitable, and that gets into the type of training data you use. But you’re not training with face recognition, where there are different races, different ethnicities and so forth, male, female or whatever. You’re looking at things on the ground, seen from space in general. And so what are the types of biases, the types of distortions, that can come into AI in this particular domain that you have to worry about?

Mark Munsell Yeah, that’s a really good point. And it’s a great point, Tom, in terms of, this really isn’t personal what we’re trying to do. You’re right, it’s from space, it’s from low-Earth orbit or from a UAS or UAV. And so for us, it’s a little more along the lines of, our goal is to increase the accuracy of detecting these objects as much as possible. So there are kind of three vectors that we’re looking at here. We’re looking to improve the positive identification of these objects. We’re looking to improve the geolocation of these objects, that’s very important. And we’re looking to do it faster. All three of those things are sort of an enduring need, an enduring capability development cycle that we’re on to make that better. And so when it comes to things like ethics, when it comes to things like Responsible AI, for us, we’re trying to make all of that better. And some people have asked, maybe we should pause, or maybe this AI is too powerful, or maybe we aren’t responsible enough in this effort. I would say, we’re not there yet. I think the federal government, and my agency in particular, is trying to make it better, trying to increase the quality. And therefore, we would never really consider, at this point, pausing what we’re developing, because we’re just at the sort of beginning of making this good.
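Of the three vectors Munsell names, geolocation accuracy is the most concretely measurable: the distance between where a model places an object and where it actually sits. A hedged sketch of that metric, using the standard haversine great-circle formula and invented coordinates:

```python
import math

def geolocation_error_m(pred, truth):
    """Great-circle distance in meters between predicted and true (lat, lon),
    via the haversine formula on a spherical Earth model."""
    r = 6_371_000  # mean Earth radius in meters
    lat1, lon1, lat2, lon2 = map(math.radians, (*pred, *truth))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical detection: the model places an object about 100 m north
# of its surveyed position (coordinates are made up).
error = geolocation_error_m((38.9000, -77.0000), (38.9009, -77.0000))
```

Tracking a number like this across model versions is one plausible way to show the "improve the geolocation" vector getting better over time; identification accuracy and speed would get their own metrics.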

Tom Temin And there is a big human capital side to this, a big knowledge base side to this. And artificial intelligence is ultimately about people using it. And so you’ve launched a certification program the director has announced within NGA. Tell us more about that.

Mark Munsell Yeah, so this is our effort to ensure that developers of the technology and users of the technology are using it responsibly. And for us, again, it’s not necessarily about a bias or maybe a personalization problem. For us, it’s more about: are you using it within the guidelines of the Department of Defense, within the guidelines that have already been established, the law of armed conflict, do no civilian harm? Are we using it in accordance with the American values that have already been established by the Department of Defense and the Intelligence Community to conduct our operations lawfully and ethically?

Tom Temin Yes. So what are some of the challenges there that might come up in this highly technical use of it? For example, if it identifies a farm that you wouldn’t want to bomb. I mean, not that the NGA would make that decision, but you got to feed up the information. This is a farm, this is a factory that might be churning out howitzer shells or something.

Mark Munsell Yeah. So when we certify developers of the technology, we want to ensure that they’re developing it correctly. And we want to ensure that the quality of the models is validated. And so you would fail certification if you produce poor models that are misidentifying and mischaracterizing our objects of interest.

Tom Temin Are there any intellectual property questions or challenges with applying artificial intelligence to imagery data that might have been acquired commercially by NGA?

Mark Munsell That’s a good question for either a contracts lawyer or an intellectual property lawyer. But I’ll say broadly, the things that we protect are our labels. We consider all the labels that we’ve created a national asset, and we would not transfer those outside of the government or to other places. But when it comes to using commercial imagery or other forms of imagery, most of our license agreements allow us to run this kind of analytic on that information. Back to the AI assurance and Responsible AI question, we have an established framework that the White House, the Department of Defense and the IC have already put in place, and guidelines that we follow. And so everything we do conforms to those guidelines.

Tom Temin All right. And by the way, how did you personally come to this? Tell us a little bit about yourself.

Mark Munsell I’ve been a technologist almost all my career. I started with the National Oceanic and Atmospheric Administration. So, of course, the challenge was to find an agency to work for that had an even more difficult name to pronounce, right? I’ve been doing this for over 30 years, and I was the chief technology officer for NGA before this job. I have had several leadership assignments inside the agency, both on the technology side as well as the analytic side. And I have started technology companies, spent some time outside working for some of the cloud vendors, and had the opportunity to come back and help NGA in the artificial intelligence area. I was very happy to come back in and do that.

Tom Temin Yeah, the cloud vendors eat their young. You don’t want to work there too long. But I did have one final question, and this is highly technical. In the fifties and sixties, to do aerial imagery, there were super high resolution photographic cameras with really fine-grain film that were amazing. I think they’re in museums, these cameras. And so you would fly over, and then maybe a month later you’d fly over again: oh, there’s an extra building there that wasn’t there before, there’s a missile silo in Cuba, whatever the case might be. So you could compare two pictures. Now you’re getting continuous drone video by the petabyte and so forth. Is there too much data coming in? And maybe if we went back to a model of, well, a snapshot every two months is good enough, and it’s a heck of a lot easier?

Mark Munsell Yeah, Tom, I think we’re together as far as being Luddites, wanting to go back.

Tom Temin And I still have my Hasselblad, so.

Mark Munsell Is there too much data? Guys are already saying, yeah, man, can you slow down? We don’t need all this data. The reality is, it’s not stopping.

Copyright © 2023 Federal News Network. All rights reserved.




