Artificial intelligence appears to have firmly established itself in the defence industry. At Eurosatory 2024 in Paris last month, AI-enabled solutions were everywhere, offered by start-ups and major defence companies alike. This widespread adoption, however, brings its own set of challenges, not least the question of trust.

On June 19, Safran announced the launch of its Advanced Cognitive Engine (ACE) system. The system aims to integrate AI capabilities into all Safran Electronics & Defence products, enhancing situational awareness, providing decision support and reducing cognitive load for field forces.

Decision support is undoubtedly one of the primary advantages of AI at this point in its development and integration into the defence and naval worlds. A major driver is the widening gap between the volume of sensor data collected in operational theatres, which is growing rapidly, and the armed forces’ ability to recruit and retain the personnel needed to process it.


“Autonomous systems are changing how data is collected, but they are also posing significant HR issues when, at the moment, each drone or robot requires one operator,” noted Arnaud Valli, head of public affairs at Command AI, speaking to Shephard during Eurosatory 2024.

“Considering the difficulties all armed forces services face in recruiting, training and retaining personnel, AI can be a real game-changer in reducing the cognitive load all this data creates for those personnel.”

Against this background, Command AI, a French start-up, has developed two AI-powered systems to support mission planning and command and to enhance operational readiness by facilitating training.

Multiple AI modules can be integrated into an off-the-shelf computer to generate multiple scenarios and candidate solutions from the collected and processed data. Valli and company founder Loïc Mougeolle have been working to get the two modules ready by November, when they are due to be tested under a call for interest issued by France’s Agence de l’innovation de défense.

Safran is looking to integrate AI capability into all its defence products under the ACE (Advanced Cognitive Engine) banner. (Image: Safran)

An additional module focused on wargaming is also being developed and Command AI’s systems are available for both land and naval forces.

Similarly, MBDA briefed on the integration of the Ground Warden – a new AI-enabled decision-aid kit – into its Akeron family of weapon systems.

“The aim is to be able to use these systems beyond line of sight,” Matthieu Krouri, head of business development for battlefield systems at the company, told reporters at the show.

Connected to a firing post, Ground Warden uses intelligence about targets to guide Akeron operators, improving their chances of eliminating fixed threats.

“AI is used to continuously track the threats in all weather conditions,” Krouri explained.

Ground Warden can also assist operators in successfully eliminating moving targets when used in combination with an autonomous system.

Akeron MP, meanwhile, is being enhanced with a naval mode: an algorithm designed to improve seeker stabilisation and image processing.

The issue of trust in AI-enabled naval platforms

While AI is critical in enabling autonomy and reducing operators’ cognitive load, armed forces’ ability to fully trust an AI-enabled system remains a significant question – and quest.

MBDA was very open on the topic, explaining that AI is currently used to support decision-making, but that full autonomy remains some way off because many questions are still unanswered: who will provide the data for AI training, who will carry out that training, and how it will be certified.

Thales, for example, is developing contracts that let operators define the rules and limits of their systems’ autonomy and deploy them with confidence; this is the approach taken by the company’s SwarmMaster solution.
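
To make the idea concrete, here is a minimal, hypothetical sketch of an operator-defined autonomy contract. All of the names, fields and thresholds below are invented for illustration; none of them come from Thales or reflect how SwarmMaster actually works.

```python
# Hypothetical sketch: operator-defined rules and limits that an autonomous
# system must satisfy before acting. Every name here is invented for
# illustration; this is not Thales's or SwarmMaster's actual design.
from dataclasses import dataclass


@dataclass(frozen=True)
class AutonomyContract:
    max_speed_kts: float          # hard platform limit set by the operator
    geofence_radius_km: float     # permitted operating area around launch point
    lethal_action_allowed: bool   # if False, any engagement needs a human


def action_permitted(contract: AutonomyContract,
                     speed_kts: float,
                     distance_km: float,
                     is_lethal: bool) -> bool:
    """Gate a proposed autonomous action against the operator's contract.
    Anything outside the contract is refused and escalated to a human."""
    if speed_kts > contract.max_speed_kts:
        return False
    if distance_km > contract.geofence_radius_km:
        return False
    if is_lethal and not contract.lethal_action_allowed:
        return False
    return True


contract = AutonomyContract(max_speed_kts=40.0,
                            geofence_radius_km=10.0,
                            lethal_action_allowed=False)
print(action_permitted(contract, speed_kts=35, distance_km=8, is_lethal=False))  # True
print(action_permitted(contract, speed_kts=35, distance_km=8, is_lethal=True))   # False
```

The point of such a contract is that the limits are explicit, machine-checkable and set by the operator rather than baked into the algorithm, which is what allows a system to be deployed with confidence.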

At Eurosatory, Shephard also met with Numalis, a French start-up building tools for the development of reliable and explainable AI. Its Saimple offering, for instance, is designed to let AI developers validate algorithms in line with the ISO/IEC 24029 series of standards.

“The concept behind Saimple is to validate AI algorithms, at all stages of their development process, checking whether they continue to perform correctly despite potential disruptions,” Jacques Mojsilovic, chief communications officer at Numalis, told Shephard. These disruptions could be adverse weather conditions or anything else potentially affecting how an AI-enabled system performs.

To do this, Numalis uses two metrics: dominance, which explores the AI’s space of possible behaviours to determine when a decision would no longer be reliable; and relevance, which uses visual aids to illustrate which object characteristics the AI relied on to make its classification.
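
The internals of Saimple are not public, but the intuition behind the two metrics can be sketched on a toy example. The snippet below (a minimal illustration in Python; the linear “model” and all names are invented, and this is in no way Numalis’s actual method) certifies a dominance-style property for a linear classifier, i.e. that its decision cannot flip for any input within a bounded perturbation, and computes a crude relevance map of per-feature contributions.

```python
# Illustrative sketch only, NOT Saimple. For a toy linear classifier it shows:
#  - a "dominance"-style check: does the predicted class stay on top for every
#    input inside a bounded perturbation set (e.g. sensor noise, bad weather)?
#  - a "relevance"-style map: which input features drove the classification?
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: scores = W @ x + b, with 3 classes and 8 features.
W = rng.normal(size=(3, 8))
b = rng.normal(size=3)


def dominance_check(x: np.ndarray, eps: float) -> bool:
    """Certify that the predicted class stays on top for every perturbed
    input x' with ||x' - x||_inf <= eps. For a linear model the worst-case
    margin against each rival class is exact:
    (w_pred - w_c) @ x + (b_pred - b_c) - eps * ||w_pred - w_c||_1."""
    scores = W @ x + b
    pred = int(np.argmax(scores))
    for c in range(len(scores)):
        if c == pred:
            continue
        dw = W[pred] - W[c]
        worst_margin = dw @ x + (b[pred] - b[c]) - eps * np.abs(dw).sum()
        if worst_margin <= 0:
            return False  # some allowed perturbation could flip the decision
    return True


def relevance_map(x: np.ndarray) -> np.ndarray:
    """Per-feature contribution to the winning score: a crude stand-in for
    the visual relevance aids described above."""
    pred = int(np.argmax(W @ x + b))
    return W[pred] * x


x = rng.normal(size=8)
for eps in (0.01, 0.1, 0.5):
    print(f"eps={eps}: decision certified stable -> {dominance_check(x, eps)}")
print("feature relevance:", np.round(relevance_map(x), 2))
```

Real tools face a much harder version of this problem, since neural networks are nonlinear and the perturbation sets are high-dimensional, but the underlying question is the same: within what bounds can the system’s decision be guaranteed?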

“Solutions like Saimple aim to be able to shorten the development and validation cycle for AI algorithms, offering the possibility to check their robustness and explainability at all stages of development,” Numalis CEO Arnault Ioualalen told Shephard. Ultimately, it supports AI certification.

Numalis also works with companies such as Safran and MBDA to develop more robust and explainable AI.

Multiple other initiatives across the defence industry are aimed at developing transparent, explainable and trustworthy algorithms. Although they are progressing fast, trusted autonomy remains a medium-term rather than a short-term prospect; until that point is reached, AI will remain a decision aid, with a human in the loop, across all armed services.

This analysis article originally appeared in June’s Decisive Edge Naval Newsletter. To receive regular updates from Alix Valenti and our team of defence experts, visit our Decisive Edge sign-up page.
