The Alan Turing Institute and the Centre for Emerging Technology and Security have published a new Research Report exploring the use of AI in strategic decision-making on national security matters. The report concludes that AI is an invaluable tool for supporting senior decision-makers, government agencies and intelligence bodies, with the potential to transform analysis. However, it also emphasises the importance of safe, secure and responsible use to avoid exacerbating the uncertainties inherent in the analysis and assessment of security intelligence. The report was commissioned by the Joint Intelligence Organisation (JIO) and Government Communications Headquarters (GCHQ).

Key findings from the research include:

  • AI is helpful for all-source intelligence analysts as it processes large amounts of data and identifies trends. Not using AI risks undermining the value of intelligence assessments.
  • AI increases uncertainty in intelligence assessment. Its probabilistic calculations may be inaccurate, and its opaque nature makes conclusions difficult to understand.
  • AI systems used in intelligence analysis need careful design, continuous monitoring, and regular adjustment to mitigate the risk of bias and errors.
  • The intelligence function is responsible for evaluating the technical metrics of AI methods, and intelligence analysts must account for their limitations and uncertainties.
  • National security decision-makers require high assurance regarding AI system performance and security.
  • Decision-makers trust AI’s ability to identify events more than to determine causality, and they prefer AI insights supported by non-AI intelligence sources.
  • Decision-makers need a baseline understanding of AI to make informed decisions based on AI-enriched intelligence.

The report provides six recommendations for effectively communicating AI-enriched intelligence to decision-makers. These include developing guidance to communicate uncertainties, using a layered approach when presenting technical information, providing training for new and existing analysts, building trust in assessments informed by AI-enriched intelligence, offering short expert briefings before high-stakes decision-making sessions, and developing a formal accreditation program for AI systems used in intelligence analysis.
