AI and the Data Dilemma: Trust, Risk, and Opportunity in Security Intelligence

Artificial Intelligence (AI) is reshaping the landscape of security intelligence, offering unprecedented speed, scale, and personalisation for real-time threat detection and predictive risk modelling. But its effectiveness depends on the quality and traceability of the data from which it draws. In a world where misinformation proliferates and data sources are increasingly complex, the ability to trust and verify the foundation of AI-driven insights is more critical than ever. 

It is almost impossible not to interact with some form of AI in the intelligence process; even this article has been run through a spell checker that is now driven by AI. Here we focus on the more transformative Large Language Model (LLM) tools, which present opportunities but also risks arising from misuse and poor data foundations. Trusted intelligence sources and strong governance frameworks are essential to their responsible deployment.

The Expanding Role of AI in Security Intelligence

AI is now embedded across the security intelligence lifecycle. It automates the collection of open-source intelligence (OSINT), detects anomalies in cyber and physical environments, and generates tailored risk guidance for individuals and organisations. Natural language processing enables sentiment and intent analysis across multiple languages and platforms, while machine learning models can forecast emerging threats based on historical patterns. 
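To make the anomaly detection step concrete, the short sketch below is a purely illustrative example, not drawn from any particular vendor's system, of flagging unusual spikes in an hourly incident feed using a rolling z-score; the window size, threshold, and sample data are arbitrary assumptions chosen for demonstration.

```python
from collections import deque
from statistics import mean, stdev

def flag_anomalies(counts, window=24, threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline.

    counts: sequence of hourly incident counts (illustrative data).
    window: number of trailing observations used as the baseline (assumed).
    threshold: z-score above which a reading is flagged (assumed).
    """
    baseline = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(counts):
        if len(baseline) >= 2:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and (value - mu) / sigma > threshold:
                anomalies.append((i, value))
        baseline.append(value)
    return anomalies

# Example: a sudden spike against an otherwise quiet baseline is flagged.
hourly_counts = [2, 3, 2, 4, 3, 2, 3, 2, 3, 2, 41, 3, 2]
print(flag_anomalies(hourly_counts))  # -> [(10, 41)]
```

Real systems use far richer models, but the principle is the same: the quality of the baseline data determines whether the flagged "anomaly" is a genuine emerging threat or just noise.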

Yet, as AI systems grow more sophisticated, they also become more dependent on the integrity of the data they consume. Without careful curation and oversight, these systems risk amplifying noise, bias, or even disinformation. Indeed, this is one of the drivers behind International SOS’ new training course on how to navigate a polluted information environment.  

Opportunities: What AI Can Do Well

AI offers several compelling advantages for security intelligence teams: 

  • Scalable intelligence gathering: AI tools can scrape, classify, and analyse open-source content across multiple languages and platforms, dramatically expanding the reach of intelligence operations. 
  • Accelerated threat detection: AI can process vast data streams in real time, identifying anomalies and emerging risks faster than human analysts alone. This enables earlier intervention and more agile response strategies. 
  • Tailored risk forecasting: By analysing user profiles, travel plans, and operational contexts, AI can generate personalised insights that support proactive risk mitigation. 

However, these capabilities are only as reliable as the data that fuels them, and the governance that ensures its traceability. 

Risks: When Data Can’t Be Trusted

Despite its promise, AI introduces new risks, especially when built on unverified or biased data. 

  • Bias and blind spots: AI models trained on incomplete or skewed datasets can misclassify threats, overlook key indicators, or reinforce systemic biases. Many AI tools also have knowledge cutoffs, meaning they are trained only on data up to a certain date, which increases the risk that what the tool produces may be out of date. This can lead to flawed assessments and reputational harm, particularly in the domain of real-time threat detection and analysis. 
  • Disinformation and source pollution: Open-source data is vulnerable to manipulation. Without rigorous source validation, AI may inadvertently amplify false narratives, deepfakes, or coordinated influence campaigns. 

Hallucination or Fever Dream?

Examples of AI hallucinations abound and are often shared widely for amusement, but these hallucinations highlight some of the risks of relying on AI tools. On one occasion, Google AI advised that "Geologists recommend that people eat one rock per day", possibly having found the information on a popular satirical website but failing to identify the context. While this is obviously nonsensical, it is not inconceivable that other hallucinations could be harder to spot.

AI tools also do not necessarily have all the context required to identify which facts are the most important, even when true. For example, in the wake of the assassination attempt on then-presidential hopeful Donald Trump in 2024, the AI tool in X (formerly Twitter) reported, accurately, "Home Alone 2 actor shot at Trump rally".


  • Opaque decision-making: Many AI systems operate as ‘black boxes’, making it difficult to understand how conclusions are reached. This lack of transparency can erode trust among users and stakeholders. AI systems also tend to be designed to appear confident, and do not communicate uncertainty well, which can lead to undue certainty being assigned to their analysis and conclusions. Analysts must still use expert judgement to evaluate outputs and sense-check the level of certainty with which they are presented. Further prompting may be needed to understand how much confidence to place in the information from an AI tool, and analysts may conclude that they need alternative tools and techniques to reinforce the analysis.
  • Overreliance on automation: AI should augment, not replace, human judgement. Without human oversight, automated systems may miss the nuance, context, or cultural sensitivities that are vital to accurate analysis. The convenience of AI carries a concomitant risk of outsourcing judgement to a third party, but accountability for the cogency of the intelligence produced remains with the analyst. An organisation should be confident that, in the absence of AI, its analysts can fall back on their own skillsets to produce equally coherent intelligence, even if it takes longer than it would with the benefit of the latest tools.

A Trusted Alternative: International SOS' Human-Curated AI

In response to these challenges, International SOS is launching a new AI capability within its risk management platform.

This system integrates cutting-edge generative AI with the company's proprietary knowledge base, built from decades of verified intelligence and on-the-ground engagement. Unlike many AI tools that rely on large datasets containing unvetted open-source data, this AI draws exclusively from International SOS' proprietary content that has been researched, verified and analysed by expert security professionals.
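International SOS has not published implementation details, so purely as an illustration of the general principle of drawing only on vetted content, the sketch below assumes a simple document store with a verification flag and a naive keyword-overlap retriever: anything that has not passed analyst review is filtered out before it can be used as context for drafting guidance. All names and data here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    doc_id: str
    text: str
    verified_by_analyst: bool   # set only after expert review (assumed workflow)

def retrieve_context(query: str, store: list[KnowledgeItem], k: int = 3) -> list[KnowledgeItem]:
    """Return up to k verified items relevant to the query.

    Relevance here is naive keyword overlap, purely for illustration;
    a production system would use proper retrieval, but the key point
    is the hard filter: unverified content never reaches the model.
    """
    terms = set(query.lower().split())
    vetted = [item for item in store if item.verified_by_analyst]
    scored = sorted(vetted,
                    key=lambda item: len(terms & set(item.text.lower().split())),
                    reverse=True)
    return scored[:k]

# Only the verified item can ever be used as context for drafting guidance.
store = [
    KnowledgeItem("rpt-001", "Verified advisory on road travel risk in region X.", True),
    KnowledgeItem("web-999", "Unvetted social media claim about region X.", False),
]
print([item.doc_id for item in retrieve_context("road travel risk region X", store)])
```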

Clients will be able to receive custom-drafted travel guidance and actionable insights, tailored precisely to their individual needs. Every recommendation is grounded in trusted intelligence, ensuring relevance, accuracy, and contextual depth. It represents an evolution towards the modern, user-friendly and technologically advanced user experience that clients demand, while still being based on analysis from expert security professionals, ground-truthed by an unrivalled boots-on-the-ground capability.

This approach combines the efficiency of AI with the credibility of human expertise, helping organisations safeguard their people and operations in today's complex risk landscape. International SOS' new AI capability demonstrates how speed, scale, and trust can coexist, but only by keeping the human in the loop and governing the system with a mature, robust quality-assurance framework.


Governance: The Backbone of Responsible AI

A mature governance framework is essential to ensure AI outputs are traceable, accountable, and verifiable. It must sit across the entire intelligence process, including not only the human-driven analysis but also the design, construction, and management of the AI tools themselves. 

The team building and training an AI system must have a deep understanding of how the tool will be used, who it will serve, and why high-quality outputs are critical, especially when those outputs inform decisions about personal safety, travel risk, or crisis response. 

This means embedding user-focused design principles, operational awareness, and domain expertise into every stage of development. It also means ensuring that the AI behaves predictably, transparently, and in alignment with the organisation’s values and risk tolerance. 

Once deployed, every piece of analysis or advice generated by AI should be auditable. It should be possible to trace any piece of output from the tool back to its source data and to validate it for accuracy, relevance, and cogency. This traceability builds trust, supports effective decision-making, and enables continuous improvement of AI models. 
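One way to picture what "auditable" can mean in practice is sketched below. This is an assumption about design rather than a description of any specific system: each generated insight carries the identifiers of the curated documents it drew on, the model configuration that produced it, and the reviewing analyst, so any output can be traced back to its sources and validated later. All identifiers are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedInsight:
    """A generated insight plus the metadata needed to trace and validate it."""
    text: str                       # the advice or analysis shown to the client
    source_ids: list[str]           # identifiers of the curated documents used
    model_version: str              # which model/prompt configuration produced it
    generated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: str | None = None  # analyst who validated the output, if any

    def audit_record(self) -> dict:
        """Flatten the insight into a log-friendly record for later review."""
        return {
            "text": self.text,
            "sources": self.source_ids,
            "model": self.model_version,
            "generated_at": self.generated_at.isoformat(),
            "reviewed_by": self.reviewed_by,
        }

insight = AuditedInsight(
    text="Avoid overland travel on route Y until further notice.",
    source_ids=["rpt-001", "rpt-017"],
    model_version="kb-llm-v1",
    reviewed_by="analyst-42",
)
print(insight.audit_record())
```

However it is implemented, the essential property is that every output can be tied to named, verifiable inputs; that is what turns a plausible-sounding answer into accountable intelligence.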

International SOS’ AI capability is designed with this principle at its core. Every insight is backed by a transparent chain of human-curated intelligence, and the system itself has been developed by teams who understand the real-world implications of the information it provides. This governance-first approach sets a benchmark for responsible innovation in the security intelligence space. 

The Way Forward

AI is a powerful tool, but only when built on a foundation of trusted data and effective governance. As the security landscape evolves, organisations must prioritise validation, oversight, and trust when sourcing the critical intelligence needed to keep their people safe, maintain business continuity and safeguard their reputation. Organisations seeking an intelligence partner must be confident in their supplier's verification processes. A new AI offering is an attractive proposition for gaining access to a timely intelligence product, but there must be transparency about what the sources are, how data and analysis are obtained and assessed, and what safeguards are in place to ensure high quality.

Leveraging AI in risk intelligence and analysis offers opportunities for faster data processing, enhanced pattern recognition, and predictive insights that can improve decision making and operational resilience. However, it also introduces risks such as algorithmic bias, over-reliance on automated outputs, and vulnerabilities related to data quality, security, and model transparency.