
The Real Role of AI in OSINT Workflows

Jacob H

The integration of AI into intelligence analysis or investigations isn't straightforward.


As someone who has spent many years training intelligence analysts and working across different intelligence functions, I've learned that the key to effective intelligence work isn't jumping on the latest technological bandwagon – it's understanding when and how to use new tools appropriately.


This understanding forms the basis of our new course, Leveraging Generative AI for OSINT, where we take a measured, practical approach to AI integration.

Whether you're running due diligence checks, investigating financial anomalies, analysing information, or countering insider threats, we all face similar challenges when it comes to AI integration.


Understanding the Limitations and Concerns


The fundamental role of intelligence and investigative work is to provide accurate information and assessments that inform decision-making, often with significant real-world consequences. Although AI technology is developing at what feels like breakneck speed, it's not perfect. Looking at AI's current limitations, it:


  • Cannot understand contextual human behaviour and underlying motivations.

  • Is limited by its training data and unable to reliably handle novel situations outside that data.

  • Cannot independently verify the accuracy of its creative outputs or novel connections.

  • Introduces systematic biases from training data that are difficult to detect and correct.

  • Cannot assess the true reliability of its own conclusions.

  • Presents conclusions with confidence that doesn't reflect real-world reliability.

  • Cannot maintain operational security awareness or understand the sensitivity of information in context.


However, AI systems can serve supporting roles in OSINT workflows, from data organisation to pattern identification, and in many fields it already outperforms human capacity. Applied correctly, AI is simply incredible.


But here's the critical concern: even in a supporting role, there's a serious risk that AI involvement could:


  • Create false confidence in conclusions.

  • Introduce hidden biases or blind spots.

  • Lead to over-reliance on historical patterns.

  • Subtly shape analyst thinking in ways we don't fully understand.

  • Lead to a decline in cognitive ability if we rely on AI too much.


Given the stakes involved in intelligence analysis and investigations and the current limitations of AI systems, one might conclude that AI has no place in intelligence work at all. The risks seem to outweigh the potential benefits, particularly given how difficult it would be to identify AI-introduced errors in the analytical process.


This is a field where getting it wrong has serious consequences, and human judgment, experience, and genuine understanding remain essential.


Not how you expected this blog to go, just as we launch our new course?


But this presents us with a practical dilemma: AI tools are becoming ubiquitous in OSINT. Whether we like it or not, they're being integrated into workflows, used by our colleagues, and leveraged by our adversaries. The real question isn't whether to use AI, but how to use it responsibly while preserving integrity.


It was this very question that led us to develop our new course.


Rather than simply embracing or rejecting AI wholesale, we wanted to create a framework for thoughtful integration that acknowledges both the risks and the opportunities.


Finding the Balance


The key is understanding its appropriate role. Think of AI as a junior team member – one with exceptional skills in certain areas but significant limitations in others. It's like that new analyst who's grown up immersed in internet culture and technology - they might spot trends and connections in social media that a 20-year veteran might miss, but that veteran's experience is crucial in turning those insights into actionable intelligence. The junior's knowledge is valuable, but they need direction and context that only comes from years of operational experience. Similarly, AI can open new avenues of investigation and spot novel patterns, but it should never be the final arbiter of assessment or replace human judgment.


The integration of AI raises complex questions that deserve careful consideration: how do we prevent AI tools from inadvertently introducing bias? When does AI support enhance our work, and when does it create dangerous blind spots? How do we maintain analytical rigour while leveraging AI's capabilities?


AI hit the mainstream in 2022, but even before it featured as prolifically in daily conversation as it does now, OSINT Combine has been thinking about and applying ideas on how AI systems can be used. We have put them into practice and encouraged the use of AI across all facets of work and life – essentially to see what works and what doesn't. Leveraging Generative AI for OSINT is the product of our experiments and deep dives into the application of AI to different problems and datasets; we explore different approaches to integrating AI into workflows while maintaining the integrity of investigative analysis.


Let's take a concrete example of appropriate AI integration. When analysing images for location information, a well-crafted prompt to GPT-4's vision capabilities can serve as an initial filter and suggestion engine, as we covered in a previous blog.


Flowchart showing AI-human integration for image location analysis with AI serving as a preliminary filter, followed by human cross-referencing, verification, and context addition for final assessment. This flowchart was generated by Claude 3.5 Sonnet (Anthropic).
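The workflow in the flowchart above can be reduced to a simple gate: the AI proposes candidate locations, and nothing reaches the final assessment until a human analyst has cross-referenced and verified it. The sketch below is illustrative only – the function names, data shapes, and confidence threshold are assumptions, not part of any real tool.

```python
from dataclasses import dataclass

@dataclass
class LocationCandidate:
    """A location suggested by an AI vision model for an image."""
    place: str
    ai_confidence: float          # model's self-reported confidence (0-1)
    human_verified: bool = False  # set True only after analyst cross-referencing
    analyst_notes: str = ""

def triage(candidates, min_confidence=0.3):
    """AI as a preliminary filter: drop low-confidence noise and
    order the rest for the analyst to review first."""
    kept = [c for c in candidates if c.ai_confidence >= min_confidence]
    return sorted(kept, key=lambda c: c.ai_confidence, reverse=True)

def final_assessment(candidates):
    """Only human-verified candidates make the final assessment --
    AI confidence alone is never sufficient."""
    return [c for c in candidates if c.human_verified]

# Example: three AI suggestions for one image
suggestions = [
    LocationCandidate("Fremantle, Australia", 0.82),
    LocationCandidate("Plymouth, UK", 0.41),
    LocationCandidate("generic harbour", 0.12),
]
queue = triage(suggestions)           # analyst reviews two candidates; noise dropped
queue[0].human_verified = True        # analyst confirms via street-level imagery
queue[0].analyst_notes = "Matched ferry terminal signage"
assessment = final_assessment(queue)  # only the verified candidate survives
```

The design point is that verification is a separate, human-owned step: the AI's confidence score orders the review queue but can never promote a candidate into the assessment on its own.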

Leveraging Generative AI for OSINT - Course Learning Outcomes


So, what do we want you to achieve on the course? The learning outcomes are:


  • Explain the fundamental mechanisms of generative AI and large language models, including their components, capabilities, and inherent uncertainties.

  • Evaluate how AI can enhance OSINT operations while understanding its fundamental limitations, risks, and the importance of maintaining human analytical skills.

  • Apply effective prompt engineering techniques across multiple AI platforms.

  • Design AI-enhanced workflows for OSINT tasks throughout the intelligence cycle – planning, collection, processing, analysis, and reporting – while maintaining operational security.

  • Implement a structured framework for evaluating AI outputs in intelligence work, focusing on reliability, accuracy, consistency, and context.

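To make the last outcome concrete, here is a minimal sketch of what a structured evaluation of an AI output could look like, scoring the four dimensions named above. The 1–5 scale, the threshold, and the weakest-dimension rule are illustrative assumptions, not the course's actual framework.

```python
DIMENSIONS = ("reliability", "accuracy", "consistency", "context")

def evaluate_output(scores, threshold=3):
    """Each dimension is scored 1 (poor) to 5 (strong) by the analyst.
    An output is usable only if every dimension meets the threshold --
    a single weak dimension (e.g. unverifiable sourcing) fails the whole output."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    weakest = min(DIMENSIONS, key=lambda d: scores[d])
    return {"usable": scores[weakest] >= threshold,
            "weakest_dimension": weakest}

# Example: a fluent, confident answer that cites no verifiable sources
verdict = evaluate_output(
    {"reliability": 2, "accuracy": 4, "consistency": 4, "context": 3}
)
# verdict -> {"usable": False, "weakest_dimension": "reliability"}
```

Requiring every dimension to pass, rather than averaging, reflects the point made earlier in this post: an AI output that presents conclusions confidently can still be unreliable, and fluency in one dimension must not paper over weakness in another.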

Moving Forward with Purpose


The future of intelligence analysis and investigations isn't about choosing between human analysts and AI – it's about understanding how to leverage AI's capabilities while preserving the critical thinking and judgment that only humans can provide.


Through Leveraging Generative AI for OSINT, we're teaching OSINT practitioners how to work at the AI-OSINT edge: understanding both the potential and the pitfalls, and making smart decisions about tool integration without falling off the cliff into over-reliance or misuse.


This is the balance we strive for – practical enough to be immediately useful yet grounded in a deep understanding of AI's limitations. Our adversaries are already using AI, so it's imperative that we not only keep pace but stay ahead – not by rushing to adopt every new AI tool, but by developing smart, secure, and effective ways to integrate AI into our workflows.
