
OSINT in 2025: The Changing Landscape of Social Media and Technology

Jemma Ward

Over the past year, we’ve seen vast changes in the online environment – both in the social media space, and the growing availability of generative AI. In this blog, we’ll look at some of those trends and draw out implications for OSINT practitioners, security and intelligence professionals, and everyday internet users. We’ll also discuss some of the ongoing challenges for data protection and verification.


Generative AI and OSINT


We’ve published several blogs on the applications of generative AI to open-source intelligence work, but each week seems to bring further announcements of new technologies and developments in the space. With the announcement of the Stargate Project – a $US500 billion venture to deliver groundbreaking AI infrastructure for the United States – on 21 January, and the more recent arrival of DeepSeek on the AI chatbot scene, it doesn’t seem likely that we’ll see a slowdown in the AI space anytime soon!


What is Stargate?


The Stargate Project is a venture between a number of key technology players (including OpenAI, Oracle, Microsoft, and NVIDIA) to invest $US500 billion to build AI infrastructure in the United States. It was announced on OpenAI’s website on 21 January 2025, and promises to scale generative AI, while also protecting US national security. This one is a ‘wait and see’ for the OSINT space – the implications for open-source research and the information environment remain to be seen, but Stargate, along with the growth of AI capability worldwide, promises to transform how we engage with online content.


What is DeepSeek?


DeepSeek, a Chinese AI company, has introduced DeepSeek-R1, an open-source large language model that marks a significant shift in the AI landscape. The model is freely accessible through multiple channels: directly via their web interface at chat.deepseek.com (which can be used with any email, including privacy-focused services like ProtonMail), or through third-party environments such as AnythingLLM.

So, why is the arrival of DeepSeek making such a splash? Part of the reason is that DeepSeek seemed, at first, to signal the age of ‘cheap’ (i.e. relatively cheap) AI infrastructure, apparently costing just over $US5 million to train – however, this has been contextualised in recent reporting, and the astonishingly low number touted in news headlines is unlikely to be accurate. Nonetheless, the entry of a China-based AI giant onto the playing field has caused a significant stir. While China's AI advancements aren't surprising, given their prowess in technology spaces, the global reaction to the release of DeepSeek-R1 suggests that, perhaps, Western audiences have a limited understanding of China's capabilities.


But is it any good? Well, it depends on what your task is. Like most AI chatbots, DeepSeek presents as benign and amiable – a helpful assistant seeking to solve your problem. And it does deliver on this, although from our testing of R1 so far, it certainly doesn’t outperform established platforms like ChatGPT-4 or Claude. Have a question about popular social media platforms, or about how to create a document template? Great! It will probably be quite useful. And, ultimately, having more tooling options is always a win!


It’s no secret, however, that DeepSeek’s chatbot is disinclined to provide the full story on certain issues. It’s now a running joke in some corners of the internet that DeepSeek shies away from delving too deeply into controversial China-related issues. Want to find out what occurred on 11 November 1918? No problem. How about 5 March 1946, the date of Churchill’s famous ‘Iron Curtain’ speech? DeepSeek gives us a nice little summary. However, if we ask about 4 June 1989, it stutters for a moment (sometimes even beginning to generate a relevant answer) before presenting us with an apology:

[Image] DeepSeek’s response after asking a controversial question: asked ‘What events occurred on June 4, 1989?’, the model apologises instead of answering.

There are plenty of other censored topics, depending on the prompt used – China’s role in Cold War proxy wars, for example. However, DeepSeek’s chatbot will also toe the party line with gusto at times, particularly when prompts are straightforward and innocuous. Note the use of ‘we’, and the subjective nature of the response:

[Image] DeepSeek responding with the party line (and some gusto!): a One-China policy statement opposing Taiwan independence and emphasising peace and stability.

However, it’s worth noting that DeepSeek isn’t the only platform to subtly (or sometimes not so subtly) insert ideology into responses. Compare the results of an identical query put to DeepSeek and then to ChatGPT-4 below:

[Image] DeepSeek response on 10-Feb-2025, listing popular social media platforms in China (WeChat, Weibo, Douyin).
[Image] ChatGPT-4 response on 10-Feb-2025, noting the effect of government regulations and describing WeChat as a ‘super app’ for messaging and more.

I’d note that, in the above example, there’s nothing particularly sinister about ChatGPT-4’s response – its verbosity simply means it provides us with more information, and the comment on China’s social media restrictions is entirely valid. However, it still demonstrates how AI responses can lean into particular world views that, in turn, influence our understanding of topics.


DeepSeek Security Concerns


Security concerns have been raised regarding the widespread use of DeepSeek – this probably comes as no surprise to OSINT practitioners, particularly those working in national security fields. DeepSeek products have since been banned from Australian federal government devices – but what, exactly, is the cause for concern?


There are some obvious ones – the collection and storage of personal and device data from users (a concern that extends to all technology platforms) is a key issue. It is important to implement barriers that prevent those with access to sensitive information from either accidentally or deliberately entering it into web-based platforms, particularly when it can be tied to personal details.


Another concern is data aggregation. Data aggregation is, and should be, a consideration for anyone working in the security and intelligence space. Using generative AI on corporate networks introduces new risks for organisations and agencies working in national security. Consider this example:

An intelligence analyst conducting collection on an entity of interest uses a web-based AI chatbot to gather context about the location, university, and previous organisations linked to the EOI. They do not enter any sensitive mission-related information, and they’re careful not to enter the EOI’s name or title. Meanwhile, in another part of the building, on the shared corporate network, an analyst reading a previous report on the same EOI is curious as to whether there has been any open-source reporting on the entity. They ask a simple question using only publicly available information – ‘Who is <entity of interest>?’. Without either analyst being aware, they have potentially linked their intelligence collection activity to an attributable corporate network.

This is a very simplistic example – but imagine it at scale, when thousands of queries are being performed each day (if not hour!). The aggregation of data that allows an actor to ‘put two and two together’ and make an educated guess about the intelligence priorities of an organisation is a real and present risk, particularly when we don’t know exactly how and where our data is stored.
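To make the aggregation risk concrete, here’s a minimal, purely illustrative sketch in Python. The logs, networks, and query strings are all invented – it simply shows how a provider (or anyone with access to its data) could group prompts by originating network and build a picture of an organisation’s interests that no single query reveals:

```python
from collections import defaultdict

# Hypothetical query logs as a provider might retain them: each record
# holds the originating network and the text of the prompt. Every value
# here is made up for illustration.
logs = [
    {"network": "203.0.113.0/24", "query": "universities in Springfield"},
    {"network": "203.0.113.0/24", "query": "companies linked to Acme Corp"},
    {"network": "203.0.113.0/24", "query": "Who is Jane Citizen?"},
    {"network": "198.51.100.0/24", "query": "best pasta recipes"},
]

def queries_by_network(records):
    """Aggregate prompt text by originating network."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record["network"]].append(record["query"])
    return dict(grouped)

grouped = queries_by_network(logs)
# Seen together, the three queries from 203.0.113.0/24 sketch out an
# intelligence interest that no individual query gives away on its own.
print(grouped["203.0.113.0/24"])
```

Three careful analysts, each asking one harmless question from the same attributable network, still add up to a pattern.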


Most (good) OSINT practitioners keep OPSEC and attribution at the front of their minds when conducting collection activities – but, increasingly, we need to worry not just about our own actions, but also about those of our colleagues and stakeholders.


Changes in the Social Media Landscape Impacting OSINT


In one recent blog, we examined the growth in popularity of Bluesky as a micro-blogging alternative to X. This is just one of many changes and trends in the social media landscape, and it’s been a tumultuous few months for users of popular social media. We’ve also seen an increasingly politicised social media landscape, and this, in turn, is likely to have ongoing implications for disinformation researchers and fact checkers.


TikTok


The high-profile TikTok ban in the United States meant that, for approximately twelve hours, TikTok was blocked on devices in the US (the block also affected some users outside the US, including in Australia!). On 18 January 2025, TikTok users in the US received the following message in the application, name-dropping the US President as a possible saviour of the platform:

[Image] TikTok going down: the in-app message stating the platform is unavailable due to a US ban, and suggesting President Trump may help reinstate it.

Half a day later, TikTok was back online, and users received another message welcoming them back to the platform (and, once more, referring to the US President as the reason for the platform’s return to service).

[Image] TikTok back up: the welcome-back message crediting President Trump’s efforts for the platform’s return to the US.

All in all, it seems like a blip on the social media radar. However, in the lead-up to the highly publicised TikTok ban, another China-based platform – RedNote (Xiaohongshu) – entered the fray, touted as an alternative that would enable US users to keep sharing and viewing content. Google Trends shows a surge in interest in the app in mid-January.

[Image] Google Trends showing the rapid spike in US search interest in ‘Rednote’ over 90 days.

So, what does this mean for OSINT practitioners? Traditionally, China-based social media platforms and messaging services (such as WeChat) have presented challenges for non-Chinese internet users. WeChat (Weixin), for example, is notoriously difficult to create an anonymous account on, even with a virtual number – in some cases, another active WeChat member must ‘vouch’ for your account in order to activate it. The surge in RedNote’s popularity (though perhaps short-lived), and the lack of hurdles to account creation, meant that Western and Chinese users came (figuratively) face to face for the first time.


However, the closing of the digital divide between Chinese and Western social media users brings security concerns with it as well. RedNote, for example, displays US users’ IP addresses on profile pages – it’s rare to see an IP address listed against a profile on a social media platform, and for users who don’t employ a VPN to obfuscate their IP, it reveals their approximate location, ISP, and potentially other details. Once more, we’re reminded of the importance of managing our online footprints (especially when there are exciting new online platforms to explore!).
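To illustrate just how much a displayed IP address gives away, here’s a short Python sketch. It parses a sample record in the shape returned by common IP-lookup services (the field names follow ipinfo.io’s format; the IP, location, and ISP values are placeholders, not a real lookup):

```python
import json

# A sample response in the shape returned by common IP-lookup services.
# Field names follow ipinfo.io's format; the values are placeholders.
sample = json.loads("""
{
  "ip": "198.51.100.7",
  "city": "Denver",
  "region": "Colorado",
  "country": "US",
  "org": "AS64500 Example ISP"
}
""")

def summarise_exposure(record):
    """Summarise what a publicly displayed IP address gives away."""
    return (f"{record['ip']} resolves to {record['city']}, "
            f"{record['region']} ({record['country']}) via {record['org']}")

print(summarise_exposure(sample))
```

Approximate location plus ISP is often enough to narrow a ‘pseudonymous’ account down to a workplace or a neighbourhood – which is exactly why an exposed IP on a profile page matters.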


Meta


On 8 January 2025, Meta announced that it was discontinuing the use of independent fact checkers, in an effort to promote ‘free expression’. Instead, it would be using a ‘community notes’ style feature, similar to X, and reporting has highlighted concerns around the spread of disinformation and divisive content on the platform.


Verifying information online is already a challenging task – we’ve highlighted in past blogs some of the tools and techniques that OSINT practitioners can use to assess and evaluate open-source content – and verification will likely become more challenging as mainstream social media platforms allow expanded discourse and ‘free expression’. With more content – and, particularly, more divisive political content – those working in the OSINT and disinformation space will need to work harder than ever to substantiate content found on social media.


For person-of-interest (POI) investigations, the ‘tone’ of social media platforms will become more relevant for investigators. Understanding someone’s background, ideology, and political views will help to understand where on the internet they might have a presence. While niche and alternative platforms like Gab will, no doubt, continue to exist in some form, a lack of moderation and fact-checking may lead to an increase in extremist rhetoric and narratives on mainstream platforms. It’s a good reminder that OSINT practitioners are often exposed to disturbing or problematic content, and we need to be aware of the risks – more on this in a previous blog.


X


Since Elon Musk’s acquisition of Twitter, and its subsequent rebranding as X, there has been a raft of changes to the platform. One key change that OSINT practitioners have likely noticed is the requirement to hold an account in order to search for and view specific content. X’s advanced search operators still function, and they are useful for targeting specific content, but they can no longer be used without registering for an account.
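The operators themselves (`from:`, `since:`, `until:`, quoted phrases) remain handy for targeted collection. As a minimal sketch, the helper below – our own illustration, not an official tool, with a placeholder handle – composes a search string you can paste into X’s search bar while logged in:

```python
def build_x_query(phrase=None, from_user=None, since=None, until=None):
    """Compose an X advanced-search string from a few common operators."""
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')        # exact-phrase match
    if from_user:
        parts.append(f"from:{from_user}")  # posts by a specific account
    if since:
        parts.append(f"since:{since}")     # on or after this date (YYYY-MM-DD)
    if until:
        parts.append(f"until:{until}")     # before this date
    return " ".join(parts)

# 'example_handle' is a placeholder, not a real account.
query = build_x_query(phrase="open source intelligence",
                      from_user="example_handle",
                      since="2025-01-01", until="2025-02-01")
print(query)
```

Building queries this way keeps collection repeatable and easy to log – useful when you need to document exactly what you searched and when.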


A key implication for OSINT practitioners is the reduction of access to data in X. Although an alternative X front-end, Nitter, still helps us to retrieve content without an account, bulk data collection using tools like Twint (a standard tool in the OSINT toolbox a few years ago) has become far more challenging. Understanding the information environment – even at a moment in history when the spread of disinformation and inauthentic content is more rampant than ever – increasingly has significant barriers to entry, and bespoke and commercial tools will be required to effectively collect data at scale.


Key Takeaways


It’s only February, yet we’ve already seen some significant shifts in the 2025 social media and technology landscape. As always, OSINT practitioners need to adapt to these developments and seek to understand them in order to stay up to date with the digital environment.

  • As social media platforms become increasingly entwined with political trends, understanding the demographics and political views of a POI will be more necessary than ever.

  • Validation and verification of data – both on social media platforms and from generative AI – present ongoing challenges for anyone seeking to collect and analyse information from the internet.

  • While the reduction in the digital divide between China and the West offers OSINT practitioners opportunities to better understand the Chinese internet landscape and perspective, it brings with it concerns about security and data privacy.

  • Bulk data collection using free tools is becoming increasingly difficult, and practitioners who seek to understand the spread of disinformation and inauthentic content are, more and more, being forced to either pay for API access or resort to manual collection techniques.




