Risk of self-radicalisation among vulnerable individuals arising from use of large language model platforms
7 May 2026
Question:
Mr Cai Yinzhou: To ask the Coordinating Minister for National Security and Minister for Home Affairs (a) whether the Ministry has assessed the risk of self-radicalisation among vulnerable individuals arising from the use of large language models that may produce validating or reinforcing outputs; and (b) what measures are the Internal Security Department and relevant agencies taking or considering to detect and mitigate such risks.
Answer:
Mr K Shanmugam, Coordinating Minister for National Security and Minister for Home Affairs:
1. Self-radicalisation is often driven by progressive exposure to online radical content, reinforcement within online echo chambers, and repeated and sustained consumption of extreme and violent materials. Artificial Intelligence (AI)-enabled tools like Large Language Models (LLMs) can accelerate and amplify the pathways to self-radicalisation. For example, they can provide personalised and interactive responses which affirm or validate an individual’s radical beliefs. Social media algorithms can also drive exposure to reinforcing content, hastening the time taken for self-radicalisation.
2. Youths are particularly vulnerable due to the significant amount of time they spend online. Some of the self-radicalised Singaporean youths that ISD has dealt with have leveraged AI, including LLMs, to facilitate terrorism-related activities. For instance, a 17-year-old detained in September 2024 used an AI chatbot to generate a pledge of allegiance to the Islamic State in Iraq and Syria (ISIS), and his attack manifesto. A 17-year-old far-right extremist detained in March 2025 had searched on an AI chatbot for instructions on producing ammunition, and on 3D-printing of firearms, for his local attack plans.
3. The Government takes a holistic approach to tackling online radicalisation. We have strengthened our legislative levers through the amended Broadcasting Act and the new Online Criminal Harms Act to block or remove extremist and terrorist content, including those generated by AI. However, it is impossible to detect and remove all such content.
4. Public education and community vigilance are critical complements to tackling the radicalisation threat. ISD works with the Ministry of Education (MOE) to conduct counter-terrorism and counter-radicalisation outreach to schools. These engagements sensitise students and school personnel to the threat of radicalisation, and the importance of early reporting of suspected radicalisation cases. MOE’s Cyber Wellness lessons in the Character and Citizenship Education (CCE) curriculum aim to equip students with the skills to recognise risks in the digital space and to be safe, respectful and responsible online users. In addition, community partners such as the Religious Rehabilitation Group have intensified counter-radicalisation outreach in the digital space. MHA also works with digital content creators to raise awareness on the threat of online radicalisation.
