Implicit vs explicit reasoning in strategic narrative detection
Abstract
Relevance of the research: The proliferation of state-sponsored propaganda across the global digital ecosystem poses a serious threat to international security and democratic stability. As cognitive warfare evolves, adversaries deploy subtle, culturally nuanced strategic narratives to polarize public opinion and alter collective behavior. Defending the information space therefore requires natural language processing systems capable of deep, context-aware reasoning that can identify and deconstruct manipulative narratives in real time, moving beyond traditional manual analysis.

Aim and objectives: This study evaluates how effectively advanced artificial intelligence reasoning models classify strategic narratives. It investigates the interplay between internal computational reasoning budgets and the enforcement of explicit step-by-step reasoning instructions, seeking to uncover how modern architectures process geopolitical nuance and to identify interference effects caused by traditional instruction techniques.

Methods used: The investigation conducted extensive empirical evaluations across a diverse spectrum of large language models on the DIPROMATS 2024 dataset, which comprises social media posts published by diplomatic authorities. The experimental framework isolated variables by varying internal reasoning budgets and prompt output instructions. Rather than complex modular architectures, the methodology relied on optimized single-prompt instruction configurations embedded with contextual examples. The assessment rigorously measured exact-match classification accuracy, operational latency, and economic efficiency.

Results: The findings reveal a significant shift in optimal processing strategies. The research identified a critical reasoning paradox: enforcing explicit step-by-step reasoning actively degrades the classification accuracy of some highly capable models by disrupting their optimized latent analytical pathways. A stark architectural divergence also emerged with respect to computational budgets: while some models improved with increased reasoning time, others suffered severe performance degradation when allocated maximum computational resources. Conversely, optimized lightweight models demonstrated an unparalleled combination of processing speed and cost efficiency.

Conclusions: The research concludes that modern reasoning models inherently possess an advanced capacity to natively detect latent propagandistic frameworks, bypassing the need for extensive supervised training data. By exposing the interference effect, the study challenges traditional prompt engineering paradigms and provides the empirical benchmarks required to integrate scalable, real-time strategic narrative detection into comprehensive information security defense systems.
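The single-prompt, few-shot configuration described in Methods can be sketched as below. This is a minimal illustration only: the label set, example posts, and the `build_prompt` helper are hypothetical assumptions for demonstration, not the study's actual prompt or labels from the DIPROMATS 2024 dataset.

```python
# Hypothetical sketch of a single-prompt configuration with embedded
# contextual (few-shot) examples; labels and example posts are invented
# for illustration, not drawn from DIPROMATS 2024.

FEW_SHOT_EXAMPLES = [
    ("Our nation stands united against foreign aggression.", "strategic_narrative"),
    ("The embassy will be closed on Monday for a public holiday.", "neutral"),
]

def build_prompt(post: str) -> str:
    """Assemble one classification prompt with embedded contextual examples."""
    lines = [
        "Classify the diplomatic post as 'strategic_narrative' or 'neutral'.",
        "Answer with the label only (no step-by-step reasoning).",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Post: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Post: {post}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_prompt("We will never bow to external pressure.")
print(prompt)
```

The key design point mirrored here is that the instruction suppresses explicit step-by-step output, leaving the model's latent reasoning pathways undisturbed; toggling that single instruction line is one way to isolate the interference effect the study reports.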

