
LLM-Powered Cyber Defense: Applications of Large Language Models in Threat Detection and Response

© 2025 by IJACT

Volume 3 Issue 2

Year of Publication : 2025

Author : Anitha Mareedu

DOI : 10.56472/25838628/IJACT-V3I2P106

Citation :

Anitha Mareedu, 2025. "LLM-Powered Cyber Defense: Applications of Large Language Models in Threat Detection and Response," ESP International Journal of Advancements in Computational Technology (ESP-IJACT), Volume 3, Issue 2: 53-63.

Abstract :

The emergence of large language models (LLMs) such as GPT-4, Claude, and PaLM 2 has introduced transformative capabilities into modern cybersecurity operations. Leveraging advanced natural language processing, code synthesis, and real-time summarization, LLMs are increasingly embedded within Security Operations Centers (SOCs) to augment threat detection, automate event analysis, and support incident response. This review systematically explores the application of LLMs in log analysis, anomaly detection, SOC automation, and cyber threat intelligence, drawing on recent implementations, benchmarks, and case studies. It further examines ethical and regulatory concerns, including explainability, prompt injection risks, and compliance with standards such as NIST, ISO/IEC 27001, and GDPR. While LLMs significantly enhance operational efficiency, the review emphasizes the continued need for human oversight, robust validation, and adherence to responsible AI principles. The article concludes with a brief outlook on emerging trends, such as multimodal assistants and autonomous AI agents, which are acknowledged as outside the present scope but indicative of the evolving landscape.

References :

[1] Babate, et al., "State of cyber security: Emerging threats landscape," Int. J. Adv. Res. Comput. Sci. Technol., vol. 3, no. 1, pp. 113–119, 2015.

[2] N. Jeffrey, Q. Tan, and J. R. Villar, "A review of anomaly detection strategies to detect threats to cyber-physical systems," Electronics, vol. 12, no. 15, p. 3283, 2023.

[3] H. Sarker, AI-driven Cybersecurity and Threat Intelligence: Cyber Automation, Intelligent Decision-Making and Explainability, Springer Nature, 2024.

[4] S. Sharma and T. Arjunan, "Natural language processing for detecting anomalies and intrusions in unstructured cybersecurity data," Int. J. Inf. Cybersecurity, vol. 7, no. 12, pp. 1–24, 2023.

[5] Y. Yao, et al., "A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly," High-Confidence Comput., p. 100211, 2024.

[6] Y. Chae and T. Davidson, "Large language models for text classification: From zero-shot learning to fine-tuning," Open Sci. Found., vol. 10, 2023.

[7] F. Yashu, M. Saqib, S. Malhotra, D. Mehta, J. Jangid and S. Dixit, "Thread mitigation in cloud native application development," Webology, vol. 18, no. 6, pp. 10160–10161, 2021. [Online]. Available: https://www.webology.org/abstract.php?id=5338s

[8] Y. Surampudi, Big Data Meets LLMs: A New Era of Incident Monitoring, Libertatem Media Private Limited, 2024.

[9] H. Daqqah, Leveraging Large Language Models (LLMs) for Automated Extraction and Processing of Complex Ordering Forms, Ph.D. dissertation, Massachusetts Institute of Technology, 2024.

[10] Fariha, et al., "Log anomaly detection by leveraging LLM-based parsing and embedding with attention mechanism," in Proc. 2024 IEEE Canadian Conf. Electr. Comput. Eng. (CCECE), IEEE, 2024.

[11] Karlsen, et al., "Large language models and unsupervised feature learning: Implications for log analysis," Ann. Telecommun., vol. 79, no. 11, pp. 711–729, 2024.

[12] S. Suominen, "Cyber threat intelligence management in technical cybersecurity operations," 2024.

[13] T. Yang, et al., "Ad-LLM: Benchmarking large language models for anomaly detection," arXiv preprint, arXiv:2412.11142, 2024.

[14] O. Oniagbi, A. Hakkala, and I. Hasanov, Evaluation of LLM Agents for the SOC Tier 1 Analyst Triage Process, Master's thesis, Univ. of Turku, Dept. of Computing, 2024. [Online]. Available: https://www.utupub.fi/bitstream/handle/10024/178601/Oniagbi%20Openime%20Thesis.pdf

[15] S. R. Rahmani, Integrating Large Language Models into Cybersecurity Incident Response: Enhancing Threat Detection and Analysis, Univ. of Applied Sciences Technikum Wien, 2024.

[16] A. Alahmadi, L. Axon, and I. Martinovic, "99% false positives: A qualitative study of SOC analysts' perspectives on security alarms," in Proc. 31st USENIX Security Symp. (USENIX Security 22), 2022.

[17] S. Gandini, Development of Incident Response Playbooks and Runbooks for Amazon Web Services Ransomware Scenarios, Master’s thesis, Univ. of Turku, 2023.

[18] Weber, "Large language models as software components: A taxonomy for LLM-integrated applications," arXiv preprint, arXiv:2406.10300, 2024.

[19] G. Edelman, et al., "Randomized controlled trial for Microsoft Security Copilot," SSRN, 2023. [Online]. Available: https://ssrn.com/abstract=4648700

[20] J. Jangid, "Efficient training data caching for deep learning in edge computing networks," Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol., vol. 7, no. 5, pp. 337–362, 2020. doi: 10.32628/CSEIT20631113

[21] Ahmed, "Cybersecurity policy frameworks for AI in government: Balancing national security and privacy concerns," Int. J. Multidiscip. Sci. Manage., vol. 1, no. 4, pp. 43–53, 2024.

Keywords :

Large Language Models (LLMs), Security Operations Center (SOC), GPT-4, Claude, PaLM 2, Regulatory Compliance, MITRE ATT&CK, SIEM, SOAR.