Arunkumar Paramasivan, Rajinikannan, 2025. "The Use of AI in Detecting and Combating Online Misinformation" ESP International Journal of Artificial Intelligence & Data Science [IJAIDS] Volume 1, Issue 1: 7-15.
In today's increasingly connected digital world, the spread of misleading information online poses a serious threat to public health, social cohesion, and democratic institutions. Social media, real-time communication tools, and algorithm-driven content curation systems have made it easier for false information to propagate, whether intentionally or not. Amid this digital turbulence, Artificial Intelligence (AI) has become not merely a technological marvel but a crucial line of defence. This abstract addresses the critical link between AI and information integrity by examining how AI tools are used to detect, flag, filter, and counter false information online.

AI technologies such as Natural Language Processing (NLP), Machine Learning (ML), Deep Learning (DL), computer vision, and network analysis are increasingly used to uncover patterns of fabricated content in text, audio, images, and video. NLP models can scan large volumes of online content for deceptive language, such as clickbait phrasing, logical fallacies, and emotionally charged wording. Meanwhile, ML models can learn from databases of labelled authentic and fake news articles, allowing systems to classify and score new content in real time. AI can also help verify multimedia content by detecting altered photographs and fabricated videos commonly used in disinformation campaigns, drawing on image forensics and deepfake detection.

Social media companies including Facebook, Twitter (X), YouTube, and TikTok now deploy AI-powered moderation tools that automatically detect and remove false information. These platforms also use AI to flag suspicious posts, downrank untrustworthy sources, and direct readers to fact-checked content.
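The supervised classification workflow described above can be sketched in miniature. The following toy example trains a multinomial Naive Bayes model on a handful of labelled headlines; the headlines, labels, and choice of Naive Bayes are illustrative assumptions for exposition, not the systems or data discussed in the paper.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase whitespace tokenization; production NLP pipelines use
    # far richer features (n-grams, embeddings, transformer encoders).
    return text.lower().split()

def train(examples):
    """Train a multinomial Naive Bayes model on (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of documents
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the label with the highest posterior log-probability."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior plus log likelihood with add-one smoothing.
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Tiny invented corpus for illustration only (not real data).
corpus = [
    ("SHOCKING miracle cure doctors hate", "fake"),
    ("you won't believe this one weird trick", "fake"),
    ("government report details quarterly budget figures", "real"),
    ("city council approves new transit funding plan", "real"),
]
model = train(corpus)
print(classify("miracle trick doctors hate", model))  # -> fake
```

Real detectors replace the toy corpus with benchmark datasets such as FakeNewsNet or LIAR and the bag-of-words model with deep architectures, but the pattern of learning from labelled authentic and fake articles and then scoring unseen content is the same.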
Tools such as ClaimReview, Media Cloud, and FakeNewsNet also use AI to give journalists and fact-checkers real-time credibility scores and claim verification.

Despite its promise, AI-driven disinformation detection faces clear challenges. AI systems must contend with algorithmic bias, misread context, and false positives, especially when handling content that is hard to interpret, such as satire, regional dialects, or narratives specific to less widely known communities. Some AI models are also opaque, raising ethical concerns about freedom of speech, transparency, and accountability. Misapplied AI can over-filter content or exhibit political bias. These issues underscore the importance of a human-in-the-loop approach, in which AI supports human decision-making rather than replacing it.

AI is valuable for more than detection; it can also support prevention and education. AI-based tools can monitor how information travels in order to forecast how false information will spread, and applications that teach people about media and manipulation techniques can strengthen media literacy. Looking ahead, explainable AI (XAI), personalised misinformation detection systems, and cross-platform AI promise greater transparency, trustworthiness, and efficiency.

Using AI to combat fake news online is not only a technical solution; it is a shared responsibility. In the digital age, AI, human oversight, platform accountability, and public awareness must work together to preserve the truth. AI can be fast, scalable, and reliable, but it must be applied in a fair, open, and ethical way so that the remedy does not cause harm of its own.
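The human-in-the-loop approach described above is often implemented as confidence-based routing: the model acts autonomously only when very confident, and defers uncertain cases to human reviewers. The function and threshold values below are illustrative assumptions, not parameters from any platform discussed in the paper.

```python
def route(fake_probability, auto_threshold=0.95, review_threshold=0.60):
    """Route a post based on a model's estimated probability it is fake.

    Illustrative policy (thresholds are assumed, not sourced):
      - very confident predictions are flagged automatically,
      - mid-confidence predictions go to a human reviewer,
      - low scores are left alone.
    """
    if fake_probability >= auto_threshold:
        return "auto_flag"      # high confidence: label/downrank automatically
    if fake_probability >= review_threshold:
        return "human_review"   # uncertain: AI assists, a person decides
    return "no_action"          # likely benign: do nothing

print(route(0.98))  # -> auto_flag
print(route(0.72))  # -> human_review
print(route(0.15))  # -> no_action
```

Keeping a human decision in the mid-confidence band is precisely where satire, dialect, and context-dependent content land, which is why the thresholds matter more than the model alone.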
[1] Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake News Detection on Social Media: A Data Mining Perspective. ACM SIGKDD Explorations Newsletter.
[2] Zhou, X., & Zafarani, R. (2018). Fake News: A Survey of Research, Detection Methods, and Opportunities. ACM Computing Surveys.
[3] Kumar, S., & Carley, K. M. (2019). Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations. ACL.
[4] Ahmed, H., Traore, I., & Saad, S. (2017). Detecting opinion spams and fake news using text classification. Security and Privacy.
[5] Conroy, N. J., Rubin, V. L., & Chen, Y. (2015). Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science and Technology.
[6] Volkova, S., Shaffer, K., Jang, J. Y., & Hodas, N. (2017). Separating Facts from Fiction: Linguistic Models to Classify Suspicious and Trusted News Posts on Twitter. ACL.
[7] Jin, Z., Cao, J., Guo, H., Zhang, Y., & Luo, J. (2017). Multimodal Fusion with Recurrent Neural Networks for Rumor Detection on Microblogs. ACM Multimedia Conference.
[8] Ruchansky, N., Seo, S., & Liu, Y. (2017). CSI: A Hybrid Deep Model for Fake News Detection. CIKM.
[9] Thorne, J., & Vlachos, A. (2018). Automated Fact Checking: Task Formulations, Methods and Future Directions. COLING.
[10] Buntain, C., & Golbeck, J. (2017). Automatically Identifying Fake News in Popular Twitter Threads. IEEE SmartCloud.
[11] Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking. Council of Europe.
[12] Marwick, A., & Lewis, R. (2017). Media Manipulation and Disinformation Online. Data & Society Research Institute.
[13] Tandoc Jr., E. C., Lim, Z. W., & Ling, R. (2018). Defining "Fake News": A Typology of Scholarly Definitions. Digital Journalism.
[14] Lazer, D. M. J., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096.
[15] Woolley, S. C., & Howard, P. N. (Eds.). (2019). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press.
[16] European Commission. (2018). A Multi-dimensional Approach to Disinformation.
[17] UNESCO. (2021). Journalism, "Fake News" & Disinformation: Handbook for Journalism Education and Training.
[18] World Economic Forum. (2020). The Global Risks Report.
[19] Pew Research Center. (2021). News Consumption Across Social Media in 2021.
[20] RAND Corporation. (2021). Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life.
[21] Meta. (2022). Combating misinformation across our platforms.
[22] Twitter Blog. (2021). Introducing Birdwatch, a community-based approach to misinformation.
[23] Google AI. (2022). Using AI to combat misinformation in Search.
[24] TikTok Newsroom. (2023). Supporting integrity and transparency with AI-powered tools.
[25] Microsoft. (2020). AI for Good: Combating the spread of online misinformation.
[26] The Guardian. (2021). How AI is being used to combat fake news.
[27] BBC News. (2022). Can AI be trusted to stop fake news?
[28] The New York Times. (2020). The Rise of Deepfakes and the Threat to Truth.
[29] Wired. (2021). AI tools are getting better at spotting deepfakes.
[30] MIT Technology Review. (2019). AI versus disinformation: Who wins?
[31] MIT Media Lab. (2020). Co-Inform: Tackling the problem of online misinformation.
[32] Stanford Internet Observatory. (2022). The future of information warfare: AI, propaganda, and beyond.
[33] Oxford Internet Institute. (2019). Computational propaganda worldwide.
[34] Berkman Klein Center, Harvard University. (2021). How algorithms handle misinformation.
[35] Cornell University AI Lab. (2020). Linguistic patterns in fake news.
[36] FakeNewsNet Dataset (Shu et al., 2018).
[37] LIAR Dataset (Wang, 2017).
[38] FEVER Dataset (Thorne et al., 2018).
[39] CoAID Dataset (Cui & Lee, 2020).
[40] Twitter15/16 Datasets (Ma et al., 2017).
[41] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence.
[42] Floridi, L. (2019). Establishing the rules for building trustworthy AI. Nature Machine Intelligence.
[43] Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society.
[44] UNESCO. (2022). Recommendation on the Ethics of Artificial Intelligence.
[45] AlgorithmWatch. (2021). Automating Society Report.
[46] Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
[47] Brown, T. B., et al. (2020). Language Models are Few-Shot Learners.
[48] Vaswani, A., et al. (2017). Attention Is All You Need.
[49] Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training.
[50] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., & Le, Q. V. (2019). XLNet: Generalized Autoregressive Pretraining for Language Understanding.
Keywords: Fake News, Digital Literacy, Fact-Checking, Content Authenticity, Explainable AI, Online Trust, Human-AI Collaboration, Algorithmic Bias, Misinformation Detection, Disinformation, Natural Language Processing (NLP), Machine Learning, Deepfake Detection, Social Media Moderation.