Fighting Disinformation in the Digital Age: How the Ukrainian AI Community Came Together to Fight Harmful and False Information

A New Challenge for the Digital Age

In a world where fake news spreads at the speed of light, distinguishing true information from false is not merely important but necessary for protecting the information resilience of the state and society. The issue is especially acute for Ukraine, where disinformation can have serious consequences both domestically and on the international stage.

In response to this threat, AI HOUSE, the largest and most powerful AI community in Ukraine and part of the Roosh tech ecosystem, partnered with the AI startup Mantis Analytics in November to launch a competition on the Kaggle platform. The goal of the competition was to unite developers and AI/ML specialists in building artificial intelligence-driven solutions for detecting and countering disinformation.

Disinformation Detection Challenge: What Is It?

The Disinformation Detection Challenge was a three-day event with the main goal of developing AI models to assist Ukrainian media and fact-checkers in identifying potential fake information.

The challenge was attended by 35 teams and 9 mentors, including representatives of leading technology companies and organizations. The event was also supported by companies such as Grammarly, SQUAD, and VoxCheck.

Participants, who could join as part of already formed teams or individually, developed their solutions on the Kaggle platform. Kaggle is a competition platform for analytics and predictive modeling, where participants compete to create the best models for predicting and describing data provided by companies or other users.

The event concluded with an in-person gathering of participants and mentors to discuss the results and network, giving participants the opportunity to exchange experience and ideas about further uses of the solutions they had created.

Behind the Scenes: How Teams Worked

Organizers provided participants with data and a task that involved developing models to assess the likelihood that certain content is disinformation and, accordingly, requires verification by an analyst or communication strategist. Using this information as a starting point, participants competed to create the most effective model, experimenting with various techniques and approaches, from deep learning to the development of complex algorithms. Each team sought to find a unique and innovative solution, and collaboration with mentors helped teams adjust their strategies and improve the quality of their models.
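
The task described above can be illustrated with a minimal sketch (not the competition code): a classifier that assigns each piece of content a disinformation probability, so only high-risk items are routed to a human analyst. The training examples and the 0.5 threshold below are purely illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = likely disinformation, 0 = ordinary content.
texts = [
    "secret lab creates virus, officials hide the truth",
    "miracle cure banned by doctors, share before it is deleted",
    "city council approves new budget for road repairs",
    "local team wins regional football championship",
]
labels = [1, 1, 0, 0]

# A simple bag-of-words pipeline stands in for whatever features a team used.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new items; anything above the threshold goes to an analyst for review.
THRESHOLD = 0.5
new_items = ["officials hide the truth about the virus"]
scores = model.predict_proba(new_items)[:, 1]
flagged = [t for t, s in zip(new_items, scores) if s >= THRESHOLD]
```

The key design point is that the model outputs a probability rather than a hard verdict, which is what makes analyst triage possible.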

Methodological challenges emerged during the work, in particular the question of where to draw the line between genuinely harmful, false content and merely expressed opinions, even unpopular ones. Teams experimented with a range of models: LLMs, various loss functions, analysis of additional media metadata, and linear models built on RoBERTa-based sentence-transformer embeddings. These diverse approaches show that even under difficult conditions, creativity and adaptability can produce significant results.
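
One concrete example of the "various loss functions" mentioned above: when disinformation is a minority class, a common choice is class-weighted binary cross-entropy, which penalizes missed fakes more than false alarms. The sketch below is a plain-Python illustration of the idea, not any team's actual training code; the weight of 3.0 is an arbitrary assumption.

```python
import math

def weighted_bce(y_true, y_prob, pos_weight=3.0, eps=1e-7):
    """Class-weighted binary cross-entropy.

    pos_weight > 1 makes missed disinformation (false negatives)
    cost more than false alarms -- useful when fakes are rare.
    """
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clip for numerical stability
        total += -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# A confident miss on a true fake is penalized pos_weight times harder
# than an equally confident false alarm on ordinary content.
loss_miss = weighted_bce([1], [0.1])   # model said 0.1 on real disinformation
loss_false = weighted_bce([0], [0.9])  # model said 0.9 on ordinary content
```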

Winners and Their Solutions

The Disinformation Detection Challenge had three winning teams, each presenting its own solution:

  • The first team built two simple models whose predictions were combined by a second-tier model.
  • The second team trained a model on the available data, built a classifier on that model's features, and then expanded the training sample using additional labelled data.
  • The third team combined a classical classification model with the nearest-neighbors method, merging the two approaches to obtain a class distribution close to that of the training set.

One key insight from participants was that combining different models lets a system generalize over the data more effectively and increases the accuracy of disinformation detection. This underscores that integrating various approaches and technologies can significantly enhance results in the fight against fake news.
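
The model-combination idea can be shown with a minimal soft-ensemble sketch in the spirit of the third team's approach: average the class probabilities of two different model families (here a linear model and nearest neighbors). The synthetic features and model choices are assumptions for illustration only, not the winners' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic 2-class features standing in for text embeddings.
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

linear = LogisticRegression().fit(X, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Soft ensemble: average the class-1 probabilities of both models.
X_new = rng.normal(size=(10, 5))
p_ensemble = (linear.predict_proba(X_new)[:, 1]
              + knn.predict_proba(X_new)[:, 1]) / 2
```

Averaging probabilities tends to smooth out the individual models' errors, which is one simple reason ensembles generalize better than their parts.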

Other insights shared by the teams:

  • Linear algorithms, despite their simplicity, remain effective, especially under constraints on resources and data. This suggests such models could be used in scenarios where both high accuracy and interpretability of results are needed.
  • Some teams experimented with data labelling using ChatGPT-3.5-turbo, which proved to be a promising approach.
  • Combating disinformation through AI requires close collaboration between technology companies and civil society, which will serve as a source of expertise and additional verification of solutions.
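
The LLM-assisted labelling mentioned above follows a simple pseudo-labelling loop: an LLM labels unlabeled items, and the machine-generated labels expand the training set. The sketch below is hypothetical; the `llm_label` function is a keyword-heuristic stub standing in for a real API call to a model such as ChatGPT-3.5-turbo, and the texts are invented examples.

```python
def llm_label(text: str) -> int:
    """Stand-in for a real LLM labelling call (e.g. via an API).

    A crude keyword heuristic plays the model's role here so the
    sketch runs without network access. 1 = suspected disinformation.
    """
    suspicious = ("secret", "hidden truth", "they don't want you to know")
    return int(any(kw in text.lower() for kw in suspicious))

labeled = [("budget approved by council", 0)]  # hand-labeled seed example
unlabeled = [
    "the secret cure they don't want you to know",
    "weather forecast predicts rain tomorrow",
]

# Expand the training set with machine-generated labels; in practice
# these pseudo-labels should stay flagged for later human audit.
for text in unlabeled:
    labeled.append((text, llm_label(text)))
```

Keeping pseudo-labels auditable matters here, since labelling errors would otherwise propagate into every downstream model.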

These discoveries not only demonstrate the importance of innovation and experimentation in AI but also open up new possibilities for their real-world application.

The challenge winners will receive consultations from Mantis Analytics on further implementing their solutions for Ukrainian media and fact-checkers. The aim of this support is to put effective anti-disinformation tools into practice and to stimulate the continued development of these solutions. Additionally, some participants plan to volunteer to develop their solutions further in collaboration with VoxCheck.

The Future of Combating Disinformation

Upon completing the Disinformation Detection Challenge, it becomes evident that combating disinformation in the digital era is not just a challenge, but also a significant opportunity for the Ukrainian AI community. The innovative approaches demonstrated by participants indicate broad prospects for using artificial intelligence to ensure information accuracy and transparency.

However, alongside the opportunities, the community faces several challenges:

  • productizing the developed solutions and securing implementation support from major players in the media environment and civil society;
  • developing a transparent ethical code for the use of such technologies;
  • ensuring the solutions are accessible and effective for a wide range of users;
  • continuously improving them in response to constantly evolving disinformation methods.

Further development in this field will require collaboration among various stakeholders: technology companies, research institutions, government agencies, and the public. Two crucial steps will be the development and implementation of effective tools capable of quickly and accurately detecting disinformation, as well as the promotion of information literacy among the population.

About the organizers of the Disinformation Detection Challenge:

AI HOUSE is a non-profit organization that aims to build the largest and most powerful AI community in Ukraine. AI HOUSE is part of the Roosh ecosystem, a technology company that helps pioneering entrepreneurs and disruptive technologies scale globally.

Roosh is one of the most powerful Ukrainian AI/ML tech companies helping pioneering entrepreneurs and disruptive technologies scale globally by building, developing, and investing in them. The Roosh tech ecosystem consists of venture firm Roosh Ventures; companies Reface and Zibra AI; strategic partner for AI-powered company development Gathers; R&D company Neurons Lab, and Ukraine’s largest AI community, AI HOUSE.

Mantis Analytics is an AI-based platform that monitors the information space in real-time. Mantis Analytics provides organizations with reliable and actionable data, enriched with deep intelligence, for managing physical and informational risks.

Event partners:

VoxCheck is an independent fact-checking project from the nonprofit organization VoxUkraine.

Grammarly is a Ukrainian-founded AI writing assistant for English.

SQUAD is a Ukrainian research and delivery center working on the latest smart home security and IoT products.
