July 25, 2019, Paris, France

Workshop on
Reducing Online Misinformation Exposure
ROME 2019

Colocated with SIGIR 2019

Overview

The spread of misinformation online undermines people's trust in online information. There is global awareness that online misinformation can have serious repercussions for society. For example, false news can polarize public discourse ahead of important events, such as elections. In the health domain, low-quality and misleading information can have an adverse impact on the health of individuals, who may adopt remedies not supported by scientific evidence.

To reduce people's exposure to misinformation online, fact-checkers manually verify the veracity of claims made in content shared online. However, fact-checking is a slow process that involves significant manual and intellectual effort: a fact-checker may have to find evidence from trustworthy and reliable sources and interpret the available information in order to reach a conclusion.

Improving the efficiency of fact-checking by providing tools that automate parts of the process, or by defining other processes for validating the veracity of claims made on online social media, is a challenging problem with real societal impact that requires an interdisciplinary approach. The International Workshop on Reducing Online Misinformation Exposure (ROME) will provide a forum for researchers to discuss these problems and to define new directions for work on automating fact-checking, reducing misinformation online, and making social media more resilient to the spread of false news.

Call For Papers

We invite submissions of research papers on computational fact-checking, false news detection, and the analysis of misinformation spread on social media. Topics of interest include, but are not limited to:

  • Claim extraction/detection
  • Stance detection
  • Claim source detection
  • End-to-end evaluation of fact-checking
  • Supporting evidence retrieval
  • Quantifying and addressing biases in false news detection
  • Explainable models for computational fact checking
  • Software architectures for large scale false news detection
  • Provenance and source detection of claims
  • Analysis of the spread of misinformation
  • Crowd-sourcing for fact checking

Important Dates

Submission deadline: Wednesday, May 15, 2019

Acceptance notification: Friday, May 31, 2019

Workshop day: Thursday, July 25, 2019

Submission Guidelines

All submissions will undergo double-blind peer review by the programme committee and will be judged on their relevance to the workshop and their potential to generate discussion. Submissions must present original research contributions not concurrently submitted elsewhere (pre-prints submitted to arXiv are eligible).

Submissions should be at most 6 pages long, excluding references, must be formatted according to the ACM SIG proceedings format, and must be submitted electronically through the EasyChair submission site.

Each accepted paper will be presented in one of the workshop sessions and will also be allocated a slot in the poster session. At least one author of each accepted paper must register for the workshop and present the paper in person. Authors of accepted papers will be invited to submit extended versions to a journal special issue.

All questions about the workshop and submissions should be emailed to: rome2019workshop[at]easychair.org

Invited Speakers

Title: Technological Approaches to Online Misinformation: Major Challenges Ahead

Abstract: The currently ubiquitous online mis- and disinformation poses serious threats to society, democracy, and business. This talk first defines the technological, legal, societal, and ethical dimensions of this phenomenon. It also presents the current understanding of why people believe false narratives, what motivates their sharing, and how they might affect offline behaviour (e.g. voting). The talk then summarises state-of-the-art technological approaches to fighting online misinformation. It follows the AMI conceptual framework (Agent, Message, Interpreter), which considers the origin(s) and the impact of online disinformation alongside the veracity of individual messages. In conclusion, major outstanding socio-technical and ethical challenges are discussed.


Title: Can We Spot the "Fake News" Before They Were Even Written?

Abstract: Given the recent proliferation of disinformation online, there has also been growing research interest in automatically debunking rumors, false claims, and "fake news". A number of fact-checking initiatives have been launched so far, both manual and automatic, but the whole enterprise remains in a state of crisis: by the time a claim is finally fact-checked, it could have reached millions of users, and the harm caused could hardly be undone. An arguably more promising direction is to focus on fact-checking entire news outlets, which can be done in advance. We could then fact-check the news before they were even written, by checking how trustworthy the outlets that published them are. We will show how we do this in the Tanbih news aggregator, which makes readers aware of what they are reading. In particular, we develop media profiles that show the general factuality of reporting, the degree of propagandistic content, hyper-partisanship, leading political ideology, general frame of reporting, and stance with respect to various claims and topics.
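
As a rough illustration of the source-level idea, a minimal sketch of a media profile and a pre-screening lookup might look as follows. This is not the actual Tanbih implementation: the class, the field names (which simply follow the profile dimensions listed in the abstract), and the example scores are all hypothetical.

```python
# Hypothetical sketch of source-level pre-screening; not an actual Tanbih API.
from dataclasses import dataclass

@dataclass
class MediaProfile:
    factuality: float          # general factuality of reporting, in [0, 1]
    propaganda: float          # degree of propagandistic content, in [0, 1]
    hyper_partisanship: float  # in [0, 1]
    ideology: str              # leading political ideology, e.g. "left", "centre", "right"

# Illustrative, made-up profile database.
PROFILES = {"example-news.com": MediaProfile(0.35, 0.80, 0.90, "right")}

def prescreen(outlet: str, threshold: float = 0.5) -> str:
    """Assess an outlet's trustworthiness before reading any of its articles."""
    profile = PROFILES.get(outlet)
    if profile is None:
        return "unknown outlet: fall back to claim-level fact-checking"
    return "low-trust source" if profile.factuality < threshold else "likely reliable source"

print(prescreen("example-news.com"))  # -> "low-trust source"
```

The design point is that the profile is computed once per outlet, ahead of time, so every new article inherits a trust assessment immediately instead of waiting for claim-level fact-checking.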


Title: Towards the Automatic Detection of Credulous Users on Social Networks

Abstract: Recent studies show how online social bots (automated, often malicious accounts that populate social networks and mimic genuine users) can amplify the dissemination of (fake) information by orders of magnitude. To mitigate the phenomenon, academics and social media administrators have been studying bot detection techniques for years. In this talk, we tackle the issue from another perspective: using Twitter as a benchmark, we design and develop a supervised classification approach to automatically recognize Twitter accounts that belong to humans but have a high percentage of bots among their friends. We deem those users 'credulous'. We rely on an existing bot detector and a dataset of genuine users, extended with additional information about their friends. First, the bot detector is run over those friends to identify which of them are bots. Then, we rank the genuine users by combining different metrics, including the ratio of bots to humans in their list of friends; the ground truth of credulous users derives from this ranking. Finally, we design and develop the credulous-user classifier, without exploiting features of the (expensive to collect) list of friends, relying only on a low-cost feature engineering and extraction phase. On our ground truth, we achieve an accuracy ranging from 88.70% to 93.27% (depending on the learning algorithm), which is a positive and encouraging result. We argue that an automatic, low-cost detection of 'credulous' users, and the consequent investigation of their behavioural patterns, is beneficial for limiting the circulation of polarised and/or fake content on social networks, and offers an alternative route to detecting social bots.
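
To make the pipeline concrete, here is a minimal Python sketch of the labelling and training steps, under stated simplifications: it ranks users by a single metric (the fraction of bot friends), whereas the talk combines several metrics; is_bot, friends_of, and the feature matrix are hypothetical stand-ins for the existing bot detector and the extended dataset, and scikit-learn's random forest stands in for the various learning algorithms evaluated.

```python
# Minimal sketch of the credulous-user labelling pipeline described above.
# `is_bot`, `friends_of`, and the feature matrix are hypothetical stand-ins
# for the existing bot detector and the extended dataset from the talk.
from sklearn.ensemble import RandomForestClassifier

def bot_fraction(friend_ids, is_bot):
    """Fraction of a user's friends that the external detector flags as bots."""
    flags = [is_bot(fid) for fid in friend_ids]
    return sum(flags) / len(flags) if flags else 0.0

def label_credulous(users, friends_of, is_bot, top_fraction=0.1):
    """Rank genuine users by their bot fraction and label the top slice as
    'credulous'; this labelling serves as the ground truth for training."""
    ranked = sorted(users, key=lambda u: bot_fraction(friends_of[u], is_bot),
                    reverse=True)
    credulous = set(ranked[:int(len(ranked) * top_fraction)])
    return [u in credulous for u in users]

def train_credulous_classifier(low_cost_features, labels):
    """Train on cheap profile features only (e.g. account age, tweet counts),
    so that prediction never requires collecting the expensive friend lists."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return clf.fit(low_cost_features, labels)
```

The friend lists are needed only once, to derive the labels; the deployed classifier then predicts credulousness from low-cost features alone, which is what makes the approach cheap at inference time.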

Accepted Papers

Oral & Poster presentations

  1. Ahmet Aker, Marius Hamacher, Anne Smets, Alicia Nti, Hauke Gravenkamp, Johannes Erdmann, Sabrina Mayer, Julia Serong, Anna Welpinghus and Francesco Marchi. Corpus of news articles annotated with article level subjectivity. [PDF]
  2. Dimitrios Bountouridis, Mykola Makhortykh, Emily Sullivan, Jaron Harambam, Nava Tintarev and Claudia Hauff. Annotating Credibility: Identifying and Mitigating Bias in Credibility Datasets. [PDF]
  3. Marco De Grandis, Gabriella Pasi and Marco Viviani. Multi-Criteria Decision Making and Supervised Learning for Fake News Detection in Microblogging. [PDF]
  4. Amira Ghenai, Mark D. Smucker and Charles L. A. Clarke. A Think-Aloud Study about Medical Misinformation in Search Results. [PDF]
  5. Olga Papadopoulou, Dimitrios Giomelakis, Lazaros Apostolidis, Symeon Papadopoulos and Yiannis Kompatsiaris. Context Aggregation and Analysis: A Tool for User-Generated Video Verification. [PDF]
  6. Wee Yong Lim, Mong Li Lee and Wynne Hsu. End-to-End Time-Sensitive Fact Check. [PDF]

Poster presentations

  1. Mustafa Abualsaud and Mark Smucker. Exposure and Order Effects of Misinformation on Health Search Decisions. [PDF]
  2. Frosso Papanastasiou, Georgios Katsimpras and Georgios Paliouras. Tensor Factorization with Label Information for Fake News Detection. [PDF]
  3. Fabio Del Vigna, Guido Caldarelli, Rocco De Nicola, Marinella Petrocchi and Fabio Saracco. The role of bot squads in the political propaganda on Twitter. [PDF]
  4. Ivan Srba, Robert Moro, Jakub Simko, Jakub Sevcech, Daniela Chuda, Pavol Navrat and Maria Bielikova. Monant: Platform for Monitoring, Detection and Mitigation of Antisocial Behaviour. [PDF]

Programme

9:00 - 9:05: Welcome
9:05 - 9:50: Keynote 1 - Carolina Scarton
9:50 - 10:30: Session 1
Amira Ghenai, Mark D. Smucker and Charles L. A. Clarke. A Think-Aloud Study about Medical Misinformation in Search Results
Dimitrios Bountouridis, Mykola Makhortykh, Emily Sullivan, Jaron Harambam, Nava Tintarev and Claudia Hauff. Annotating Credibility: Identifying and Mitigating Bias in Credibility Datasets
10:30 - 11:00: Coffee break
11:00 - 11:45: Keynote 2 - Rocco De Nicola
11:45 - 12:30: Session 2
Wee Yong Lim, Mong Li Lee and Wynne Hsu. End-to-End Time-Sensitive Fact Check
Olga Papadopoulou, Dimitrios Giomelakis, Lazaros Apostolidis, Symeon Papadopoulos and Yiannis Kompatsiaris. Context Aggregation and Analysis: A Tool for User-Generated Video Verification
12:30 - 13:30: Lunch Break
13:30 - 14:10: Session 3
Ahmet Aker, Marius Hamacher, Anne Smets, Alicia Nti, Hauke Gravenkamp, Johannes Erdmann, Sabrina Mayer, Julia Serong, Anna Welpinghus and Francesco Marchi. Corpus of news articles annotated with article level subjectivity
Marco De Grandis, Gabriella Pasi and Marco Viviani. Multi-Criteria Decision Making and Supervised Learning for Fake News Detection in Microblogging
14:10 - 15:00: Poster Session
All accepted papers presented as posters
15:00 - 15:30: Coffee break
15:30 - 16:15: Keynote 3 - Preslav Nakov
16:15 - 17:00: Panel and Closing

Organization

Workshop Organizers

  • Guillaume Bouchard, Facebook, UK
  • Guido Caldarelli, IMT Lucca, Italy
  • Vassilis Plachouras, Facebook, UK

Steering Committee

  • Filippo Menczer, Indiana University, US
  • Fabrizio Silvestri, Facebook, UK

Programme Committee

  • Udo Kruschwitz, University of Essex, UK
  • Kashyap Popat, Max Planck Institute for Informatics, Germany
  • Kyumin Lee, Worcester Polytechnic Institute, US
  • Kai Shu, Arizona State University, US
  • Arkaitz Zubiaga, Queen Mary University of London, UK
  • Julien Leblay, National Institute of Advanced Industrial Science and Technology (AIST), Japan
  • Antonio Scala, Institute for Complex Systems / Italian National Research Council, Italy
  • Marcos Zampieri, University of Wolverhampton, UK
  • Luca Maria Aiello, Nokia Bell Labs, UK
  • Svitlana Volkova, Pacific Northwest National Laboratory, US
  • Ioana Manolescu, INRIA Saclay - Île-de-France and École Polytechnique, France
  • Joemon Jose, University of Glasgow, UK
  • Symeon Papadopoulos, Information Technologies Institute, Greece
  • Huan Liu, Arizona State University, US
  • Maria Liakata, University of Warwick, UK
  • Sumithra Velupillai, KTH Royal Institute of Technology, Sweden
  • Andreas Vlachos, University of Cambridge, UK
  • Preslav Nakov, Qatar Computing Research Institute, HBKU, Qatar
  • Aristides Gionis, Aalto University, Finland

Venue

ROME 2019 is colocated with ACM SIGIR 2019. The conference will be held at the Cité des Sciences, in the north-east of Paris. The Cité des Sciences sits within the Parc de la Villette, which is home to exhibitions and shows.