Reducing Online Misinformation Exposure
Colocated with SIGIR 2019
The spread of misinformation online undermines people's trust in online information. There is global awareness that online misinformation can have repercussions for society. For example, false news can polarize public discourse ahead of important events, such as elections. In the health domain, low-quality and misleading information can have an adverse impact on the health of individuals, who may adopt remedies not supported by scientific evidence.
To reduce people's exposure to misinformation online, fact-checkers manually verify the veracity of claims made in content shared online. However, fact-checking is a slow process requiring significant manual and intellectual effort to find trustworthy and reliable information. A fact-checker may have to search for evidence from trustworthy sources and interpret the available information in order to reach a conclusion.
Improving the efficiency of fact-checking by providing tools that automate parts of the process, and defining other processes for validating the veracity of claims made on social media, are challenging problems with real impact on society that require an interdisciplinary approach. The International Workshop on Reducing Online Misinformation Exposure (ROME) will provide a forum for researchers to discuss these problems and to define new directions for work on automating fact-checking, reducing misinformation online, and making social media more resilient to the spread of false news.
We invite submissions of research papers on computational fact-checking, false news detection, and the analysis of misinformation spread on social media. Topics of interest include, but are not limited to:
Submission deadline: Wednesday, May 15, 2019
Acceptance notification: Friday, May 31, 2019
Workshop day: Thursday, July 25, 2019
All submissions will undergo double-blind peer review by the programme committee and will be judged on their relevance to the workshop and their potential to generate discussion. Submissions must present original research contributions not concurrently submitted elsewhere (pre-prints submitted to arXiv are eligible).
Each accepted paper will be presented in one of the workshop sessions and will also be allocated a slot in a poster session. At least one author of each accepted paper must register for the workshop and present the paper in person. Authors of accepted papers will be invited to submit extended versions to a journal special issue.
All questions about the workshop and submissions should be emailed to: rome2019workshop[at]easychair.org
Title: Technological Approaches to Online Misinformation: Major Challenges Ahead
Abstract: The currently ubiquitous online mis- and disinformation poses serious threats to society, democracy, and business. This talk first defines the technological, legal, societal, and ethical dimensions of this phenomenon. It also presents current understanding of why people believe false narratives, what motivates their sharing, and how they might impact offline behaviour (e.g. voting). The talk then summarises state-of-the-art technological approaches to fighting online misinformation. It follows the AMI conceptual framework (Agent, Message, Interpreter), which considers the origin(s) and the impact of online disinformation alongside the veracity of individual messages. In conclusion, major outstanding socio-technical and ethical challenges are discussed.
Title: Can We Spot the "Fake News" Before They Were Even Written?
Abstract: Given the recent proliferation of disinformation online, there has been also growing research interest in automatically debunking rumors, false claims, and "fake news". A number of fact-checking initiatives have been launched so far, both manual and automatic, but the whole enterprise remains in a state of crisis: by the time a claim is finally fact-checked, it could have reached millions of users, and the harm caused could hardly be undone. An arguably more promising direction is to focus on fact-checking entire news outlets, which can be done in advance. Then, we could fact-check the news before they were even written: by checking how trustworthy the outlets that published them are. We will show how we do this in the Tanbih news aggregator, which makes readers aware of what they are reading. In particular, we develop media profiles that show the general factuality of reporting, the degree of propagandistic content, hyper-partisanship, leading political ideology, general frame of reporting, and stance with respect to various claims and topics.
Title: Towards the automatic detection of credulous users on social networks
Abstract: Recent studies show how online social bots (automated, often malicious accounts that populate social networks and mimic genuine users) are able to amplify the dissemination of (fake) information by orders of magnitude. To mitigate the phenomenon, academics and social media administrators have been studying bot detection techniques for years. In this talk, we tackle the issue from another perspective: using Twitter as a benchmark, we design and develop a supervised classification approach to automatically recognize Twitter accounts that belong to humans but have a high percentage of bots among their friends. We deem those users 'credulous'. We rely on an existing bot detector and a dataset of genuine users, extended with additional information about their friends. First, the bot detector is run over those friends to discriminate which of them are bots. Then, we rank the genuine users by combining different metrics, including the ratio of bots to humans in their list of friends. The ground truth of credulous users derives from this ranking. Afterwards, we design and develop the credulous-user classifier without exploiting features of the (costly to collect) list of friends, relying only on a low-cost feature engineering and extraction phase. On our ground truth, we achieve an accuracy ranging from 88.70% to 93.27% (depending on the learning algorithm), a positive and encouraging result. We argue that an automatic, low-cost detection of 'credulous' users, and the consequent investigation of their behavioural patterns, is beneficial for limiting the circulation of polarised and/or fake data on social networks, as well as an alternative means of detecting social bots.
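The ranking step that produces the ground truth can be sketched as follows. This is only an illustrative sketch: the names (`User`, `bot_ratio`, `label_credulous`), the use of the bot-to-friends ratio as the sole metric, and the top-fraction cutoff are assumptions for exposition, not details taken from the talk, which combines several metrics.

```python
# Illustrative sketch of ranking users by the fraction of bots among
# their friends and labelling the top-ranked users as 'credulous'.
# All names and the top_fraction threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    friends: list = field(default_factory=list)      # friend account ids
    bot_friends: list = field(default_factory=list)  # friends flagged by an external bot detector

def bot_ratio(user: User) -> float:
    """Fraction of a user's friends that the bot detector flagged as bots."""
    if not user.friends:
        return 0.0
    return len(user.bot_friends) / len(user.friends)

def label_credulous(users, top_fraction=0.5):
    """Rank users by bot ratio; mark the top fraction as 'credulous'."""
    ranked = sorted(users, key=bot_ratio, reverse=True)
    cutoff = int(len(ranked) * top_fraction)
    return {u.name: (i < cutoff) for i, u in enumerate(ranked)}

users = [
    User("alice", friends=list(range(10)), bot_friends=list(range(7))),  # ratio 0.7
    User("bob",   friends=list(range(10)), bot_friends=list(range(1))),  # ratio 0.1
    User("carol", friends=list(range(10)), bot_friends=list(range(4))),  # ratio 0.4
    User("dave",  friends=list(range(10)), bot_friends=[]),              # ratio 0.0
]
labels = label_credulous(users, top_fraction=0.5)
print(labels)  # alice and carol rank highest -> labelled credulous
```

The labels produced this way would then serve as training targets for the classifier, which at prediction time uses only low-cost features and never needs the expensive friend lists.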
ROME 2019 is colocated with ACM SIGIR 2019. The conference will be held at the Cité des Sciences, located in the north-east of Paris. The Cité des Sciences sits within the Parc de la Villette, which is home to exhibitions and shows.