ENQUIRE PROJECT DETAILS BY GENERAL PUBLIC

Project Details
Funding Scheme : General Research Fund
Project Number : 12602420
Project Title(English) : Why fact-checking fails? Factors influencing the effectiveness of corrective messages countering misinformation on social media: A comparison of Hong Kong, the United States, and the Netherlands  
Project Title(Chinese) : 事實核查為何會失效?探究社交媒體上影響事實核查信息有效性的若干因素——香港、美國、與荷蘭的跨國對比研究  
Principal Investigator(English) : Prof Zhang , Xinzhi 
Principal Investigator(Chinese) :  
Department : Department of Media and Communication
Institution : Hong Kong Baptist University
E-mail Address : xzzhang2@cityu.edu.hk 
Tel :  
Co-Investigator(s) :
Prof Chen, Li
Dr Guo, Nancy
Dr Peng, Tai-Quan
Dr Zhu, Qinfeng
Panel : Humanities, Social Sciences
Subject Area : Humanities and Arts
Exercise Year : 2020 / 21
Fund Approved : 408,256
Project Status : Completed
Completion Date : 31-8-2022
Project Objectives :
It examines the extent to which people’s false beliefs can be debunked by corrective messages (i.e., messages designed to clarify a false argument or fabricated information) countering misinformation (messages containing untrue or fabricated information) in the social media context;
It compares the effectiveness of corrective messages across three issue domains, i.e., politics, health, and marketing;
It examines the effects of three message-related factors, namely, (a) the source (a correction from the social media platform as the outcome of algorithmic recommendation versus a correction from a social peer versus a correction from a third-party fact-checking service), (b) the correction type (a correction merely stating the fact versus a correction appealing to coherence), and (c) the social cues of the message (whether the correction has been socially endorsed or not);
It examines how people process and act upon corrective messages when certain political attitudes and emotional status are evoked;
It examines people’s attitudinal and behavioral consequences of belief accuracy;
It helps to design effective corrective messages to debunk misinformation in social media contexts, with empirical evidence from Hong Kong, the U.S., and the Netherlands, i.e., three wired and technologically advanced societies in which public privacy concerns about big data technologies have been heightened by recent controversial events;
It contributes to the literature on the socio-psychological aspect of political communication countering misinformation and offers conceptual generalizations on the creation of an informed public in the era of information disorder.
Abstract as per original application
(English/Chinese):

This project investigates factors that may influence the effectiveness of corrective messages (fact-checking, debunking, and clarification messages) in the social media environment. Social media inevitably exposes people to misinformation, but it can also serve as a channel for correcting it. However, although government agencies, non-governmental organizations, news media, and Internet companies have invested substantial resources in fact-checking and debunking, merely stating the correct facts rarely corrects people’s false beliefs. Both the attributes of a corrective message and the audience’s psychological predispositions may affect how well the correction is accepted. Drawing its theoretical framework from political communication and social psychology, the project focuses on three message-level factors and two audience-level psychological factors that may help people correct misperceptions and improve belief accuracy after exposure to misinformation. At the message level, the study examines (1) the source of the correction, i.e., whether a. an algorithmic correction from the social media platform, b. a correction from a social peer, or c. a correction from a third-party fact-checking organization is more effective; (2) the type of correction, i.e., whether a. a logic-based correction (pointing out internal contradictions in the misinformation) or b. a fact-based correction (stating the correct facts) is more effective; and (3) the online popularity of the corrective message. At the audience level, the study examines two psychological factors: (1) the audience’s political attitudes and (2) the audience’s negative emotions, both of which prior research has found to shape how people process information and whether they accept or reject a message. The project also compares how the correction of misinformation differs across topics (e.g., public affairs, health, and marketing). Taking the benefit-privacy debate over digital technologies as the focal case, it examines how people’s perceptions of digital technologies trigger attitudinal and behavioral responses, such as adoption, resistance, or seeking further information. To improve generalizability, the study is a cross-national comparison of Hong Kong, the United States, and the Netherlands, three technologically advanced societies that have each experienced a series of controversial technology-related events intensifying public concern about data technologies. The research will conduct online experiments with a pretest-posttest design to explore the causal relationships among message attributes, psychological predispositions, belief accuracy, and people’s responses to technology. The findings aim to help the public sector, news organizations, fact-checking teams, and technology companies design effective fact-checking messages to correct misinformation. The project also contributes theoretically to the literature on political communication and social psychology, examining how people cope with misinformation and how an informed society can be built in this era of information disorder.
Realisation of objectives: We reviewed and examined how public communicators (such as journalists, professional fact-checking organizations, and government and public health institutions) communicated public health information to citizens, from fact-checking reports to debunking messages. After this preliminary exploration, we conducted two-wave population-based online survey experiments in Hong Kong, the US, and the Netherlands in February 2022 and June 2022 (total N = 2,769). Participants were local citizens of the three focal societies aged between 18 and 65 years, recruited by Qualtrics, a survey vendor that monitored the project's data collection. Results from these studies addressed all seven research objectives.
Objective 1: It examines the extent to which people’s false beliefs can be debunked by corrective messages. To address this objective, we conducted two-wave population-based online survey experiments in Hong Kong, the US, and the Netherlands. We conceptualized debunking messages as a practice of misinformation intervention, treating effectiveness as a multidimensional construct encompassing attitudinal and behavioral components. Specifically, we examined (1) the improvement in belief accuracy, which is crucial for informed decision-making and compliance with public health policies. Additionally, we investigated (2) individuals’ positive appraisal of the debunker and (3) their increased engagement with the messages, such as sharing and posting news, thereby amplifying the societal impact of debunkers. Furthermore, we explored (4) people’s attitude extremity toward the government’s current COVID-19 policy, defined as holding extreme positions on crucial policy issues and shielding oneself from opposing views. Together, these dimensions gauge the contribution of debunking as a misinformation intervention toward fostering an informed and deliberative public.
The results reveal how corrective messages can effectively combat misinformation in the social media context.
Objective 2: It compares the effectiveness of corrective messages across three issue domains. Because we addressed the effectiveness of corrective messages across three different issue domains, the results hold greater generalizability than most current studies, which focus on a single event. While the current project focused on COVID-19, the focal domains (politics, health, and business) all pertain to the context of the pandemic. This design allowed distinct issues to be compared while minimizing potential variation arising from unique event characteristics. The research investigated the debunking of misinformation about politics (e.g., allegations that the government’s public health department excessively collected citizens’ health data through vaccination), health technology (e.g., concerns that iPhones surreptitiously gather sensitive personal information), and business (e.g., claims that Bill Gates and George Soros illicitly collected global health data through their COVID-19 aid programs). Results indicated that, compared to the business issue, debunking messages related to health technology received higher source appraisal (in the Hong Kong and US samples). Furthermore, debunking statements on government vaccination programs were more effective in reducing people’s false beliefs than debunking messages about business (in the Netherlands sample).
Objective 3: It examines the effects of three message-related factors. To achieve this objective, the current project built on the literature on political and social-psychological communication.
It examines factors influencing the effectiveness of debunking messages at two levels: (1) the debunking message’s features, namely “who” (source-level factors), “says what” (message-level factors), and “with whom” (recipient-level factors on social media), which comprise several communication elements constituting citizens’ information ecosystem; and (2) the audience’s individual-level characteristics, which are conditions that may facilitate or inhibit debunking’s effectiveness. Contrary to most of the hypotheses, the intermediary source, message framing, and social information had limited effects on debunking’s effectiveness.
Objective 4: It examines how people process and act upon corrective messages when certain political attitudes and emotional states are evoked. This objective extended Objective 3 by addressing the conditional effects of source-level and message-level factors. The most important finding was that political attitudes emerged as prominent factors that could impede the effectiveness of the debunking intervention. We focused on political cynicism and conspiracy beliefs, two political predispositions identified in prior studies as capable of altering media effects related to news and public health. Our experiments revealed that causal elaborations within debunking messages (i.e., identifying logical fallacies or providing alternative explanations) were more likely to backfire (i.e., increasing, rather than decreasing, false beliefs) than simple denial (i.e., stating that a claim is wrong without explanation), particularly among political cynics. Individuals with conspiracy beliefs who were exposed to debunking messages from peers developed more extreme views on government COVID-19 prevention policies.
Objective 5: It examines people’s attitudinal and behavioral consequences of belief accuracy.
As mentioned, the present project comprehensively examines several attitudinal and behavioral consequences of the effectiveness of misinformation debunking; belief accuracy is one of these components. Results revealed the conditional effects of debunking messages’ features.
Objective 6: It helps to design effective corrective messages to debunk misinformation in social media contexts, with empirical evidence from Hong Kong, the U.S., and the Netherlands. To address this objective, our study stands as one of the pioneering endeavors to conduct a cross-national comparative investigation spanning North America (the U.S.), Europe (the Netherlands), and Asia (Hong Kong). This coverage spans Western and non-Western contexts, along with diverse political systems. The results indicate that debunking interventions function differently in different societies. For instance, in the U.S., compared to the other two societies, individuals with conspiratorial beliefs rated debunkers more positively when the debunking messages were shared by peers. However, such messages also intensified their extremity in policy attitudes, thereby reducing the space for constructive deliberation with those holding differing viewpoints. Our study also disclosed that debunking messages recommended by platform algorithms unexpectedly triggered a backfire effect among cynics: readers of these debunking posts tended to believe that large corporations or government entities collected excessive personal data and engaged in misconduct without obtaining users’ consent.
Objective 7: It contributes to the literature on the socio-psychological aspect of political communication countering misinformation and offers conceptual generalizations on the creation of an informed public in the era of information disorder. Overall, our study aimed to understand why misinformation interventions by public health institutions on social media might backfire and struggle to reach a wider audience.
Debunking can indeed backfire, reinforcing false beliefs and intensifying policy attitude extremity, particularly among political cynics and conspiracy believers. To achieve the goals of misinformation intervention effectively, we recommend that authorities and power elites focus on cultivating public trust and dispelling conspiracy beliefs among the public. Establishing an informed and transparent communication ecosystem is pivotal for all stakeholders when addressing public health crises.
Summary of objectives addressed:
Objectives | Addressed | Percentage achieved
1. It examines the extent to which people’s false beliefs can be debunked by corrective messages (i.e., messages designed to clarify a false argument or fabricated information) countering misinformation (messages containing untrue or fabricated information) in the social media context | Yes | 100%
2. It compares the effectiveness of corrective messages across three issue domains, i.e., politics, health, and marketing | Yes | 100%
3. It examines the effects of three message-related factors, namely, (a) the source (a correction from the social media platform as the outcome of algorithmic recommendation versus a correction from a social peer versus a correction from a third-party fact-checking service), (b) the correction type (a correction merely stating the fact versus a correction appealing to coherence), and (c) the social cues of the message (whether the correction has been socially endorsed or not) | Yes | 100%
4. It examines how people process and act upon corrective messages when certain political attitudes and emotional states are evoked | Yes | 100%
5. It examines people’s attitudinal and behavioral consequences of belief accuracy | Yes | 100%
6. It helps to design effective corrective messages to debunk misinformation in social media contexts, with empirical evidence from Hong Kong, the U.S., and the Netherlands, i.e., three wired and technologically advanced societies in which public privacy concerns about big data technologies have been heightened by recent controversial events | Yes | 100%
7. It contributes to the literature on the socio-psychological aspect of political communication countering misinformation and offers conceptual generalizations on the creation of an informed public in the era of information disorder | Yes | 100%
Research Outcome
Major findings and research outcome: The project comprises two interconnected sections. The first section involves several observational studies of how public and professional communicators, including journalists, professional fact-checking organizations, and the government (especially its public health institutions), communicated public health information and misinformation interventions via social media during the COVID-19 pandemic. We analyzed approximately 4,000 tweets from health journalists and examined over 23,000 social media posts from professional fact-checking organizations. The results revealed that 65% of posts contained deliberative content (i.e., arguments supported by evidence, such as identifiable citation information, URLs, or multimedia elements), which garnered more engagement. Nearly one-fifth of fact-checking posts incorporated clickbait elements (sensational or emotional elements aimed at attracting readers to click the title). It is crucial to emphasize the significance of transparent, diverse, and detailed messages while balancing outreach and professionalism. The second section involves population-based online survey experiments investigating factors that influence the effectiveness of debunking messages. We explored variables such as source-level factors, message framing, and cues of peer influence, along with individual-level factors such as political cynicism and conspiracy beliefs. Comparative survey experiments were administered in Hong Kong, the Netherlands, and the US (total N = 2,679) in February and July 2022. We found limited effects of the intermediary source, message framing, and social information across all three societies. Notably, political attitudes emerged as a pivotal factor.
Surprisingly, debunking using causal elaborations (such as identifying a logical fallacy or providing alternative explanations) was more likely to backfire (increasing rather than decreasing false beliefs) than debunking using mere denial, particularly among politically cynical individuals. When individuals with conspiracy beliefs received debunking messages from peers, their extreme attitudes toward government COVID-19 policies became more pronounced. The study provides insights into the scenarios in which misinformation interventions thrive or falter during public health crises, contributing to an informed public amid the challenges of information disorder. We presented four papers at the 71st, 72nd, and 73rd Annual Conferences of the International Communication Association (ICA) in 2021, 2022, and 2023. The papers were later published in journals such as Journalism (Q1 in SSCI-Communication), Journalism Practice (SSCI), and Health Promotion International (Q1 in SSCI-Public Health). The paper derived from the survey experiments was accepted by the 106th Annual Conference of the Association for Education in Journalism and Mass Communication (AEJMC), Washington, DC, the US, in August 2023 and is now under review at Political Behavior, a top SSCI journal in political science.
Potential for further development of the research
and the proposed course of action:
1. The project team plans to conduct additional experiments exploring the effectiveness of debunking messages across various modalities, including the impacts of debunking videos and graphic-based interventions such as posters or information visualizations, given that multimedia content is highly popular on social media and exerts more substantial psychological consequences than text messages.
2. Subsequent studies will also address misinformation interventions across diverse communication channels. While the current project primarily targets (semi-)public social media platforms such as Twitter and Facebook, upcoming research will delve into private encrypted channels such as WhatsApp and Telegram.
3. Building upon the current study, future projects aim to analyze the long-term, enduring effects of misinformation interventions on social values and political attitudes. These endeavors will examine alternative misinformation intervention strategies, such as media literacy education, rather than post-hoc debunking. The current experiment indicated that source and message features have limited impact once people are exposed to misinformation. Hence, future interventions should focus on cultivating resilient communities and proactively equipping individuals to counter false information, fostering solid media literacy skills with which to evaluate their information consumption critically.
Layman's Summary of
Completion Report:
This project aims to enhance the effectiveness of misinformation intervention by promoting corrective messages that counter misinformation on social media. We first assessed the current practices of news media, professional fact-checking organizations, and governments in disseminating public health information, identifying opportunities for improvement. We subsequently explored factors affecting the effectiveness of debunking, such as social media intermediaries as the source-level aspect, debunking message framing as the message-level component, and social cues as indicators of peer influence. Additionally, we considered that individual-level political predispositions (political cynicism and conspiracy beliefs) may inhibit the effectiveness of debunking messages. Conducting population-based online survey experiments across Hong Kong, the Netherlands, and the United States, we identified limited effects of the intermediary source, message framing, and social information on debunking’s effectiveness. Instead, political attitudes emerged as a prominent factor. Contrary to our expectations, causal elaborations in debunking messages were more likely to backfire (increasing, rather than decreasing, false beliefs) than denial, especially among political cynics. Individuals with conspiracy beliefs who were exposed to debunking messages from peers developed extreme views on government COVID-19 prevention policies. These findings underscore the necessity of accounting for political attitudes when designing intervention strategies for effective responses during public health crises.
Research Output
Peer-reviewed journal publication(s)
arising directly from this research project :
(* denotes the corresponding author)
Year of Publication | Author(s) | Title and Journal/Book | Accessible from Institution Repository
2022 | Zhang, Xinzhi*; Zhu, Rui | How source-level and message-level factors influence journalists’ social media visibility during a public health crisis | Yes
2023 | Zhu, Rui; Zhang, Xinzhi* | Public sector’s misinformation debunking during the public health campaign: a case of Hong Kong | Yes
2022 | Zhang, Xinzhi*; Fu, Xiaoyi | 专业事实核查机构在社交媒体中的「点击诱饵」及其传播效果研究 [Clickbait used by professional fact-checking organizations on social media and its communication effects] | No
| Zhu, Qinfeng; Peng, Tai-Quan; Zhang, Xinzhi* | How do individual and societal factors shape news authentication? Comparing misinformation resilience across Hong Kong, the Netherlands, and the United States | No
2022 | Zhang, Xinzhi*; Zhu, Rui | Health Journalists’ Social Media Sourcing During the Early Outbreak of the Public Health Emergency | No
| Zhang, Xinzhi*; Peng, Tai-Quan; Zhu, Qinfeng | Political Cynicism and Conspiracy Beliefs Inhibit Misinformation Intervention | No
Recognized international conference(s)
in which paper(s) related to this research
project was/were delivered :
Month/Year/City | Title | Conference Name
Virtual | Health journalists’ social media sourcing during the public health emergency: A network analytics approach | The 71st Annual Conference (virtual) of the International Communication Association (ICA), 27-31 May 2021
Virtual | How source-level and message-level factors influence journalists’ social media visibility during a public health emergency | The 71st Annual Conference (virtual) of the International Communication Association (ICA), 27-31 May 2021
Toronto | Public sector’s misinformation debunking during the public health campaign: A case of Hong Kong’s COVID-19 vaccination programme | The 73rd Annual Conference of the International Communication Association (ICA), Toronto, Canada, 25-29 May 2023
Paris | “FALSE! Read about it here!” Fact-checkers’ social media language feature and its effects on user engagement | The 72nd Annual Conference of the International Communication Association (ICA), Paris, France, 26-30 May 2022
DC | Factors Influencing Debunking Messages’ Effectiveness: Comparing Hong Kong, the Netherlands, and the United States | The 2023 Annual Convention of the Association for Education in Journalism and Mass Communication (AEJMC), 7-10 August 2023, Washington, DC, the United States
Other impact
(e.g. award of patents or prizes,
collaboration with other research institutions,
technology transfer, etc.):
