Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks (2210.10026v3)
Abstract: Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call "diverse misinformation" the complex relationships between human biases and demographics represented in misinformation. To investigate how users' biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: 1) their classification as misinformation is more objective; 2) we can control the demographics of the personas presented; 3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N=2,016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide "herd correction" where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.
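The abstract describes a mathematical model in which susceptibility to deepfakes depends on the match between a viewer's demographics and the persona's, and in which accurate neighbors can correct misinformed ones ("herd correction"). A minimal sketch of that idea, assuming a simple SIS-style contagion on a random graph — this is an illustrative toy, not the authors' actual model; all parameter values, the `simulate` function, and its defaults are assumptions:

```python
import random

def simulate(n=500, k=6, groups=2, persona=0, beta_match=0.1,
             beta_mismatch=0.3, gamma=0.2, steps=50, seed=1):
    """Toy SIS-style spread of a deepfake with demographic-dependent
    susceptibility and neighbor-driven ("herd") correction.
    Returns the final fraction of misinformed nodes."""
    rng = random.Random(seed)
    group = [rng.randrange(groups) for _ in range(n)]

    # Erdos-Renyi-style neighbor lists with mean degree ~k (hypothetical choice)
    p = k / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)

    # seed a small initial misinformed population
    misinformed = [rng.random() < 0.05 for _ in range(n)]

    for _ in range(steps):
        nxt = misinformed[:]
        for i in range(n):
            if misinformed[i]:
                # herd correction: each accurate neighbor may correct node i
                for j in nbrs[i]:
                    if not misinformed[j] and rng.random() < gamma:
                        nxt[i] = False
                        break
            else:
                # per the survey finding, viewers matching the persona's
                # demographics classify those videos better, so are harder to fool
                beta = beta_match if group[i] == persona else beta_mismatch
                if any(misinformed[j] for j in nbrs[i]) and rng.random() < beta:
                    nxt[i] = True
        misinformed = nxt

    return sum(misinformed) / n

share = simulate()
print(f"final misinformed share: {share:.2f}")
```

Raising `gamma` (correction rate) or mixing the two groups more evenly among a node's neighbors is where the "diverse contacts protect each other" intuition would show up in a fuller version of this sketch.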
Authors: Juniper Lovato, Laurent Hébert-Dufresne, Jonathan St-Onge, Randall Harp, Gabriela Salazar Lopez, Sean P. Rogers, Ijaz Ul Haq, Jeremiah Onaolapo