Contextual Confidence and Generative AI

Published 2 Nov 2023 in cs.AI (arXiv:2311.01193v2)

Abstract: Generative AI models perturb the foundations of effective human communication. They present new challenges to contextual confidence, disrupting participants' ability to identify the authentic context of a communication and to protect it from reuse and recombination outside its intended context. In this paper, we describe strategies, spanning tools, technologies, and policies, that aim to stabilize communication in the face of these challenges. The strategies we discuss fall into two broad categories. Containment strategies aim to reassert context in environments where it is currently threatened, a reaction to the context-free expectations and norms established by the internet. Mobilization strategies, by contrast, view the rise of generative AI as an opportunity to proactively set new and higher expectations around privacy and authenticity in mediated communication.
