Interpret the causal meaning of the ‘prior thought’–outcome probability association

Ascertain the causal interpretation of the observed association between AI researchers’ self‑reported amount of prior thought about the social impacts of smarter‑than‑human machines and the probabilities they assign to extremely good and extremely bad outcomes of High‑Level Machine Intelligence; specifically, determine whether greater prior thought improves prediction quality or whether selection effects (e.g., preexisting concern leading to more thought) drive the association.

Background

The survey compared outcome probability assignments across respondents who reported having thought “very little/a little” versus “a lot/a great deal” about the social impacts of smarter‑than‑human machines. Those who reported more prior thought assigned higher probabilities to extremely bad outcomes and lower probabilities to extremely good outcomes.

The authors explicitly note that the causal direction is unclear: increased thinking might improve predictive judgment, or preexisting concern might motivate greater thinking, implying a selection effect. Disentangling these interpretations remains unresolved.

References

It is not clear how to interpret the results of this question. Specifically, while thinking more about a topic presumably improves predictions about it, people who think a lot may do so because they are concerned, so the association could also be due to this selection effect.

Thousands of AI Authors on the Future of AI (2401.02843 - Grace et al., 5 Jan 2024) in Appendix A, Section “How good or bad for humans will HLMI be?”, Subsubsection “Amount of thinking about the issue”