Interpret the causal meaning of the ‘prior thought’–outcome probability association
Ascertain the causal interpretation of the observed association between AI researchers’ self‑reported amount of prior thought about the social impacts of smarter‑than‑human machines and the probabilities they assign to extremely good and extremely bad outcomes of High‑Level Machine Intelligence. Specifically, determine whether greater prior thought improves prediction quality, or whether selection effects (e.g., preexisting concern leading people to think more) drive the association.
It is not clear how to interpret the results for this question. While thinking more about a topic presumably improves predictions about it, people who think a lot about extreme outcomes may do so because they are already concerned about them, so the association could instead be due to this selection effect.
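The confounding worry can be made concrete with a small simulation. The sketch below (entirely hypothetical: the variable names, effect sizes, and functional forms are illustrative assumptions, not estimates from the survey data) generates a population in which a latent "concern" drives both how much a respondent thinks about the topic and the probability they assign to an extremely bad outcome, while prior thought has no independent effect. Thought and assigned probability are nonetheless correlated, and the correlation vanishes once concern is conditioned on:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent concern: drives both prior thought and assigned probabilities.
concern = rng.normal(size=n)

# Prior thought is caused by concern plus individual noise; by
# construction it has NO independent effect on assigned probabilities.
thought = concern + rng.normal(scale=1.0, size=n)

# Probability assigned to an extremely bad outcome: a logistic
# transform of concern plus noise, so it stays in [0, 1].
p_bad = 1 / (1 + np.exp(-(concern + rng.normal(scale=0.5, size=n))))

# Marginal association: thought and p_bad are clearly correlated...
r_marginal = np.corrcoef(thought, p_bad)[0, 1]

# ...but residualizing both variables against concern (a crude way of
# conditioning on the confounder) removes the association.
thought_resid = thought - concern
p_bad_resid = p_bad - 1 / (1 + np.exp(-concern))
r_partial = np.corrcoef(thought_resid, p_bad_resid)[0, 1]

print(f"marginal correlation: {r_marginal:.2f}")
print(f"correlation after conditioning on concern: {r_partial:.2f}")
```

Under these assumptions the marginal correlation is substantial while the conditional one is near zero, illustrating why the raw thought–probability association cannot, on its own, distinguish "more thought improves predictions" from "concern causes both."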