Finely tuned models sacrifice explanatory depth (arXiv:1910.13608v2)
Abstract: It is commonly argued that an undesirable feature of a theoretical or phenomenological model is that salient observables are sensitive to the values of parameters in the model. But in what sense is it undesirable to have such 'fine-tuning' of observables (and hence of the underlying model)? In this paper, we argue that fine-tuning can be interpreted as a shortcoming of the explanatory capacity of the model: in particular, it signals a lack of explanatory depth. In support of this argument, we develop a schema -- for (a certain class of) models that arise broadly in physical settings -- that quantitatively relates fine-tuning of observables to a lack of depth of explanations based on these models. We apply our schema in two different settings, in each of which we compare the depth of two competing explanations. The first setting involves explanations for the Euclidean nature of spatial slices of the universe today: in particular, we compare an explanation provided by the big-bang model of the early 1970s (which includes no inflationary period) with an explanation provided by a general model of cosmic inflation. The second setting has a more phenomenological character: the goal is to infer, from a limited sequence of data points and using maximum entropy techniques, the underlying probability distribution from which the data are drawn. In both settings, our analysis favors the model that intuitively provides the deeper explanation of the observable(s) of interest. We thus provide an account that relates two 'theoretical virtues' of models used broadly in physical settings -- namely, a lack of fine-tuning and explanatory depth -- and argue that finely tuned models sacrifice explanatory depth.
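To make the first setting concrete, here is a minimal toy calculation of the flatness problem; it is not the paper's schema, and the times and observational bound are illustrative round numbers. Assuming a purely radiation-dominated expansion with a(t) ~ t^(1/2), the deviation |Omega - 1| = |k|/(aH)^2 grows linearly in cosmic time, so near-flatness today pins the initial deviation to an extraordinarily narrow window:

```python
# Toy flatness-problem estimate (illustrative assumptions throughout):
# in a radiation-dominated FRW universe, a(t) ~ t**0.5, so the deviation
# |Omega - 1| = |k| / (a*H)**2 grows linearly in t. Near-flatness today
# therefore demands an extremely tuned initial value.

t_planck = 1e-43   # s, assumed early reference time (Planck time)
t_today = 4e17     # s, ~13.8 Gyr; matter/Lambda eras ignored for simplicity

omega_dev_today = 0.01  # assumed observational bound on |Omega_0 - 1|

# Linear growth, run backwards in time, gives the required initial deviation:
omega_dev_planck = omega_dev_today * (t_planck / t_today)
print(f"required |Omega - 1| at t_Planck: {omega_dev_planck:.1e}")

# Sensitivity of today's observable to the initial condition, i.e. the
# amplification factor that quantifies the fine-tuning:
print(f"amplification of initial deviation: {t_today / t_planck:.1e}")
```

This prints a required initial deviation of order 10^-63 under these toy assumptions. Inflation, by contrast, drives |Omega - 1| toward zero during the inflationary phase, so today's flatness becomes insensitive to the initial value; on the paper's account, this insensitivity is what makes the inflationary explanation the deeper one.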
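The second setting can likewise be illustrated with a standard maximum-entropy exercise (a hypothetical Jaynes-style die example, not taken from the paper): given only a sample mean on a finite support, the maximum-entropy distribution is the exponential family p_i ~ exp(-lambda * x_i), with lambda fixed by matching the constraint. The support and target mean below are assumed for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical maximum-entropy inference: find the distribution on
# {1,...,6} of maximum entropy whose mean matches an observed sample
# mean. Under a single mean constraint, the solution has the form
# p_i ~ exp(-lam * x_i) for a Lagrange multiplier lam.

x = np.arange(1, 7)   # assumed support (faces of a die)
target_mean = 4.5     # assumed sample mean from limited data

def model_mean(lam):
    """Mean of the exponential-family distribution at multiplier lam."""
    w = np.exp(-lam * x)
    return (w / w.sum()) @ x

# Solve model_mean(lam) = target_mean by bracketed root-finding.
lam = brentq(lambda l: model_mean(l) - target_mean, -10.0, 10.0)

p = np.exp(-lam * x)
p /= p.sum()
print(f"lambda = {lam:.4f}")
print("max-entropy distribution:", np.round(p, 4))
print(f"check: mean = {p @ x:.4f}")
```

In this framing, the fine-tuning question becomes how sensitively the inferred distribution depends on the constraint values extracted from limited data, which is the kind of sensitivity the paper's schema is designed to grade.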