The games we play: critical complexity improves machine learning (2205.08922v1)

Published 18 May 2022 in cs.CY

Abstract: When mathematical modelling is applied to capture a complex system, multiple models are often created that characterize different aspects of that system. Often, a model at one level will produce a prediction which is contradictory at another level, but both models are accepted because they are both useful. Rather than aiming to build a single unified model of a complex system, the modeller acknowledges the infinity of ways of capturing the system of interest, while offering their own specific insight. We refer to this pragmatic applied approach to complex systems -- one which acknowledges that they are incompressible, dynamic, nonlinear, historical, contextual, and value-laden -- as Open Machine Learning (Open ML). In this paper we define Open ML and contrast it with some of the grand narratives of ML of two forms: 1) Closed ML, ML which emphasizes learning with minimal human input (e.g. Google's AlphaZero) and 2) Partially Open ML, ML which is used to parameterize existing models. To achieve this, we use theories of critical complexity to both evaluate these grand narratives and contrast them with the Open ML approach. Specifically, we deconstruct grand ML 'theories' by identifying thirteen 'games' played in the ML community. These games lend false legitimacy to models, contribute to over-promise and hype about the capabilities of artificial intelligence, reduce wider participation in the subject, lead to models that exacerbate inequality and cause discrimination, and ultimately stifle creativity in research. We argue that best practice in ML should be more consistent with critical complexity perspectives than with rationalist, grand narratives.

Citations (5)