Assembly Theory (AT) and its central measure, the assembly index (Ai), represent an invaluable opportunity to address some of the most persistent and widespread conflations and misconceptions about computability and complexity theory in science. The defence of AT advanced by its proponents embodies several common misconceptions that compound one another: the belief that Turing machines impose artefactual constraints, the mischaracterisation of Kolmogorov complexity as inapplicable, and the claim that Ai is distinct from Shannon entropy and compression algorithms. Here we show that the new arguments advanced by the AT group in their defence are based on misleading and incomplete experiments which, when completed, reveal the extent of the correlation and overlap between Ai and popular statistical compression algorithms, in agreement with the mathematical equivalence to Shannon entropy previously proved and reported, which remains undisputed. Through theoretical and empirical analysis, we show that Ai does not offer a path towards fundamentally novel causal or informational insights beyond what existing statistical frameworks already provide. Rather than offering a unifying theory of life, as the AT authors suggest, we argue that AT obfuscates the field and provides a cautionary example of how the accumulation of conceptual mistakes can lead to a misleading theory. Finally, we show that Ai is a particular, limited case of an existing complexity measure based on algorithmic (Kolmogorov) complexity, one that decomposes an object into its causal building blocks and that goes beyond, and outperforms, AT.
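To make the kind of comparison referred to above concrete, the following is a minimal, purely illustrative Python sketch, not the authors' experiments or code: it contrasts a greedy, LZ78-style block-reuse count, used here only as a hypothetical stand-in for an assembly-index-like measure on binary strings, with the length of the same strings after zlib compression, a standard statistical compressor. The function names `greedy_reuse_count` and `compressed_length`, the bias parameter, and the string length are assumptions made for the example.

```python
# Illustrative sketch only (not the paper's code): compare a greedy,
# LZ78-style reuse count -- a rough, hypothetical proxy for an
# assembly-index-like measure on strings -- against zlib-compressed
# length, across binary strings of varying repetitiveness.
import random
import zlib


def greedy_reuse_count(s: str) -> int:
    """Count the distinct blocks produced by an LZ78-style greedy parse.

    Each new block extends a previously seen block by one symbol,
    loosely mirroring the 'reuse of previously assembled parts' idea.
    """
    seen, block, count = set(), "", 0
    for ch in s:
        block += ch
        if block not in seen:
            seen.add(block)
            count += 1
            block = ""
    return count + (1 if block else 0)


def compressed_length(s: str) -> int:
    """Length in bytes of the zlib-compressed string (a statistical compressor)."""
    return len(zlib.compress(s.encode()))


random.seed(0)
for p in (0.05, 0.1, 0.2, 0.3, 0.4, 0.5):  # bias controls repetitiveness
    s = "".join("1" if random.random() < p else "0" for _ in range(2000))
    print(f"bias={p:.2f}  greedy blocks={greedy_reuse_count(s):4d}  "
          f"zlib bytes={compressed_length(s):4d}")
```

Under this toy setup both quantities grow together as the strings become less compressible, which is the sort of correlation with statistical compression the text describes; it does not by itself establish the formal equivalence to Shannon entropy, which is argued in the body of the paper.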