Figure 1: The International System of Units (SI) maintained by the BIPM


Is this the best of all possible worlds? This question has probably occurred to every citizen of a modern nation state at some point in their lives. For many scientists and engineers it probably guides their choice of problems, as they are keenly aware that a modern nation state is governed through the interaction of multiple systems (public transportation, election systems, weather forecasting, water treatment, public energy grids, the internet…) and that improvements to these systems are possible thanks to advances in engineering.

In this context, it is natural to frame this question in scientific terms. If such a formulation appears too contrived to social scientists and anthropologists, it is worth pointing out that mechanisms for social evolution are ultimately constrained by mechanisms for biological evolution, and biology is ultimately constrained by physics. If we push this line of reasoning to its logical conclusion, we may ask whether the laws of physics could have been any different, or whether the observable universe is the best of all possible universes.

I believe that this is nothing less than the ultimate scientific question, as the ultimate goal of science is to understand the universe and its origins. However, even if this is the ultimate question, it is not a particularly good place to start. We are better off starting with a related question: does high-precision science require high-precision engineering? In this article, I show not only that:

\begin{equation} \text{high-precision science exists} \iff \text{high-precision engineering exists} \end{equation}

but that the auxiliary question inevitably leads us to a constructive approach to the ultimate question, via the Planck system.

Does high-precision science require high-precision engineering?

Before venturing into the Planck system of units, let’s consider the International System of Units (SI), with which most scientists are only superficially familiar. For concreteness, not long ago I discussed the definition of the metre with an applied physicist, starting with an elementary question:

It appears that the metre was defined in terms of the speed of light only relatively recently, in 1983, at the 17th CGPM. This definition has been ideal in theory since 1905, because the speed of light \(c\) is constant in every inertial frame, i.e. \(c\) is also the constant of special relativity, which relates time and space. So what engineering challenges stood in the way of making very precise measurements of the speed of light before 1983?

After thinking about this question, I realised that high-precision science and engineering would not be possible without an international system founded on the precise measurements of fundamental constants. Furthermore, given that these constants are expected to remain constant within the framework of Quantum Field Theory, high-precision measurements of these constants are also important tests of QFT. This line of reasoning ultimately led me to the conclusion that:

\begin{equation} \text{high-precision science exists} \iff \text{high-precision engineering exists} \end{equation}

By high-precision, I mean Angstrom-level precision or better, which is required to build GPS systems and electron microscopes. As for why scientific units are ideally defined in terms of fundamental constants, there are two reasons: the constants are dimensionally independent, and their definitions do not change over time.

I would like to add that progress in the domain of metrology is certainly in the interest of trade and commerce, as an order-of-magnitude improvement in our ability to measure the speed of light would imply a factor of roughly \(10^3\) increase in the range of physical products we can manufacture, in terms of their spatial configurations. Such exponential increases in our civilisation’s creative potential translate into much greater innovation and therefore much better prospects for economic growth.

A closer look at the international system of units:

Within the framework of QFT we have a notion of absolute time, and once we have defined this notion precisely (using an atomic clock, for example), we may use the definition of time to define related notions such as mass and length. In this sense, while the electric charge \(e\), the constant of special relativity \(c\), and the Planck constant \(h\) are all equally important physical primitives within QFT, these terms are defined in a hierarchical manner by the metrologists responsible for updating the international system. The main arguments for this compositional structure are elegance as well as practicality from an engineering perspective.

In this context, we may introduce the four SI units that play a fundamental role in QFT using the mises en pratique of the Bureau International des Poids et Mesures (BIPM):

Defining the second (s):

The second is defined to be equal to the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the unperturbed ground state of the Caesium-133 atom.

This definition may be understood as the unit of proper time of special relativity.

Defining the metre (l):

The metre is the length of the path travelled by light in a vacuum during a time interval of \(\frac{1}{299{,}792{,}458}\) of a second, where the second is defined by an exact value of the hyperfine transition frequency \(\Delta \nu_{Cs}\) of the Caesium-133 atom.

Here we are implicitly using the fact that the speed of light \(c\) is also the constant of special relativity. Concerning its biological importance, it is worth noting that if the speed of light were not constant and varied in a complex manner, it is unlikely that the mammalian visual system would have developed.
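To make the chain of definitions concrete, here is a minimal sketch in Python using only the two exactly defined constants above; the derived quantities are simple arithmetic, not new physics:

```python
# Exact defined values in the SI (these carry no measurement
# uncertainty: they are definitions, not measurements).
DELTA_NU_CS = 9_192_631_770   # Hz, Cs-133 hyperfine transition frequency
C = 299_792_458               # m/s, speed of light in vacuum

# One second is 9,192,631,770 periods of the caesium radiation,
# so a single period lasts about 0.1 nanoseconds:
cs_period = 1 / DELTA_NU_CS   # seconds

# One metre is the path light covers in 1/299,792,458 s; equivalently,
# light covers about 3.26 cm during a single caesium period:
light_per_period = C / DELTA_NU_CS   # metres
print(cs_period, light_per_period)
```

The point of the sketch is that once the second is realised by an atomic clock, the metre follows for free from the exact value of \(c\); no independent length artefact is required.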

Defining the kilogram (m):

For the kilogram, we may use the mass-energy relationship, discovered by Einstein for electromagnetic radiation:

\begin{equation} E = h \nu = mc^2 \end{equation}

where the kilogram may be defined by taking the fixed numerical value of the Planck constant \(h\) to be exactly \(6.62607015 \cdot 10^{-34}\) when expressed in the unit \(J \cdot s\), which is equal to:

\begin{equation} \text{kg} \cdot m^2 \cdot s^{-1} \end{equation}

where the metre and second are defined in terms of \(c\) and \(\Delta \nu_{Cs}\) respectively.
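As a sketch of the mass-energy bookkeeping behind this definition, we can compute the mass-equivalent of a single photon from the exact SI constants. The choice of the caesium transition frequency here is purely illustrative:

```python
H = 6.626_070_15e-34     # J·s, exact Planck constant
C = 299_792_458          # m/s, exact speed of light
NU_CS = 9_192_631_770    # Hz, exact Cs-133 hyperfine frequency

# E = h·nu = m·c²  =>  mass-equivalent of one caesium-transition photon:
m_photon = H * NU_CS / C**2   # kg
print(m_photon)
```

The result is on the order of \(10^{-41}\) kg, which is why practical realisations such as the Kibble balance compare macroscopic masses against electrical power rather than counting photons.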

Defining the ampere (A):

The ampere, A, is defined by fixing the magnitude of the charge of the electron, whose value in Coulombs is exact:

\begin{equation} 1 e = 1.602176634 \cdot 10^{-19} C \end{equation}

where we have:

\begin{equation} 1 A = 1 C \cdot s^{-1} \end{equation}

and this definition may be realised using a device for single-electron transport.

Digression on the fine-structure constant:

Within the context of QFT, the importance of these four units comes together in the fine-structure constant, which has recently been determined with an accuracy of 81 parts per trillion [1]:

\begin{equation} \alpha = \frac{e^2}{4 \pi \epsilon_0 \hbar c} \approx 7.2973525628 \cdot 10^{-3} \end{equation}

\begin{equation} \alpha^{-1} = 137.035999206(11) \end{equation}
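Since \(\alpha\) is dimensionless, anyone can recompute it from tabulated constants; here is a sketch using CODATA 2018 values (\(e\) and \(c\) are exact in the SI, while \(\hbar\) is exact and \(\epsilon_0\) is quoted to limited precision):

```python
import math

E_CHARGE = 1.602_176_634e-19     # C (exact)
C = 299_792_458                  # m/s (exact)
HBAR = 1.054_571_817e-34         # J·s
EPSILON_0 = 8.854_187_8128e-12   # F/m (measured)

# Fine-structure constant: alpha = e² / (4·pi·eps0·hbar·c)
alpha = E_CHARGE**2 / (4 * math.pi * EPSILON_0 * HBAR * C)
print(alpha, 1 / alpha)
```

The inverse comes out near 137.036, consistent with the value quoted above to within the precision of the inputs.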

The experimental importance of a unit for time is implicit in the expectation that \(\alpha\) is constant across space-time. More importantly, given the dependence of chemistry on the value of \(\alpha\), a natural question arises: how sensitive is DNA structure (and therefore evolutionary game theory) to variations in this dimensionless constant?

As for whether the value of \(\alpha\) is a coincidence, it is plausible that it is due to cosmological natural selection [2]. I share these questions in order to demonstrate that these units are crucial both as international standards for trade and commerce and as fundamental pillars of human knowledge.

That said, both of these considerations are beyond the scope of this article.

Defining temperature in terms of Boltzmann’s constant:

Finally, I would like to make a few remarks on the definition of the kelvin in terms of the Boltzmann constant, as it appears to have little to do with QFT. The kelvin is defined in terms of the Boltzmann constant and the units for time, length and mass as follows:

\begin{equation} k_B = 1.380649 \cdot 10^{-23} J \cdot K^{-1} = 1.380649 \cdot 10^{-23} \text{kg} \cdot m^2 \cdot s^{-2} \cdot K^{-1} \end{equation}

where the kilogram, metre and second are defined in terms of \(h\), \(c\) and \(\Delta \nu_{Cs}\).

One approach to measuring \(k_B\) involves the principle of primary acoustic gas thermometry which exploits the relationship between the speed of sound \(u\) in an ideal gas in the limit of zero frequency and the thermodynamic temperature of the gas:

\begin{equation} u^2 = \frac{\gamma k_B T}{m} \end{equation}

where \(k_B\) is the Boltzmann constant, \(m\) is the average molecular mass of the gas, and \(\gamma\) is the ratio of the heat capacity of the gas at constant pressure to its heat capacity at constant volume. For ideal monatomic gases, \(\gamma = 5/3\).
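The relation above is easy to evaluate numerically. In the sketch below, the helium-4 atomic mass is an approximate value I supply for illustration, and 273.16 K (the former triple point of water) is chosen simply as a familiar reference temperature:

```python
import math

K_B = 1.380_649e-23       # J/K, exact Boltzmann constant
GAMMA_MONATOMIC = 5 / 3   # heat-capacity ratio for an ideal monatomic gas
M_HELIUM4 = 6.6465e-27    # kg, approximate mass of one helium-4 atom

def speed_of_sound(temperature_k: float, m: float = M_HELIUM4,
                   gamma: float = GAMMA_MONATOMIC) -> float:
    """Zero-frequency speed of sound in an ideal gas: u = sqrt(gamma·k_B·T / m)."""
    return math.sqrt(gamma * K_B * temperature_k / m)

print(speed_of_sound(273.16))   # roughly 970 m/s in helium-4
```

In practice the logic runs the other way: acoustic thermometers measure \(u\) very precisely in a resonant cavity and invert this relation to obtain the temperature (or, before 2019, to determine \(k_B\)).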

As for the practical importance of \(k_B\) to QFT, one merely has to survey the non-trivial correspondences between QFT and condensed matter physics. In particular, one may develop this perspective by analysing ultra-cold Bose-Einstein condensates and their implications for high-temperature superconductivity as well as the teleportation of macroscopic objects.

On the complexity of measuring the fundamental constants:

With advances in engineering, metrologists may devise better methods to quantify the fundamental constants of nature. This simultaneously helps verify the range of validity of Quantum Field Theory and advance the precision of engineering methodology.

In particular, we have come a long way from using the Fizeau apparatus (1849) to determine the speed of light to the modern standard of determining the constant of special relativity \(c\) using optical interferometry.

Likewise, at present the gold standard for determining the electric charge \(e\) uses the Quantum Hall effect, where (if I may quote NIST):

…a current of ultra-cold electrons, confined so that it is, in effect, a two-dimensional sheet, is exposed to a magnetic field perpendicular to the sheet, and the electrical resistance across the width of the current stream develops in exactly quantized steps.

The magnitude of these steps is given by:

\begin{equation} R_K = \frac{h}{e^2} \end{equation}

which can be determined with an uncertainty of 2 parts in \(10^{10}\). However, this is not the end of the line: advances in Single Electron Transport are expected to redefine the realisation of the electric charge in the near future.
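Because \(h\) and \(e\) are now exact SI constants, the size of these quantized resistance steps (the von Klitzing constant) can be computed directly:

```python
H = 6.626_070_15e-34           # J·s, exact Planck constant
E_CHARGE = 1.602_176_634e-19   # C, exact elementary charge

# von Klitzing constant: the size of the quantized Hall resistance steps.
r_k = H / E_CHARGE**2   # ohms
print(r_k)              # approximately 25812.807 ohms
```

This is why quantum Hall devices serve as primary resistance standards: the step size depends only on fundamental constants, not on the material or geometry of the device.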

The possibility of Planck-scale engineering and its implications:

Given that the fundamental constants reflect the epistemic limits of the Standard Model, it is inevitable that an advanced civilisation would ultimately develop a system of units founded directly upon the fundamental constants themselves. By an incredible feat of insight, this notion occurred to Planck himself in 1899, when he realised that:

…it is possible to set up units for length, mass, time and temperature, which are independent of special bodies or substances, necessarily retaining their meaning for all times and for all civilisations, including extraterrestrial and non-human ones, which can be called ‘natural units of measure’.

In particular, Planck realised that \(h\), \(c\), \(G\) and \(k_B\) may be used to create dimensionally independent Planck units:

\begin{equation} \text{Planck length}(l_P) = \sqrt{\frac{hG}{c^3}} \approx 10^{-35} m \end{equation}

\begin{equation} \text{Planck mass}(m_P) = \sqrt{\frac{hc}{G}} \approx 10^{-8} \text{kg} \end{equation}

\begin{equation} \text{Planck time}(t_P) = \sqrt{\frac{hG}{c^5}} \approx 10^{-43} s \end{equation}

\begin{equation} \text{Planck temperature}(T_P) = \sqrt{\frac{hc}{G \cdot k_B^2}} \approx 10^{32} K \end{equation}

The Planck scale is precisely where the Standard Model breaks down. For distances smaller than the Planck length, the notion of distance breaks down due to the uncertainty principle. Furthermore, at the Planck energy:

\begin{equation} m_P \cdot c^2 \approx 10^{19} \text{ GeV} \end{equation}

Quantum Field Theory is expected to be invalid, as quantum gravitational effects become important. In fact, Planck-scale engineering would imply that we had developed a much better understanding of the origins of the early universe, and potentially the technology to create new universes capable of stellar formation and the emergence of life.


At this point in human history, the possibility of engineering new universes appears to be beyond what the human mind can fathom. But at other points in human history the same could have been said about the atomic bomb and the discovery of DNA. What appears to be physically impossible remains so only until we develop the technological expertise to engineer fecund universes.

However, I believe that the much bigger question is whether we may acquire the collective wisdom necessary to decide whether or not we should create new universes, as the moral burden of deciding the fate of a trillion souls would weigh heavily on our conscience. Perhaps once we have acquired such wisdom we shall look back upon the original question with much greater humility. If I may quote T. S. Eliot:

We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.

Note: For software engineers or machine learning researchers out there who are not sure how to wrap their minds around this question, you may consider the challenges involved with building the ‘perfect’ physics simulation software or developing the ‘perfect’ programming language. It is not easy to simultaneously satisfy conflicting requirements.


  1. Léo Morel, Zhibin Yao, Pierre Cladé & Saïda Guellati-Khélifa. Determination of the fine-structure constant with an accuracy of 81 parts per trillion. Nature. 2020.

  2. Jeffrey M. Shainline. Does cosmological natural selection select for technology? Institute of Physics. 2020.

  3. Michael R. Moldover et al. Acoustic Gas Thermometry. NIST. 2014.

  4. G. E. Volovik. The Universe in a Helium Droplet. Oxford University Press. 2003.

  5. Mise en pratique for the definition of the metre in the SI. BIPM. 2019.

  6. Andrei Linde. The Hard Art of Universe Creation. arXiv. 1991.

  7. Richard H. Parker, Chenghui Yu, Weicheng Zhong, Brian Estey & Holger Müller. Measurement of the fine-structure constant as a test of the Standard Model. Science. 2018.

  8. Lee Smolin. The status of cosmological natural selection. arXiv. 2006.

  9. R. J. Adler. Six easy roads to the Planck scale. arXiv. 2010.

  10. Definition of the Metre. The 17th Conférence Générale des Poids et Mesures. 1983.

  11. Practical realizations of the definitions of some important units. BIPM. 2020.