Managing open problems
Maybe a script that extracts open problems directly from the book LaTeX would help?
I am still trying to figure out how to track open problems/conjectures/TODOs for our Assumptions of Physics program. In the last version of the book, there were a number of open problems for ensemble spaces, and it seemed clear that those issues should be discussed and left in the book. It is dawning on me that probably all open issues should be in the book, even the ones that are really open, like how to generalize results from Reverse Physics to classical field theories.
What I have been struggling with is understanding what the entry point for a contributor should be. For an open source project it is straightforward: you use the software, you file bugs, you start patching some bugs, you start taking more ownership, and so on. Maybe the pipeline for AoP could work like this: you learn about the project from videos/papers, you read the book, you report typos or minor problems, you work on a small conjecture/TODO, and so on. If that is the idea, it would make sense to have all open problems organized in the text itself, and maybe have some sort of summary page on the website with a reasonably up-to-date list.
Therefore, it may be useful to have a script that, after each push to GitHub, scans the LaTeX of the book, extracts the (properly tagged) open problems, and generates a YAML file that the website then uses to create the pages (much like the other pages are generated). So, if you are familiar with LaTeX, with parsing LaTeX in Python or any other language (regular expressions may well be sufficient), and/or with GitHub Actions, do reach out!
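As a rough sketch of what such a script could look like: the `\openproblem{id}{text}` macro below is purely hypothetical (the real tagging convention would have to come from the book itself), but a regular expression plus a few lines of Python gets surprisingly far:

```python
import re

# Hypothetical convention: open problems tagged in the LaTeX source as
#   \openproblem{short-id}{Description of the problem.}
# The real macro name/structure would need to match the book's conventions.
OPEN_PROBLEM_RE = re.compile(r"\\openproblem\{(?P<id>[^}]*)\}\{(?P<text>[^}]*)\}")

def extract_open_problems(latex: str) -> list[dict]:
    """Return a list of {'id': ..., 'text': ...} dicts, one per tagged problem."""
    return [m.groupdict() for m in OPEN_PROBLEM_RE.finditer(latex)]

def to_yaml(problems: list[dict]) -> str:
    """Emit a minimal YAML list that a static-site generator could consume."""
    lines = []
    for p in problems:
        lines.append(f"- id: {p['id']}")
        lines.append(f"  text: \"{p['text']}\"")
    return "\n".join(lines) + "\n"

sample = r"""
Some chapter text.
\openproblem{field-theory}{Generalize Reverse Physics results to classical field theories.}
More text.
\openproblem{ensemble-spaces}{Characterize the structure of ensemble spaces.}
"""
print(to_yaml(extract_open_problems(sample)))
```

A GitHub Action would then just run this over the `.tex` sources on each push and commit the YAML where the website build picks it up.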


Don't worry, no one else is coming. What you are doing, the foundations of physics, is taboo.
MiKTeX is easier to use, in my opinion. Doing this in six months would require many miracles. I skimmed your book. I am amused that you aim to solve Hilbert's sixth problem: finding the axioms of physics.
You want to go from:
combinatorics -> Galois/group theory -> linear algebra (eigenvalues) -> differential equations.
When solving the eigenvalue problem in linear algebra, it is often expressed through the characteristic polynomial. Galois theory and group theory describe the symmetries of the roots of polynomials. The roots are the possible values a quantum system can take.
Quantum eigenvalues are often the energy or momentum observables because they commute. If you know one value, you can know the other with certainty. There is no uncertainty relationship between the observables, and the equation E = p^2/2m holds in both classical and quantum physics.
The reason we use energy is apparent. There are two ways to multiply vectors—the dot product, which returns a scalar, or the cross product, which returns a vector.
Energy is not a vector; it is found using the dot product. The inner products of quantum mechanics mean that physics is done in the context of energy. This is often the Hamiltonian (kinetic energy plus potential energy) or the Lagrangian (kinetic energy minus potential energy). For the forces, the Euler-Lagrange equation is used, and the action is minimized instead of using a force vector.
This is why the standard model Lagrangian, which contains all the forces except gravity, places heavy emphasis on symmetry and special unitary groups.
About your ensembles.
Notice the grand canonical ensemble uses the exponential, as does the time-dependent wavefunction. In quantum mechanics, hbar is used, like Boltzmann's constant, to calculate energy.
There is a fun way to get an exponential from combinatorics, and then generalize it to entropy.
A lazy professor randomly returns exams to the class to grade them. What is the probability that no student will receive their original exam to grade?
The probability approaches 1/e, which we write as e^(-1). In combinatorics, this is a derangement, a term worth knowing for anyone trying to understand quantum mechanics.
What is the probability that no one gets their exam back to grade after n independent exam reshufflings? The trials are independent, so the probabilities multiply, giving e^(-n). Look familiar?
Extending the idea to atoms and charges, or to information being exchanged, should be possible even for commuting and non-commuting observables. The ratio converges rapidly, and even for small classes the number of derangements is just the nearest integer to n!/e, written [n!/e].
A derangement, symbolized as !n, is a permutation where no element of a set remains in its original position. The probability of a derangement is the number of microstates in which every element ends up in a different position from where it started, divided by the total number of possible arrangements, n!. The derivation is not straightforward, but the answer is almost miraculous: !n/n! ≈ e^(-1).
This decaying exponential is like the one in the Gibbs entropy, the Gaussian distribution, the Boltzmann distribution, and the time-dependent Schrodinger equation of a particle in a box. The proof uses the principle of inclusion and exclusion, which yields the Taylor series for the exponential function.
I consider this miraculous because it converges rapidly: !n/n! is within about 2% of 1/e already for a set of just four elements, and the nearest integer function [n!/e] gives !n exactly for any set size. When events are mutually exclusive and independent, these derangement probabilities multiply together to give e^(-n) for n events.
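The convergence is easy to verify numerically. A small Python sketch, using the standard recurrence !n = n·!(n−1) + (−1)^n for the derangement numbers:

```python
from math import exp, factorial

def derangements(n: int) -> int:
    """Count permutations of n elements with no fixed point (!n),
    via the recurrence !n = n * !(n-1) + (-1)^n, with !0 = 1."""
    d = 1  # !0 = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

# The ratio !n/n! converges to 1/e very quickly:
for n in range(1, 7):
    print(n, derangements(n), derangements(n) / factorial(n))

# The nearest-integer shortcut [n!/e] recovers !n exactly:
for n in range(1, 10):
    assert round(factorial(n) / exp(1)) == derangements(n)
```

For n = 4 the ratio is already 9/24 = 0.375 against 1/e ≈ 0.3679, about 2% off.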
A derangement is a final state in which no object in a set stays in its initial position. A partial derangement of n objects is a permutation with a specified number of fixed points, where a fixed point is an object whose position does not change between the initial and final states. Partial derangements can be enumerated from zero fixed points (the full derangement) up to n fixed points, the static set in which every object stays in its initial position.
Summing the counts of all partial derangements, from no fixed points (the derangement) up to all fixed points, gives the total number of permutations of the set, n!, the total number of possible microstates. Dividing each count by n! gives the probability of each number of fixed points, from 0 to n, which are the macrostates of the set.
Each partial derangement has a binomial coefficient for its weight, because we choose the m fixed points from the n elements of the set and then derange the remaining n − m elements; dividing by n! leaves a factor close to e^(-1)/m!. This gives an expression with the same structure as a wavefunction or partition function.
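The counting claims above can be checked directly: the number of permutations of n objects with exactly m fixed points is C(n, m)·!(n−m), these sum to n! over all m, and the normalized weights approach e^(-1)/m!. A small Python sketch:

```python
from math import comb, exp, factorial

def derangements(n: int) -> int:
    """!n via the recurrence !n = n * !(n-1) + (-1)^n, with !0 = 1."""
    d = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

def partial_derangements(n: int, m: int) -> int:
    """Permutations of n objects with exactly m fixed points:
    choose the m fixed points, then derange the remaining n - m."""
    return comb(n, m) * derangements(n - m)

n = 8
# Summing over all fixed-point counts recovers every permutation: n!
assert sum(partial_derangements(n, m) for m in range(n + 1)) == factorial(n)

# Each weight D(n, m)/n! is close to the Poisson(1) probability e^(-1)/m!
for m in range(4):
    print(m, partial_derangements(n, m) / factorial(n), exp(-1) / factorial(m))
```

The weights e^(-1)/m! are exactly the Poisson distribution with mean 1, which is where the partition-function-like structure comes from.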
Combinatorics is a tricky subject. For instance, whether objects are identical or distinguishable is an essential distinction, exactly as when working with fermions and bosons.
I believe this works for mutually exclusive and independent random variables, in other words, for observables that commute. It may not be possible to use it for dependent random variables and for observables with an uncertainty relationship.
In other words, it will likely require or be easiest to use Markov chains, and those dependent observables will also converge to an expected value. My guess is they converge to the physical constants.
However, your video showing hbar from entropy was a trick I had never seen. I loved it!
Sounds like an interesting problem. Given that the book is in LaTeX and the open problems on the website are written in Markdown, consider using pandoc, which can convert .tex into .md (with some level of accuracy).