Reverse math shows why hard problems are hard
I was surprised at how difficult I found math. Now, I was never really great at it; mental logic and calculation I could do fairly well (above average), but foundational knowledge was hard and mathematical theory even harder. Lately, though, I had trouble even with integration and differentiation, and even with translating a problem into a formula. I am far from young anymore, but I was surprised at how shockingly rusty I have become over the last 25-odd years. So I decided to change this in the coming months. I think computers have, in a way, made our brains worse: many problems can be auto-solved (Python with numpy, sympy, etc.), and computers work better than hand-held calculators, but math is surprisingly difficult without one. (Here I also include algorithms, or rather the theory behind algorithms. And of course I have also forgotten a lot of the mathematical notation - somehow programming is a lot easier than higher math.)
Mostly because they have rote-memorised it (and partly because much of the education system is a game to be played, e.g. speaking with professors during office hours can give very strong hints about exams).
I suspect this is a lot like being able to attribute a piece of art to its artist by sight. Strictly speaking, not required. But a lot of great artists can do it.
For real fun, I saw an interview with Magnus Carlsen where someone was quizzing him on famous games. He was able to name the match from just the first 2-3 moves a remarkable number of times.
Years after I graduated, I was browsing comp.lang.java (I think it was) and somebody asked for help developing an applet that could draw a 3-D arrow that could orient itself in any direction. For some reason, that sounded interesting to me, so I started trying to work on it and I realized I needed to go back and re-learn all of that "point-slope" stuff that I'd made a point of learning just enough of to squeak through college.
That sent me down the path of re-learning all the things I now wish I'd put more effort into learning when I was a teenager. I ended up working through my old undergraduate calculus textbook a few times and I understand undergraduate calculus _really_ well now. I was able to coach both of my kids through high school calculus and they both remarked that none of their friends' parents were able to help them beyond algebra.
It makes me wonder how many people are great (or even adequate) at math and how many are just faking it - as interesting as I now find it, math skills aren't actually very practically useful.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD109...
Problem: Every point of the plane is colored red, blue, or yellow. Prove that there exists a rectangle whose four vertices all have the same color.
Solution: Consider a grid of points, 4 rows by 82 columns. There are 3^4 = 81 possible colorings of a column, so by the Pigeonhole Principle, at least two columns have the same color pattern. Also by the Pigeonhole Principle, each column of 4 points must contain at least two points of the same color. Those two points, taken in each of the two matching columns, are the four same-colored vertices of a rectangle. QED.
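A minimal Python sketch of that argument, if you want to watch it run (the grid construction and the helper name find_rectangle are mine, just for illustration):

```python
import random
from itertools import combinations

ROWS, COLS, COLORS = 4, 82, "RBY"

def find_rectangle(grid):
    # grid[r][c] is the color of the point in row r, column c.
    seen = {}  # column color pattern -> index of first column with it
    for c in range(COLS):
        pattern = tuple(grid[r][c] for r in range(ROWS))
        if pattern in seen:
            # Pigeonhole 1: 82 columns but only 3**4 == 81 patterns,
            # so some pattern must repeat.
            c0 = seen[pattern]
            # Pigeonhole 2: 4 points, 3 colors, so some color repeats
            # within the pattern.
            for r0, r1 in combinations(range(ROWS), 2):
                if pattern[r0] == pattern[r1]:
                    return (r0, c0), (r0, c), (r1, c0), (r1, c)
        seen[pattern] = c
    raise AssertionError("unreachable: guaranteed by the pigeonhole principle")

grid = [[random.choice(COLORS) for _ in range(COLS)] for _ in range(ROWS)]
print(find_rectangle(grid))  # four same-colored corners of a rectangle
```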
The Pigeonhole Principle is very neat. It would be hard not to use it for the proof.
Partly, that article argues against proof by contradiction, which does seem to be overused.
This is not uncommon: we can say that "by the fundamental theorem of algebra" two polynomials of degree N that agree on N+1 points are identically equal (their difference has degree at most N but N+1 roots, so it must be the zero polynomial). "By induction" includes Cauchy induction; sometimes by "this and that are the same" we mean "up to isomorphism"; and so on.
The advice he ends on is extremely solid, though:
The moral of the story is that we are much better off with the neutral, general standard procedure: name the unknown(s) so that you can formalize all conditions given or to be established and simplify by formula manipulation.
The math will always math. Also, whether some problem has polynomial complexity or exponential complexity depends on what you consider the problem space: b^c is polynomial if b is the parameter that grows and exponential if c is.
The complexity of the traveling salesman problem likewise depends on what you consider the problem space: the number of cities or the number of connections between cities.
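A small Python illustration of that parameterization point (my own example, not from the article): the size of b**c grows very differently depending on which of b or c you let vary.

```python
# The bit length of b**c is roughly c * log2(b), so the value b**c is
# polynomial in b (a degree-c polynomial) but exponential in c.
for b in [2, 4, 8, 16, 32]:
    # varying b: bit length grows only logarithmically in b
    print(f"b={b:2d}, c=10 -> {(b**10).bit_length():3d} bits")
for c in [10, 20, 40, 80]:
    # varying c: bit length grows linearly in c, i.e. the value itself
    # grows exponentially
    print(f"b=2, c={c:2d} -> {(2**c).bit_length():3d} bits")
```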
"Overall Complexity" is a meaningless term without you defining it. I suppose you mean that there is some lower bound limit to the amount of work relative to a model of computation, but this is a trivial statement, because the limit is always at least no work.
We have many problems where we don't know the least upper bound, so even an interesting formulation of your idea is not necessarily true: work need not be conserved at the least upper bound, because reaching the bound may not be possible and epsilon improvement might always be possible.
Finally, algorithmic complexity is the wrong analogy for reverse mathematics anyway.
A problem scenario doesn't have absolute characteristics. It's relative to your way of looking at it, and your definition of a problem.
EDIT: I think this line is the most telling:
But he cautioned that the reverse mathematics approach may be most useful for revealing new connections between theorems that researchers have already proved. "It doesn’t tell us much, as far as we can say, about the complexity of statements which we do not know how to prove."
So, at this point, it helps us understand more about problems we already understand a little about, but nothing yet about new problems.
I don't think this caution makes any sense!
The more we learn about theorem/axiom equivalences (or more generally, the lattice of such connections) between existing proofs, the more insights we will gain into how unproved conjectures may relate to existing proofs or each other.
Only in the strictest possible sense does showing X tell us nothing about showing Y: a proof or identification of X is not itself a proof or identification of the related thing Y. But that is an obviously pedantic statement.
Not to critique the person being quoted; I feel like an offhand remark may have been unduly elevated by being quoted, thanks to a "two-sides-of-a-story" writer's dramatization reflex.
If you're reading the paper, I recommend Section 1.3, where it goes over the examples of Binary Search and Dijkstra. The idea from the preface that "natural numbers are encoded by binary strings just as other data structures" is prevalent in the construction of their proofs. As a computer scientist, I find this great because I intuitively recognize that both algorithms and memory consist of only 1's and 0's underneath all the abstractions and code.
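For reference, here is the textbook binary search that section analyzes, in plain Python; this is just the standard algorithm as commonly written, not the paper's formalization over binary strings:

```python
def binary_search(xs, target):
    # Standard halving search over a sorted list; returns an index i
    # with xs[i] == target, or -1 if target is absent.
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 8, 13], 8))   # -> 3
print(binary_search([1, 3, 5, 8, 13], 4))   # -> -1
```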
This work ties together practical applications and complexity theory to create a new bridge in mathematics that I'm excited about exploring. I'm especially interested in reverse mathematics applied to space complexity.
Here are some additional resources I found on this: a talk by Jiatu Li on joint work with Lijie Chen and Igor Carboni Oliveira, "Reverse Mathematics of Complexity Lower Bounds": https://www.youtube.com/watch?v=g5EqAgDxxE0
In a lot of ways this feels related to the development of non-Euclidean geometry, so it's kind of surprising it wasn't a hot topic sooner. Maybe it was waiting for a critical mass of computer scientists to come along, or maybe we just needed to run out of other low-hanging fruit so that foundational topics in complexity became more interesting.
Edit: here's a paper on it from 2006, https://link.springer.com/chapter/10.1007/978-3-540-76928-6_...
Given two bit strings of length n, if we compare fewer than n pairs of bits, we cannot tell whether the strings are equal. Equality is the proposition (a_0 = b_0) ∧ (a_1 = b_1) ∧ ... ∧ (a_{n-1} = b_{n-1}), and you cannot simplify this formula so that any a_i or b_i is eliminated.
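Here is a small adversary-style sketch of that claim (my own construction; fool and the index set are invented for the demo): any tester that inspects fewer than n positions leaves some index unread, and we can hide the single differing bit there.

```python
# Adversary sketch (my own illustration): any equality tester that
# inspects fewer than n positions leaves some index untouched, and two
# strings differing only there are indistinguishable to it.
def fool(inspected, n):
    # inspected: the set of indices i where the tester compared a_i, b_i
    skipped = next(i for i in range(n) if i not in inspected)
    a = [0] * n
    b = a.copy()
    b[skipped] = 1   # differ only where the tester never looked
    return a, b      # a != b, yet every inspected pair matches

a, b = fool(inspected={0, 1, 3}, n=5)  # tester skipped indices 2 and 4
print(a, b, a == b)                    # unequal, though all checks passed
```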