https://www.youtube.com/playlist?list=PL3A50BB9C34AB36B3
Highly recommended.
Almost all of these mind-bogglingly large numbers are built around recursion, like the Ackermann function, which effectively has an argument for the number of Knuth up arrows to use. Then you can start thinking about feeding the Ackermann function into that slot and enjoy the sense of vertigo at how insane that becomes.
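For the curious, here's a minimal sketch of the two-argument Ackermann function in Python — the first argument is the one that effectively counts up arrows:

    def ackermann(m, n):
        # A(m, n) relates to up arrows via A(m, n) = 2 ^^...^ (n + 3) - 3
        # with (m - 2) Knuth up arrows, so bumping m by one adds an arrow.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(3, 3))  # 61; ackermann(4, 2) already has 19,729 digits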
I find it fascinating how quickly the machinery of specifying large numbers probes the limits of what's definable within the mathematical systems we use.
https://youtube.com/playlist?list=PLt5AfwLFPxWJ_FwchNAjB3xa8...
The tools required to probe and compare these monsters are so interesting.
I get kind of intrigued by these large number things at first but ultimately it feels like kind of a less elegant version of the same thing? It's all this mucking about with multiples and powers of multiples and powers when it's like...we've already taken this to the limit, we can just stand back and marvel at that, what are we looking for? We already know you can always add another digit, why invent more and more complicated ways to do that?
This isn't meant to be contradictory to what you're saying or anything, just interesting to explore these different perspectives on what mathematical concepts capture the imagination.
Meanwhile, infinity is for those that embrace chaos, minimalism, nothingness.
You need to look at this:
Like, there is a perfectly finite number, but it is so large that there simply isn’t enough information in the universe to encode it in any format. How cool is that to just think about for a while?
It is not obvious to me that the definition of an encodable function has bounded growth: is it true that f(1) - f(0) for encodable f always has a bound given by the amount of data used to encode f? What is that bound?
Postulate: You cannot define a largest physically describable number.
My assumption is that, due to the very nature of Kolmogorov complexity (and other Gödel-related / halting-problem-related / self-referential descriptions), this is not an answerable or sensible question.
It falls under the same language-enabled recursion problems as:
- The least number that cannot be described in less than twenty syllables.
- The least number that cannot be uniquely described by an expression of first-order set theory that contains no more than a googol (10^100) symbols.
Unless...such an energy, packed into so small a space, would put it inside its own Schwarzschild radius...
It's not impossible (afaik) for things to be smaller than the Planck length. We just don't have the ability (maybe ever) to measure something smaller than this limit.
Now, good luck finding something the size of 1 Planck length, and then accelerating it to relativistic speeds.
Apparently, under some of these models, this implies that the speed of light ends up depending on wavelength, making them amenable to empirical tests. My understanding is that these discrete-space models have failed to line up with experiment, at least within the limits of measurement.
On the other hand, general relativity implies that if you put enough matter in a small enough space it becomes a black hole, and then we can't access the information in it.
IANA physicist but I think this line of thought is probably speculative at the moment because it involves both general relativity and quantum mechanics and it isn't known how they should work together.
BB(n) surpasses TREE(n) by n = 2645 at the latest.
And likely well before that, shortly after n = 100.
Now consider the following definition of a vastly faster-growing function:
HyperTree(n) is the TREE function nested x times, where x = TREE(n).
HyperTree(3) = TREE(TREE(...TREE(3)...)), with the nesting TREE(3) levels deep.
BB(n) would (should) still outpace HyperTree(n) at some value of n.
I don't know where to even begin to give an upper or lower bound on where BB(n) overtakes HyperTree(n).
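As a sketch of that nesting (treating TREE as a given black box — actually computing any of this is hopeless, of course):

    def hyper_tree(n, tree):
        # Per the definition above: apply `tree` to itself TREE(n) times,
        # starting from n. `tree` is a hypothetical stand-in for TREE.
        result = n
        for _ in range(tree(n)):
            result = tree(result)
        return result

Since this is still a computable recipe, the point above holds: BB(n) eventually dominates it anyway.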
The fast growing hierarchy, on the other hand, provides oodles of structure for us to explore, and we can construct numbers that are vastly larger than anything BB(6) is likely to hit. In fact, this is why we use the fast growing hierarchy to approximate really big numbers all the time!
When we take something like f_Γ_0 and try to unpack even just the barest surface of its size, I get a feeling of vastness similar to those videos of diving endlessly into fractals.
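For a taste, here's a minimal sketch of the finite levels of that hierarchy (f_Γ_0 needs ordinal notations far beyond this, but the pattern starts here):

    def fgh(k, n):
        # Fast-growing hierarchy, finite indices only:
        # f_0(n) = n + 1, and f_{k+1}(n) = f_k iterated n times on n.
        if k == 0:
            return n + 1
        result = n
        for _ in range(n):
            result = fgh(k - 1, result)
        return result

    # f_1(n) = 2n, f_2(n) = n * 2^n, and f_3 already out-towers exponentials.
    print(fgh(1, 5), fgh(2, 3))  # 10 24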
"Infinity is farther away than you thought."
Does anybody know of other interesting problems in the Busy Beaver space?
This paper contains many conjectures around BB that could be interesting to some.
> Their approach was more subtle; they persuaded their host machine to initiate a program which could not be completed before the end of the universe, or which - the Mandelbrot Maze was the deadliest example - involved a literally infinite series of steps.
https://archive.org/stream/SpaceOdyssey_819/3001_The_Final_O...
Do we know if they grow faster than busy beavers?
What's more, for any N > 549, you cannot prove (in the usual universe of mathematics) that any computable number is equal to or larger than BB(N). (At least that's my understanding of the independence results (https://wiki.bbchallenge.org/wiki/Independence_from_ZFC))
The problem is that the naive algorithm above requires an oracle for the Halting Problem, which every CS student learns is impossible in general for a computational model that isn’t more powerful than a Turing Machine.
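A sketch of what that naive algorithm would look like, with every helper hypothetical — `halts` is precisely the oracle that cannot exist:

    def naive_bb(n, enumerate_machines, halts, run_to_halt):
        # All three helpers are hypothetical; `halts` would have to
        # solve the Halting Problem, which no Turing machine can.
        best = 0
        for machine in enumerate_machines(n):   # all n-state machines
            if halts(machine):                  # the impossible oracle
                best = max(best, run_to_halt(machine))  # count its steps
        return best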
So if a function can be expressed in terms of a Turing machine, busy beaver has to grow faster, because for some value of N you can encode the candidate function as an N-state Turing machine, and BB(N) is at least as large as that machine's running time.
Busy Beaver hunters on the low end (trying to pin down the next value) have to exhaustively prove, for each Turing machine with that number of states, whether it eventually halts, so that they can run the longest-running halter to completion.
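In practice that means simulating with a step budget and proving non-halting separately. A minimal simulator sketch; the machine shown is the known 2-state, 2-symbol champion, which halts after 6 steps:

    def run(machine, max_steps):
        # Simulate a one-tape Turing machine for at most max_steps steps.
        # Returns the step count if it halts, or None if the budget runs
        # out -- which by itself proves nothing about non-halting.
        tape, head, state, steps = {}, 0, 'A', 0
        while state != 'H' and steps < max_steps:
            write, move, state = machine[(state, tape.get(head, 0))]
            tape[head] = write
            head += 1 if move == 'R' else -1
            steps += 1
        return steps if state == 'H' else None

    bb2 = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
           ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H')}
    print(run(bb2, 100))  # 6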
But no. It's just an _impossibly, unfathomably large number_. Eventually there are no more trees with fewer than $i$ vertices, but only at such a stupidly, incredibly large $i$ that it makes other "googology" numbers vanish by comparison.
I consider TREE to be medium sized in the googology universe. It obliterates Graham's Number of course, as well as the Goodstein sequence and simple hydras. But it's easily bested by an ingenious matrix generalization of hydras known as the Bashicu Matrix System, even when fixed at 2 rows. 3-row BMS goes far beyond that, let alone n-row. BMS is so simple that it can be encoded in under 50 bytes [1], where TREE would take significantly more. Beyond that, there are vastly greater numbers still, such as Loader's Number. I shouldn't mention Rayo's, because that's no longer computable in the usual sense.
[1] https://codegolf.stackexchange.com/questions/18028/largest-n...
It isn't. The article neglects to explain what makes busy beaver numbers interesting in the first place. And I think that's symptomatic of the Quanta Magazine articles that feature on HN several times a week: a profound-sounding title and pleasant writing, but not much essence beyond that.
you gotta pay for that. or dig through the academic publisher archives, which you also gotta pay for unless you believe in digital piracy and evil copyright infringement, which may or may not fund terrorism
like they used to say: information wants to be expensive so pay to be free
All this talk about exponentiation, tetration and pentation has its roots in Peano arithmetic, which, for reasons of logical clarity, defines precisely one function -- the "successor function":
s(n) = n+1
Using just this function, Peano defines addition, multiplication and exponentiation with great clarity. Since Peano's time, people have been making adjustments, like including zero among the counting numbers, and by extending exponentiation into tetration, pentation and a few more exotic operations fully discussed in the linked article and elsewhere.
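As a sketch, here's that classic ladder built from nothing but the successor function, with Python recursion standing in for Peano's recursive definitions:

    def s(n):
        # Peano's single primitive: the successor function.
        return n + 1

    def add(a, b):
        # a + 0 = a;  a + s(b) = s(a + b)
        return a if b == 0 else s(add(a, b - 1))

    def mul(a, b):
        # a * 0 = 0;  a * s(b) = (a * b) + a
        return 0 if b == 0 else add(mul(a, b - 1), a)

    def power(a, b):
        # a ^ 0 = 1;  a ^ s(b) = (a ^ b) * a
        return 1 if b == 0 else mul(power(a, b - 1), a)

    print(power(2, 5))  # 32

Tetration, pentation, and so on continue the same pattern: each level iterates the one below it.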
I personally would have liked to see a more complete exposition of the logical roots of the Busy Beaver challenge, and I think the missing parts would have made the article a better read, even for non-specialists. Maybe especially for those readers.
But Quanta articles are perfectly suited to their audience, people who may be inspired to look for more depth elsewhere.
Anyway, this article seems fine to me. The "exponentiation" comment seems like a bizarre misreading. The article is just trying to explain how big BB(6) is. Before that, it explains what BB(n) is. To think it's solely about exponentiation you have to skip that entire section.
The problem with this particular article is simple: busy beavers numbers aren't interesting because they're big. They don't break mathematics because of that; you can always say "+1" to get a larger number. There's also nothing particularly notable about Knuth's up-arrow notation, which is essentially a novelty that you're never gonna use. Instead, the numbers are interesting because they have fairly mind-blowing interactions with the theory of computability.
Hey guess what, I can imagine a number even larger. It's BB(6) + 1, isn't that amazing and interesting? Wow imagine BB(6) multiplied by a googolplex, amazing. Wow, such number.
What's the point? Numbers are infinite, what else is new?
What's the point of your comment? To suck the joy out of everything?
Turing machines are a fundamental object of study within the theory of computation. The complexity and wild behavior that arises from even the simplest machines is a cool discovery. BB(6) was thought to be a lot smaller, but it turns out to be really huge. The Busy Beaver game is also interesting to those who work on combinatorics and theorem provers. And of course many many people in the space of recreational math & volunteer computing love this challenge.
You don't like it? Ok, then. You don't have to.
https://bbchallenge.org/1RB1LC_1RC1RB_1RD0LE_1LA1LD_1RZ0LA?w...
https://wiki.bbchallenge.org/wiki/5-state_busy_beaver_winner
I mean obviously there is, it’s the same difference between N and infinity. But… is there really?
This is weirdly stated. The first sentence is correct. But trivially, either a machine that always says “yes” or a machine that always says “no” will give the correct answer for any given case.
Given a particular set of axioms, there are machines which do not halt, but which cannot be proven not to halt, and maybe that’s what the article means. But that’s going beyond the halting problem.
It is indeed the Halting Problem (except that they forgot to state what the input is: the code itself).
There is no case where no method whatsoever will work. It's true that for any method, there are cases where it fails but it's not true that there exist cases for which every method fails.
There are machines which are very, very hard to determine whether they halt or not, and so people end up thinking that there must be specific machines for which no Turing machine can decide halting correctly. But that’s just not true. Every “attempted halting decider” has its own infinite set of failure cases, which are specific to that machine, and not fundamental to the input machines.
This feels really strange, and trying to turn that other intuitive sense into something meaningful and reasonably formal is an interesting exercise, but it’s tricky.
It really hinges on what is meant by "this question", i.e., Given the code of a computer program, can you tell whether it will eventually stop or run forever?
If what's given to you is the code and its input, then I think the statement's correct.
However, if the input is assumed to be either the code itself or any other fixed thing, then I agree with you.
If it is the halting problem — less ambiguously written as "given the code of a computer program and its input, will it run forever?" — then the statement is incorrect: for every possible program and input, there is some method that returns the correct result for that case.
If it is about proving whether a machine halts — not getting it right by chance, but showing that a particular machine halts or runs forever — then the statement is correct: for any set of axioms, there are Turing machines that cannot be proven, using those axioms, either to halt or to run forever.
Because you wrote "But trivially, either a machine...", I think you must think that the second sentence in your quote means "For every program p, there does not exist any method m that, when applied to p, determines whether p halts" -- which would clearly be false, since it allows every program to have its own method (and as you say, just 2 "constant-valued methods" suffice). But I interpret that second sentence in your quote as "There does not exist a single method m that, when applied to any program p, determines whether p halts", which is correct.
>Given a particular set of axioms, there are machines which do not halt, but which cannot be proven not to halt, and maybe that’s what the article means.
I also think that's what the article means, and I think it's a very nice way of putting it, though it's an interesting enough wrinkle to discuss further.
That's obviously incorrect; the program either halts or it does not. So if you had a function that always returned true and another that always returned false, one of those functions would correctly tell you whether a given program will halt — for every program that could possibly exist.
The hard part is figuring out which function to use :)
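Spelled out as a sketch:

    def always_halts(machine, tape):
        return True    # correct for exactly the machines that halt

    def never_halts(machine, tape):
        return False   # correct for exactly the machines that don't

    # For each fixed (machine, tape), one of these two answers correctly.
    # The uncomputable part is a single method that picks right every time.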
As it turns out, the MathJax library is called from the article's HTML (in line 1765), but for some reason it's not working.
There's a long-running debate over whether LaTeX rendering should be a native browser feature, so far without resolution. In the meantime, MathJax works pretty well unless someone disables JavaScript client-side.
Reasonable people may differ, but not being able to render LaTeX is egregious and wrong.