
Talk:Ackermann function

From Wikipedia, the free encyclopedia
Ackermann function is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on September 24, 2004.
Article milestones
Date | Process | Result
December 4, 2003 | Peer review | Reviewed
March 15, 2004 | Featured article candidate | Promoted
October 17, 2004 | Featured article review | Kept
October 21, 2006 | Featured article review | Demoted
Current status: Former featured article

Error? Not.


This article has a mistake somewhere in the definitions. I wrote a C program based on what is here, and it simply goes into an endless loop for m = 4 (A(4,n)). It does not terminate the way the definition says it should. --Gesslein 16:58, 4 October 2006 (UTC)[reply]

Nevermind. My computer is just really, really slow. It works now and I verified all the numbers below. --Gesslein 17:18, 4 October 2006 (UTC)[reply]

Here are a few values of the Ackermann function. Not so surprisingly, my program didn't get any further.

A(0,0) = 1 A(0,1) = 2 A(0,2) = 3 A(0,3) = 4 A(0,4) = 5 A(0,5) = 6 A(0,6) = 7 A(0,7) = 8 A(0,8) = 9 A(0,9) = 10

A(1,0) = 2 A(1,1) = 3 A(1,2) = 4 A(1,3) = 5 A(1,4) = 6 A(1,5) = 7 A(1,6) = 8 A(1,7) = 9 A(1,8) = 10 A(1,9) = 11

A(2,0) = 3 A(2,1) = 5 A(2,2) = 7 A(2,3) = 9 A(2,4) = 11 A(2,5) = 13 A(2,6) = 15 A(2,7) = 17 A(2,8) = 19 A(2,9) = 21

A(3,0) = 5 A(3,1) = 13 A(3,2) = 29 A(3,3) = 61 A(3,4) = 125 A(3,5) = 253 A(3,6) = 509 A(3,7) = 1021 A(3,8) = 2045 A(3,9) = 4093

A(4,0) = 13 A(4,1) = 65533 (Didn't use program for this one, but should be right.)

Κσυπ Cyp 17:07, 18 Oct 2003 (UTC)
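(For anyone wanting to reproduce the table above: a minimal C sketch along the lines Gesslein describes — just the recursive definition, nothing clever — prints exactly these rows. The function name A and the loop bounds are arbitrary choices here.)

 #include <stdio.h>

 /* Plain recursive definition; int is large enough for the rows shown. */
 int A(int m, int n)
 {
     if (m == 0) return n + 1;
     if (n == 0) return A(m - 1, 1);
     return A(m - 1, A(m, n - 1));
 }

 int main(void)
 {
     for (int m = 0; m <= 3; m++) {
         for (int n = 0; n <= 9; n++)
             printf("A(%d,%d) = %d  ", m, n, A(m, n));
         printf("\n");
     }
     /* A(4,0) = 13 is instant; A(4,1) = 65533 also terminates, but only
        after on the order of a billion calls -- the "really, really slow"
        case mentioned above. */
     printf("A(4,0) = %d\n", A(4, 0));
     return 0;
 }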

Hi, NIST gives a different version [1] of this function, which seems incompatible with the version given here. Are they just plain wrong? -- Pakaran 03:34, 25 Nov 2003 (UTC)

Perhaps they are equivalent? Dysprosia 03:40, 25 Nov 2003 (UTC)
I don't think so, not even if you decrease the arguments in our version. In particular, their definition gives perfect powers of two, where ours gives 3 less than the powers (e.g. wiki_ack(3,3) = 61 = 2^6 - 3, while theirs never takes on that value). Their version does snip off the uninteresting bottom few rows of the table, which may not be a bad idea, but it's not the "true" Ackermann function. It may or may not be asymptotically equal to that function with some change of arguments; I'm not sure on that point. -- Pakaran 03:58, 25 Nov 2003 (UTC)
I've seen quite a few different functions called Ackermann functions, with anything from one through three arguments; about all they have in common was that they are easy-to-define recursive functions which grow too fast to be primitive recursive. I suggest we add a paragraph on these sort of 'variants' of the Ackermann function. 4pq1injbok 23:07, 24 Sep 2004 (UTC)

Please see section 14 below.


One of the table entries looks messed up under IE. Can someone with more knowledge of the pampering of IE fix this? Thanks. Pakaran. 20:55, 27 Feb 2004 (UTC)


It may be noted that A(4, 2), which appears as a decimal expansion in several web pages, cannot possibly be computed by recursive application of the Ackermann function.
How, then? --Fibonacci 21:06, 31 Jul 2004 (UTC)

By using formulas for the A(0,n) through A(3,n), one shortcuts the recursion. If a program had to actually thread through the recursion, it would require way way too many steps.
-- Rpresser 21:07, 2004 Sep 1 (UTC)
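(A sketch of that shortcut, using the standard closed forms for rows 0 through 3; the 64-bit type is only illustrative and overflows as soon as row 3 outputs exceed 2^64:)

 /* Rows 0-3 have closed forms, so recursion only happens for m >= 4. */
 unsigned long long A(unsigned long long m, unsigned long long n)
 {
     switch (m) {
     case 0:  return n + 1;
     case 1:  return n + 2;
     case 2:  return 2 * n + 3;
     case 3:  return (1ULL << (n + 3)) - 3;      /* 2^(n+3) - 3 */
     default: return n == 0 ? A(m - 1, 1)
                            : A(m - 1, A(m, n - 1));
     }
 }

With a bignum type in place of unsigned long long, A(4,2) = A(3, 65533) = 2^65536 - 3 falls out in a single step, which is presumably how those web pages obtained the decimal expansion.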

Pseudocode to Python...


Did anybody notice that except for the absence of a few ':' characters, that pseudo code is valid Python. --83.146.34.206 06:23, 24 Sep 2004 (UTC)

Well... I wrote it and designed the language and I don't even know Python. So go figure. I'm sure there are some other differences, if only the emboldened keywords and the ≠ symbol. Derrick Coetzee 23:52, 1 Oct 2004 (UTC)

Inverse


The inverse of the function f is less than 4 for any conceivable input size, so for practical algorithm analysis, it can be regarded as a constant.

I take issue with the first half of the sentence. It's kind of imprecise. It should at least give the order of the number at which it changes to 5. Also there needs to be some mention of a practical use - the union-find data structure. Deepak 14:35, 24 Sep 2004 (UTC)
That's not something I fully understand myself - it was based on a single paragraph in one of my college textbooks. I don't think we have an article on union-find in any event. F becomes 4 at A(4,4) which has a number of digits whose own number of digits cannot be written in the physical universe, correct me if I'm wrong. It becomes 5 at a number which is far bigger than that. The point is a real computer running union-find on an input with only one element per proton in the universe will have no problem, nor if it first expands each proton to a universe the size of our own a fair number of times. This would probably require an environmental impact statement or something :). Pakaran. 16:24, 24 Sep 2004 (UTC)
Indeed. I remember what Aho-Ullman had to say: "\alpha [the inverse] eventually reaches 4, then 5 and so on in its unimaginably slow crawl to infinity." Let's at least write your description - "a number of digits whose own number of digits cannot be written in the physical universe" (to be pedantic, we should say in base 10). That should be sufficiently impressive. -- Deepak
A(4, 4) is A(3, A(4, 3)), or A(3, A(3, A(4, 2))) - which is to say 2 to the power of a number which is itself 2 to the power of a number which would require a good fraction of a day to read aloud. A(5, 5) is a horse of another color. The practicalities of storing an input the size of either in a PC's address space are left as an exercise. Feel free to add something along the lines we've discussed to the article. Pakaran. 17:50, 24 Sep 2004 (UTC)
Oh, and the number of digits in the number of digits in A(4, 4) can be written in the physical universe, since the number of bits in the number of bits can be (it'll be fairly close to A(4, 2)) Pakaran. 17:54, 24 Sep 2004 (UTC)
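(Since union-find keeps coming up: the structure in question is only a few lines of C — path compression plus union by rank — and the inverse-Ackermann bound is the amortized cost per operation of exactly this combination. A minimal sketch, with an arbitrary fixed size:)

 #define SIZE 1000
 int parent[SIZE], rnk[SIZE];

 void init(void) {
     for (int i = 0; i < SIZE; i++) { parent[i] = i; rnk[i] = 0; }
 }

 int find(int x) {                  /* path compression */
     if (parent[x] != x)
         parent[x] = find(parent[x]);
     return parent[x];
 }

 void unite(int x, int y) {         /* union by rank */
     x = find(x); y = find(y);
     if (x == y) return;
     if (rnk[x] < rnk[y]) { int t = x; x = y; y = t; }
     parent[y] = x;
     if (rnk[x] == rnk[y]) rnk[x]++;
 }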

Introduction


Just a note: in the introduction it states, "In mathematics and computer science, the Ackermann function (also known as Ackermann-Peter function) is an important example in the theory of computation.", but doesn't say what the Ackermann function is an important example of. -Seth Mahoney 22:06, Sep 24, 2004 (UTC)

I think it is supposed to be an example of "A function of two parameters whose value grows very fast" according to [2]. Kirbytime 02:17, 24 March 2006 (UTC)[reply]

Intuitive meaning


If I understood the Aaronson article (under "References") correctly, the Ackermann function extends the line of operations "addition, multiplication, exponential" to "tetration, pentation" and so forth. So you have:

A(1) = 1 + 1
A(2) = 2 * 2
A(3) = 3 ^ 3 = (3 * 3) * 3
A(4) = 4 tetrated 4 = 4 ^ (4 ^ (4 ^ 4))
A(5) = 5 pentated 5 = 5 tetrated (5 tetrated (5 tetrated (5 tetrated 5)))
...

The definition of the function in the Wikipedia article seems to be different, but the "Explicit description" section says something about "[...] iterated exponentiation, iteration of this operation and so on", so it sounds like the essence is the same in the two definitions.

I just stumbled across this article because it was featured, so I'm not a mathematician and thus don't really know what I'm talking about here! However, if the Ackermann function really has as simple an intuitive meaning as this, maybe there should be a table showing it, like the one I just wrote.
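(The connection can in fact be made exact: for the two-argument version in the article and m ≥ 3, A(m, n) = 2 ↑^(m-2) (n + 3) - 3 in Knuth arrow terms, so each successive row climbs one rung of the addition, multiplication, exponentiation, tetration, ... ladder — based at 2 rather than at n, and offset by the 3's.)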

--- The original definition by Ackermann was a three-argument function A(m,n,p), where p=1 => m+n, p=2 => m*n, etc. This one-argument function is closely related but not the same, though it is what is usually referred to as the Ackermann function.

Alternative definition


At the end of the section about Inverse we can read:

In particular, some modified functions simplify the expression by eliminating the −3 and similar terms.

Here's an example of a modified Ackermann function which simplifies the explicit formulas for each level in the hierarchy. This function is defined for positive integers m,n both starting at 1 instead of 0:
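That is (one reconstruction consistent with the identities below):

 A(1, n) = n + 2
 A(m, 1) = 2 (for m > 1)
 A(m, n) = A(m-1, A(m, n-1)) (for m > 1, n > 1)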

This function meets:

A(1,n) = 1 + 1 + ... (n 1's) ... + 1 + 2 = 2+n
A(2,n) = 2 + 2 + ... (n 2's) ... + 2 = 2*n
A(3,n) = 2 * 2 * ... (n 2's) ... * 2 = 2^n
A(4,n) = 2 ^ 2 ^ ... (n 2's) ... ^ 2 = 2 tetrated n
 (powers are evaluated right-to-left as usual and so should the rest of operators;
  I borrowed the terms "tetrated", "pentated", etc. from "Intuitive meaning" above)
A(5,n) = 2 tetrated 2 tetrated 2 ... (n 2's) ... tetrated 2 = 2 pentated n
A(6,n) = 2 pentated 2 pentated 2 ... (n 2's) ... pentated 2 = 2 hexated n
etc.

In particular, A(m,2) = 4 for all m, which implies that A(m, 3) = A(m-1, 4) for m > 1.

I think that the hierarchy is more easily understood this way.

-PGimeno 18:37, 26 Sep 2004 (UTC)

:PGimeno, do you have a reference to the modified Ackermann functions? Why do you use the same character A? dima (talk) 13:27, 3 March 2008 (UTC)[reply]
It's just Buck's function with reversed parameters, as seen in Mathworld but with the base cases simplified by me to get rid of the zeros, in an attempt to be didactic. The choice of the same letter A seems to be a bit unfortunate. I should undoubtedly have changed it. As for a reference to modified Ackermann functions, there's one already reachable from the External Links section, namely this one: [3] — pgimeno (talk) 13:16, 9 November 2008 (UTC)[reply]

m^n


The explanation of m^n as "m multiplied by itself n times" is not quite true. For example, it implies that m^1 means "m multiplied by itself once". I can't see an obvious way to fix this. Rewriting it as "1 multiplied by m n times" would be correct but maybe a bit obscure to the layman. Snottygobble 01:15, 20 Dec 2004 (UTC)

I consider this to be one of those cases where being a little wrong in a few edge cases for the purpose of simplicity is okay. But I'll try and stick in a word or two as a reminder. Deco 03:29, 5 Feb 2005 (UTC)
Why would this be incorrect? The empty product is 1, so I think the original explanation is still correct. 63.96.95.133 23:43, 15 May 2007 (UTC)[reply]
A better wording would be "the product of n copies of m". JRSpriggs 07:49, 16 May 2007 (UTC)[reply]
How about the n-fold m-multiple of 1. AJRobbins (talk) 20:20, 5 December 2007 (UTC)[reply]

Possible vandalism - please check


User 213.94.207.61 made a change months ago: here.

His other edits were subtle vandalisms - he changed dates a bit and these changes didn't get caught for a very long time. (all his edits)


Can someone check whether the change in Ackermann function above is or isn't valid? Pavel Vozenilek 12:58, 26 Mar 2005 (UTC)


It's redundant, m is always greater than 0 there. I'll remove it. Thank you for pointing it out. RJFJR 13:45, Mar 26, 2005 (UTC)


The Ackermann Recursive Function Antinomy


The Ackermann’s ‘function’ hardly counts as a function --- its array or table of values is merely a perverse pathological maze. Indeed, it blatantly demonstrates that the clearly ill-defined Ackermann’s function is an antinomy or self-contradiction paradox just like:

(1) Cantor’s diagonal argument [the sequence of diagonal terms is actually an inherent list-inclusion-and-imposition-of-order condition for the row-listing so that the anti-diagonal-terms expression is indubitably excluded from the row-list]; or

(2) Cantor’s bone-of-contention “set of all the elements of an infinite set which are not contained in their respective images for a presumed one-to-one correspondence between the elements of the given infinite set and the members of its power set [see Wikipedia article and discussion on Cantor’s Theorem]; or

(3) the ‘liar’ paradox [“This statement is false” — which is both true and false at the same time].

The following is a very simple counter-argument to the generally accepted claim that the Ackermann’s function is recursive but not primitive recursive:

(1) Clearly, the first row of the Ackermann’s function values table or array already enumerates the natural numbers — A(0,n) = n + 1 — or, recursively: A(0,0) = 1, A(0,n) = A(0,n-1) + 1.

(2) The Ackermann’s function has N x N as domain and N+ as range, where N is the set of all natural numbers and N+ is the set of all positive natural numbers. Therefore, for all natural numbers m > 0 and n ³ 0, A(m,n) = z + 1 = A(0,z) where z is some natural number (which can be readily obtained from a non-recursive definition of the Ackermann’s function — say, using the hyper operator). That is:

A(m,n) = A(0,z) = A(0,A(0,z-1)) = A(0,A(0,A(0,z-2))) = A(0,A(0,A(0,A(0,z-3)))) . . . = A(0,A(0,A(0,A(0, . . . ,A(0,0)))))

— which is the standard primitive recursive successor function on the natural numbers. In other words, every Ackermann’s function value is primitive recursively and not primitive recursively defined at the same time in the same respect — hence, its very definition indeed violates the principle of non-contradiction so it is untenable ab initio.

For example, the following is a primitive recursive definition of the Ackermann’s function:

A(m,n) = 1 if m = 0, n = 0

= A(0,n-1) + 1 if m = 0, n > 0

= A(0,hyper(2,m,n+3)-4) otherwise

where the hyper operator [see Wikipedia article on “Hyper operator”] for m > 0 is defined as:

hyper(a,1,b) = a + b

hyper(a,2,b) = a x b

hyper(a,3,b) = a ^ b = a^b

hyper(a,4,b) = a ^^ b = ᵇa [see Wikipedia article on "Tetration"]

. . .

hyper(a,m,b) = a ^...^ b (with m-2 '^'s)

(3) Therefore, to avoid the self-contradiction, one should define the first row Ackermann's function values from an already known recursive but not primitive recursive function — say, A(0,0) = a > 1 and, for n > 0, A(0,n) = f(A(0,n-1)) — for example, a = 10^100 and f(A(0,n-1)) = A(0,n-1)^A(0,n-1) — but this undercuts Ackermann's function's posturing as the simplest recursive but not primitive recursive function.

It must be noted that all the Ackermann function (indeed, a many-to-one relation) values do not follow a recursive single sequence — that is, they cannot be enumerated such that the later values are obtained only from earlier values in the sequence.

Moreover, the abstract algebraic commutative ring of integers or field of rational or real numbers has only addition and multiplication (as well as their respective inverse) operations initially defined. Exponentiation (that is, iterated multiplication) is also well-defined over these number systems, but tetration (see Wikipedia article) or non-standard iterated exponentiation (the exponentiation is done at the deepest level first — that is, a right-associative operation) is not well-defined (it violates the laws of exponents for these standard number systems — that is, its operation is not properly derived from addition and multiplication). The fast growth of the Ackermann's function values beginning with the A(4,n) row is attributable to the fact that their definition involves non-standard iterated exponentiation, which belongs to a different number system than where primitive recursive functions are defined.

BenCawaling@Yahoo.com [25 April 2005]

Hooray, a crank! Here's why you're wrong:
  • The Ackermann function is a function, if you define a function (like every other mathematician) to be a set of pairs. It has a procedure for computing it which provably always halts, which is enough to prove that the function exists and is unique.
  • It's not primitive recursive, and that's not "generally believed" - it's a proven theorem; therefore any "counterargument" you offer is fallacious before I read it. The fact that you can expand it in terms of its own values in ways other than the recursive definition is true but irrelevant.
  • Cantor's diagonal argument is also not a paradox.
Deco 08:01, 25 May 2005 (UTC)[reply]

Definition and properties - "more concise"


The text says:

Haskell yields a more concise definition:
[...]
And so does Scheme:

I'm not convinced these add anything: the Haskell example is identical to the function pseudocode, except that all the verbose keywords get swallowed up by the ease of defining functions in Haskell, and the use of multimethods to shift the burden of argument testing from the code to the compiler. The Scheme example seems a direct copy of the pseudocode except that it swaps the names of the two arguments (and again minimizes the need for keywords).

This is not a Perl golf competition - do these examples really contribute to the article at all? Hv 15:47, 13 August 2005 (UTC)[reply]

I agree. The Haskell one is elegant, but it's cryptic to people who don't know Haskell (read: 99% of readers), and they really disrupt the flow of the introduction, which really has enough junk already. I removed them. Deco 21:30, 14 August 2005 (UTC)[reply]
I'm familiar with both of the languages, but I think that the Haskell version should be left in. It's so concise and elegant. If you can understand the wikipseudocode, you would be able to understand the Haskell notation of the function. I would say it's even easier to understand than the wiki one. I think you should keep the Scheme version out, though: while it is easy to follow, it is more cryptic than the original. - Hahnchen 03:58, 22 August 2005 (UTC)[reply]
Okay, that's fine. One can at least say that the Haskell version doesn't cost much in terms of space. But let's not expand it beyond that. If necessary, the partially iterative pseudocode can be removed also. Deco 05:22, 22 August 2005 (UTC)[reply]
OK, I have restored the Haskell version and removed the partially iterative pseudocode version. —Ashley Y 23:06, 3 January 2006 (UTC)[reply]

"One surprising aspect of the Ackermann function is that the only arithmetic operations it ever uses are addition and subtraction of 1" If you define each term m+1 in terms of m and n, then the function is defined entirely in terms of addition... 137.205.8.2 10:25, 15 March 2007 (UTC)[reply]

First non-PR


The article says "...simple example of a recursive function that is not primitively recursive." It was the first such example known, wasn't it? (IIRC from class 20 years ago) Bubba73 (talk), 03:41, 8 December 2005 (UTC)[reply]

Nope. As the article explains, the little-known, independently-discovered Sudan function slightly predates it (at least if you trust my source). Other such functions were published before that but never proven to be non-primitive-recursive until later. Deco 03:45, 8 December 2005 (UTC)[reply]
There's no article on the Sudan function. RJFJR 21:34, 8 December 2005 (UTC)[reply]
This is a true fact, probably because it's so little-known. Anyone with access to the original paper by Sudan can write it though. Deco 22:56, 8 December 2005 (UTC)[reply]

Gabriel Sudan invented first this function


I read in the following web site: http://mathforum.org/epigone/math-history-list/smixsmeeblu

Subject: Ackermann vs. Sudan function: priority of discovery [was:Ackermann - or Budan or ?] Author: Bill Dubuque <wgd@martigny.ai.mit.edu> Organization: MIT Date: Fri, 12 Sep 1997 08:09:48 -0400

The following message is a courtesy copy of an article that has been posted as well.

mac@abacus.concordia.ca ( JOHN MCKAY ) writes: | | Can someone assess who originated the so-called "Ackermann fn" ? | It appears it may not be Ackermann.

Cristian Calude has written a number of papers on the history of the Ackermann and Sudan functions, e.g. see

Calude, Cristian; Marcus, Solomon; Ţevy, Ionel
The first example of a recursive function which is not primitive recursive.
Historia Math. 6 (1979), no. 4, 380--384. MR 80i:03053 03D20 01A60

Chronologically, Sudan's function is the first example of a recursive but not primitive recursive function (Bull. Math. Soc. Roumaine Sci. 30 (1927), 11 - 30; Jbuch 53, 171). Ackermann's work was published slightly later (Math. Ann. 99 (1928), 118 - 133; Jbuch 54, 56). Both were students of Hilbert, and were working on a problem posed by Hilbert, and were acquainted with each other's work. Sudan's function extends to ordinals and majorizes Ackermann's function (except at a single point). As Smorynski says in his book Logical Number Theory

independently, to two of Hilbert's students, Wilhelm Ackermann and
Gabriel Sudan. Although they gave essentially the same recursion,
Sudan worked with functions of transfinite ordinals and Ackermann
with functions of natural numbers, whence Hilbert cited Ackermann
and Ackermann's is the name associated with the resulting functions.

The paper cited above also has speculations as to why Hilbert and Bernays did not mention Sudan's construction.

According to MR 82k:03061, Sudan's function is as follows

F_0(x, y) = x + y
F_{n+1}(x, 0) = x
F_{n+1}(x, y+1) = F_n( F_{n+1}(x, y), F_{n+1}(x, y) + y + 1 )
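(That recurrence transcribes directly into C, for what it's worth — illustrative only, since the values outgrow any fixed-width type almost immediately:)

 /* Sudan's function F_n(x, y), following the recurrence above. */
 unsigned long long F(unsigned n, unsigned long long x, unsigned long long y)
 {
     if (n == 0) return x + y;
     if (y == 0) return x;
     unsigned long long t = F(n, x, y - 1);   /* F_n(x, y-1) */
     return F(n - 1, t, t + y);               /* F_{n-1}(t, t + (y-1) + 1) */
 }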

Sub expression within table


In the table, each line has an entry of the form f(2,(n+3)) - 3 in the n column. Whilst it is a "fudge" to get the right answer, would it be inappropriate to write the first line as 2^0 + (n+3) - 3, which evaluates to the present n+1? -- SGBailey 12:29, 29 January 2006 (UTC)[reply]

physical universe


"In fact, α(n) is less than 5 for any conceivable input size n, since A(4, 4) has a number of digits that cannot itself be written in binary in the physical universe.

This sentence is one of at least seven comparisons to the size, in number of elementary particles, of the physical universe. I realize the size of the universe can be used as a bound on what is computable, based on memory requirements, by any "real", physical computer, or as a bound on what is expressible in some particular formal language. But beyond that, is the size of the universe just for analogy, to express how impressive the rate of growth of the function is, or does it actually have some bearing in mathematics?
On a related note, what is meant by "conceivable" in the above sentence? I don't think I've ever heard of an entity that is definitely not conceivable, although I'd imagine there is some paradox out there formulated specifically for this purpose. Formally, is the range of α(n) actually bounded above by 5? How can that be, if f(n) = A(n,n) is defined for all natural numbers?
I would suggest changing "for any conceivable input size" to "for any input size that is scribable in decimal notation." Anyway, I enjoyed the article very much. Thanks a lot! --Rwehr 20:06, 15 April 2006 (UTC)[reply]
Please go ahead and edit the page. Kaimiddleton 21:02, 15 April 2006 (UTC)[reply]
Also, has anyone done any work on the memory requirements of Ack? It would seem that if you have a smart compiler, that does save the intermediate values, those memory requirements also get pretty large. Looks like a space-time tradeoff to me. It would be interesting to see any papers on this, if anyone is aware of them. That info should be included in the article. Sword 05:04, 15 July 2006 (UTC)[reply]

Definitions and "versions"


The Wikipedia article on Ackermann's function does not have Ackermann's function as he originally defined it. NIST's DADS website also has an article on Ackermann's function. Unfortunately, it too does not show Ackermann's original definition, but it does have two links to the "versions" of Ackermann's function under the "more info" section. The link to Cowles and Bailey's (hereafter C&B) webpage seems to be the more informative of the two (Munafo's is just the math definitions without the history). C&B's research looks like it's been directly taken from a BBS article, so you'll have to ignore the headers. C&B's webpage shows that the current definitions on the NIST website, MathWorld, and Wikipedia are all R. Péter's 1935 version of Ackermann's function.

Ackermann's original function seems to be a lot more general than the other two-variable functions presented on C&B's website. It is a three-variable function: its first parameter is a whole number (including 0) selecting the operation — the common addition, multiplication, exponentiation, tetration, and so on — and its remaining two parameters are the whole-number inputs that R. Péter's and the other two-variable functions take as their arguments. This would make it use Polish notation, if the parameters were written and used from left to right. Also, Herbert's version seems to be the most refined, and I would suggest using it to introduce the reader to Ackermann's function.
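For reference, Ackermann's original three-argument function, as usually presented in modern notation:

 φ(m, n, 0) = m + n
 φ(m, 0, 1) = 0
 φ(m, 0, 2) = 1
 φ(m, 0, p) = m (for p > 2)
 φ(m, n, p) = φ(m, φ(m, n-1, p), p-1) (for n > 0 and p > 0)

so that φ(m, n, 1) = m·n, φ(m, n, 2) = m^n, and φ(m, n, 3) = m^^(n+1) — note the off-by-one in the indexing at tetration and beyond.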


Several links in the History section take the form '#vonHeijenoort|something', taking the user to the list of external references. Is it better to link to the wikipedia article if we have one? I changed one link (Hilbert's program) before realising there were several, so thought I'd ask! --- Richard CHSTalk 12:48, 26 June 2006 (UTC)[reply]

Extending to transfinite ordinals


One can extend the Ackermann function to the transfinite (for ordinals which have a distinguished fundamental sequence) as follows:

A (0, n) = n+1.
A (α+1, 0) = A (α, 1).
A (α+1, n+1) = A (α, A (α+1, n)).

If λ is a limit ordinal with a fundamental sequence taking k to λ_k, we let:

A (λ, n) = A (λ_{n+1}, n+1).

See Large countable ordinals#Fundamental sequences for the Veblen hierarchy. JRSpriggs 09:22, 20 August 2006 (UTC)[reply]

Let me clarify that it is only the first argument of the Ackermann function which is being extended to infinite ordinals. The second argument and the value are still limited to natural numbers. However, I think (but have not yet proved) that by using this extension to ordinals the Ackermann function can surpass the Conway chained arrow notation as a means of naming large natural numbers. JRSpriggs 04:46, 21 August 2006 (UTC)[reply]
Specifically, A (ω^ω+1, n) should rise faster as a function of n than any sequence of Conway chained arrow notations which has a relatively simple definition, such as n→n→...→n with n copies of n. JRSpriggs 04:55, 21 August 2006 (UTC)[reply]

Please do not compare large numbers to the physical world


People who do not have something useful to say about this article seem obsessed to insert some comparison between large numbers and the "number of atoms in the universe" with perhaps some silly function applied. Please stop. It might be entertaining, but it is not appropriate encyclopedic information. --Farever 23:08, 2 February 2007 (UTC)[reply]

Er, I think this is actually a useful way of demonstrating that the values taken on by the function are very large, and more importantly, that computations taking resources proportional to these numbers become rapidly impractical. I don't think it's silly. Dcoetzee 22:24, 20 July 2007 (UTC)[reply]
The comparisons to the physical world are commonplace in popular science and mathematics contexts, and as such should be reflected on Wikipedia. Wikipedia is not a math(s) textbook. Some examples of the use of physical quantities when talking about related functions. Note: though YouTube links would be instructive in this case, Wikipedia seems to have developed a stick up its ass about YouTube for some reason and won't let me post them... I'm just including textual references to the examples that reference videos, so you can search for them yourself :-(
  • (search for XTeJ64KD5cg on YouTube) numberphile's Graham's Number video compares storage of the digits in G to information stored in the brain by pointing out that a brain capable of storing that many numbers would be a black hole.
  • (search for luJpiJEOH1c on YouTube and scan forward to 1m43s) PoETheeds, also on Graham's Number, talks about the number of Planck lengths in the observable universe.
  • Computational Topology: An Introduction compares the Ackermann Function to the number of electrons in the universe.
  • Other books that use similar comparisons:
    • Instructor's manual to accompany Introduction to algorithms by Julie Sussman, 1991, p. 140
    • SIAM Journal on Matrix Analysis and Applications, "Row and Column Counts for Sparse Cholesky...", Volume 15, Issues 3-4, 1994, p. 1077
    • Algorithms from P to NP: Design & efficiency by Bernard M. E. Moret, Henry D. Shapiro, 1991, p. 162
As you can see, this is not a rare phenomenon, and one that Wikipedia should not shy away from because it offends the pure math(s) crowd. -Miskaton (talk) 18:31, 8 July 2013 (UTC)[reply]


I agree with the other commenters that this sort of comparison is reasonable to include (if sourced). But there is one point we need to be careful of: A lot of times this sort of thing will be (sloppily) presented as a comparison with "the universe". What they really mean is the observable universe. There is no known upper bound on the size of the whole universe; it may indeed be infinite, and we shouldn't be making statements that (even inadvertently) suggest that the question is decided. --Trovatore (talk) 18:44, 8 July 2013 (UTC)[reply]

A(5, 1) in the table


I note that Susan Stepney's table chooses to always list A(5,1) as A(4,65533), thus the term appears four times in her table. Our current table instead uses A(5,1) three times, which I think is distracting because it breaks the pattern that is otherwise clear in row 6. Another possibility is using A(5, A(6, 0)) as the result for A(6, 1). Do you guys have a preference?--199.33.32.40 01:35, 3 February 2007 (UTC)[reply]

I personally think it's stupid if not ridiculous to give examples (i.e. tables) of a function that was designed to write things that can't be written. In other words, the only way of writing the values of the Ackermann function is to use the Ackermann function itself, or functions like it. Any attempt to do otherwise will lead to huge tables (which distorts the entire Web page, which I don't like). I think the highest value that should be written is after which we should default to or simply (too big). AJRobbins (talk) 18:49, 15 April 2009 (UTC)[reply]

GA?


This is feeling like it might be a GA once again. I will poll some of the Math WikiProject guys and get some feedback. Then maybe some PR and on to GAN!--70.231.149.0 02:15, 4 February 2007 (UTC)[reply]

Corrected A(5,n) and A(6,n)


I hope it's correct now. It was false before.

Inverse Ackermann function


I added two external links on the inverse Ackermann function (one to a page of mine, one to a PDF presentation by R. Seidel). The inverse Ackermann function should get its own wikipedia page. Gabn1 09:32, 22 February 2007 (UTC)[reply]

BTW it is not strictly an inverse, because a true inverse function would be a partial function (almost every natural number is not a value of the Ackermann function). It is an undocumented generalization of the inverse for monotonic functions. Incnis Mrsi (talk) 08:36, 31 August 2011 (UTC)[reply]
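(For definiteness: the variant usually meant in union-find analyses is α(n) = min { k : A(k, k) ≥ n }, which is total and non-decreasing even though A itself has no true inverse.)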

necessary comments?


"(this presentation is due to Rózsa Péter):"

Is this really important? No offense to anyone, but I think Rózsa Péter should be credited only as a scientist, not a Wikipedia editor (maybe on her own page, but this one seems a bit excessive...though not quite vanity, but perhaps self-promotion...) —The preceding unsigned comment was added by 128.32.42.26 (talk) 23:10, 27 February 2007 (UTC).[reply]

The reference is not to a WikiPedia user, but to the mathematician that invented that form of the function - hence the alternative name "Ackermann-Péter function" in the first paragraph. I'm sure the wording could be improved to make that clearer, but I'm not sure exactly how. Hv 02:21, 1 March 2007 (UTC)[reply]

Termination of function


The explanation of why this function always terminates is confusing in my opinion. At least, it does not succinctly convince the reader that the function terminates. It states:

"The recursion is bounded because in each recursive application either m decreases, or m remains the same and n decreases. Each time that n reaches zero, m decreases, so m eventually reaches zero as well. (Expressed more technically, in each case the pair (m, n) decreases in the lexicographic order, which preserves the well-ordering of the non-negative integers.) However, when m decreases there is no upper bound on how much n can increase — and it will often increase greatly."

I don't think that the "more technical" explanation needs to be there as it interrupts the explanation. Furthermore, the "m decreases, so m eventually reaches zero" part is followed by the part stating that if m decreases there is no upper bound on how much n will increase, so it wouldn't follow that m eventually reaches zero without further inspection. I can't offer a better explanation, but I think it could be done. —The preceding unsigned comment was added by Sam Brightman (talkcontribs) 15:58, 2 April 2007 (UTC).[reply]
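One way to make the descent concrete is a short trace; every recursive call's argument pair sits lexicographically below its caller's, even while n grows:

 A(1, 2) = A(0, A(1, 1))         (1,2) > (1,1)
         = A(0, A(0, A(1, 0)))   (1,1) > (1,0)
         = A(0, A(0, A(0, 1)))   (1,0) > (0,1)
         = A(0, A(0, 2))
         = A(0, 3)
         = 4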

Maybe it is worth noting that the Coq proof assistant can prove termination of the Ackermann function by structural induction: see https://www.lri.fr/~paulin/LASER/course-notes.pdf pp. 19-20. I was surprised to see that, since I thought structural induction can prove termination only of primitive recursive functions. Jochen Burghardt (talk) 22:15, 3 June 2013 (UTC)[reply]
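(The same thing goes through in Lean 4 with no manual termination proof — the equation compiler finds the lexicographic measure on (m, n) automatically. A sketch:)

 def ack : Nat → Nat → Nat
   | 0,     n     => n + 1
   | m + 1, 0     => ack m 1
   | m + 1, n + 1 => ack m (ack (m + 1) n)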

Importance (or priority) of Ackermann function


I disagree with the down-grading of the Ackermann function from "mid" importance to "low" importance. It is quite important in understanding the distinction between primitive recursive functions and total recursive functions generally. See Primitive recursive function#Relationship to recursive functions, especially the last sentence which says "... a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function.". JRSpriggs 10:13, 10 June 2007 (UTC)[reply]

It was definitely a borderline decision, and can of course be changed: there is always a bit of subjectivity in this kind of thing. I did appreciate the fact that the Ackermann function is important in mathematical logic as an example of a recursive function which is not primitive recursive, although I wasn't aware of the more subtle point which you quote. It would be nice to mention this fact in this article, which at the moment doesn't seem to bring out the mathematical logic aspects as well as it might. Note though, that "Low-priority" is not intended to be at all dismissive: for comparison, Area of a disk is also Low-priority, and this appears to be accepted by editors. See Wikipedia:WikiProject Mathematics/Wikipedia 1.0/Assessment and Wikipedia:WikiProject Mathematics/Wikipedia 1.0/Importance for more information. Geometry guy 10:46, 10 June 2007 (UTC)[reply]

relation to primitive recursive functions


We need some citations and more information on the relationship between Ackermann's function and primitive recursive functions. David.Monniaux 03:55, 17 June 2007 (UTC)[reply]

4th Ackermann number


I fixed the entry for the fourth Ackermann number some time ago, and it got reverted by someone (probably carelessly). I'm placing the incorrect text here:

An attempt to express the fourth Ackermann number, 4^^^^4, using iterated exponentiation is not practical, but it can be expressed using tetration in three nested layers as shown below. Explanation: in the middle layer, there is a tower of tetration whose full length is (already a large number) and the final result is the top layer of tetrated 4's whose length equals the calculation of the middle layer.

The following contains the correct expressions: notice that it actually uses tetration as claimed:

An attempt to express the fourth Ackermann number, 44, using iterated exponentiation as above would become extremely complicated. However, it can be expressed using tetration in three nested layers as shown below. Explanation: in the middle layer, there is a tower of tetration whose full length is and the final result is the top layer of tetrated 4's whose full length equals the calculation of the middle layer. Note that by way of size comparison, the simple expression already exceeds a googolplex, so the fourth Ackermann number is quite large.

Sorry I can't remember my password to log in and sign this. I'll sign it later. Ok: signed by my logged-in ID now: Kaimiddleton 21:37, 20 July 2007 (UTC)[reply]

Metaprogramming in C++ example bad


Hello. I just removed a C++ metaprogramming example. It was designed in vain - the templates are highly ambiguous. Here is a transcript of the contents:

 template<int x, int y> struct ackermann       { enum { v = ackermann<x-1, ackermann<x, y-1>::v>::v } ; } ;
 template<       int y> struct ackermann<0, y> { enum { v = y+1                                     } ; } ;
 template<int x       > struct ackermann<x, 0> { enum { v = ackermann<x-1, 1>::v                    } ; } ;
 template<            > struct ackermann<0, 0> { enum { v = 1                                       } ; } ;

The second and third are equivalent. In a compiler that ignores the name, you'd wind up with a conflict. It doesn't work with GCC (at least in my test) - PGSONIC 00:59, 17 August 2007 (UTC)[reply]

Remove most of the implementations?


I don't see a reason to show the function implemented in a dozen computer languages. It doesn't do anything to add to the understanding of the function and is only of cursory relevance to the article. In particular "Non-Recursive Ackermann in C++" seems more like a geek-badge than encyclopedic content.

There is no point in showing a recursive function can be implemented with a stack, as that is how recursion is implemented in computer languages anyway.

Interestingly, although "The first implementation in a programming language was in Fortran in 1964, see H. Gordon Rice[13], Recursion and iteration, Commun. ACM, 8(2), 1965, pp. 114--115.", no Fortran implementation is even given (let alone the first). If any computer implementation of the function was relevant to the article it would be that one, and it isn't even provided!

My recommendation is to remove the section or to have only the C definition. 69.121.109.152 19:58, 24 August 2007 (UTC)[reply]

I agree that having several implementations doesn't add much content, and it may also encourage people to add yet more implementations. IMHO a good choice would be to quote Rice's original Fortran implementation or a pseudocode version and remove all other implementations. I don't know if there's a rule that says articles should not be contests?? Ismaelbej 17:57, 18 September 2007 (UTC)
Agreed. The various code examples clutter up the page, and are largely irrelevant anyway -- the whole point is that this function _can_ be computed but it isn't feasible to do so except for a tiny subset of inputs. --Ott2 18:32, 21 September 2007 (UTC)[reply]
I don't see the point of all those implementations. With the possible exception of the Haskell one, all they do is restate the recursive definition at the top of the page. Exactly how to state recursive definitions in various programming languages might be interesting information about those languages, but it isn't interesting information about the Ackermann function. I am about to remove the section. I wouldn't be averse to someone adding the original Fortran code, if anyone has it. Algebraist 22:36, 2 April 2009 (UTC)[reply]

Graham's number


Hello - I just came by this article and noticed a statement that I don't believe is correct (underlined):

Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. This number is constructed with a technique similar to applying the Ackermann function to itself recursively.

To my knowledge, iteratively repeated power towers as used in the definition of Graham's number are expressible with Ackermann functions with 5 as the first argument: first argument 3 is in the order of powers, 4 is in the order of power towers, and 5 is in the order of repeated power towers. So I would expect Graham's number to be something like A(5, n) with n something larger than 64 (but not much larger; the Ackermann function is in the order of powers of 2, whereas Graham's number is defined through powers of 3). Therefore, Graham's number could not be considered an "even larger number" as stated in the article, but upper bounds can be expressed through the Ackermann function with reasonably small arguments. Comments? Thanks, Jens Koeplinger 13:28, 27 August 2007 (UTC)[reply]

A(5, n) can be expressed with 3 upward arrows, while Graham's number needs very many.--Patrick 14:08, 27 August 2007 (UTC)[reply]
Since Ackermann functions are to base 2 and Graham's Number is base 3, it is very difficult to express one form in the other, obviously. However, nested Ackermann operations are roughly equal to the usual Graham definition. For example, A(A(A(...A(1,1)...))) with 68 A's is larger than Graham's Number. Mytg8 15:01, 27 August 2007 (UTC)[reply]
Got it, the first argument of the Ackermann function is nested, so we can't write it down with two reasonably small arguments. Thanks both, Jens Koeplinger 16:56, 27 August 2007 (UTC)[reply]
My bad - I should have used the Ackermann numbers, i.e., A(k)=A(k,k). So A(A(A(...A(1))...)) with 68 A's is the correct form. And to think Friedman's n(4) uses A(187196,187196) A's! Urr, A(187196) A's. One or the other. :) A tad bigger than Graham's, y'think? Mytg8 17:40, 27 August 2007 (UTC)[reply]

Obscureness


It was already said that there are too many implementations. Thus I deleted those implementations that I considered obscure. So far so good. The Matlab implementation was promptly re-added. But: Matlab is just some CAS, right? I don't see its outstanding importance, much less that the Ackermann function needs to be explained in terms of this "language". (I would also be happy to see even fewer implementations but couldn't decide on a language.) --53.122.197.194 (talk) 13:57, 26 November 2007 (UTC)[reply]

C Program


Correct me if I'm wrong, but couldn't that program be

 int ackermann(int m, int n)
 {
   if (m == 0)
     return n + 1;
   if (n == 0)
     return ackermann(m - 1, 1);
   return ackermann(m - 1, ackermann(m, n - 1));
 }

?

Considering that it's 'written' in C and that after a return statement, it wouldn't evaluate the rest? At the very least, I would remove the "m>0" parts, because presumably if m is not zero, it is greater than zero. Sorry, can't log in account right now. Gremlinman. —Preceding unsigned comment added by 24.218.46.235 (talk) 00:05, 5 December 2007 (UTC)[reply]

unsigned int limits the input range to zero or positive integers, which is logical. But this C program is a joke nonetheless, totally useless even for the VAST majority of "conceivable" numbers. It seems to me like the example program is there to make the function "conceivable" also for programmers, which is a good thing. But pseudocode would be more accurate, as the limits of the language would not be so obvious. Hmm. There exist bignum libraries for C, which can tackle numbers of arbitrary size, thus letting the program be limited only by outer factors (including memory, time, stack size and so on), but I realize that a program using such libraries would not be as short and simple as this one, and could possibly even waste more people's time because they are tempted to test a program which looks more serious... —Preceding unsigned comment added by 129.240.72.42 (talk) 15:28, 11 April 2008 (UTC)[reply]
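(For the curious, a GMP-based sketch — assuming libgmp, linked with -lgmp — looks like this; as said above, it is still bounded by memory, time and stack depth:)

 #include <gmp.h>

 /* result := A(m, n), straight from the recursive definition. */
 void ackermann(mpz_t result, const mpz_t m, const mpz_t n)
 {
     if (mpz_cmp_ui(m, 0) == 0) {              /* A(0, n) = n + 1 */
         mpz_add_ui(result, n, 1);
     } else if (mpz_cmp_ui(n, 0) == 0) {       /* A(m, 0) = A(m-1, 1) */
         mpz_t m1, one;
         mpz_init(m1);
         mpz_init_set_ui(one, 1);
         mpz_sub_ui(m1, m, 1);
         ackermann(result, m1, one);
         mpz_clears(m1, one, NULL);
     } else {                                  /* A(m-1, A(m, n-1)) */
         mpz_t m1, n1, inner;
         mpz_inits(m1, n1, inner, NULL);
         mpz_sub_ui(m1, m, 1);
         mpz_sub_ui(n1, n, 1);
         ackermann(inner, m, n1);
         ackermann(result, m1, inner);
         mpz_clears(m1, n1, inner, NULL);
     }
 }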

Michie memoization reference


There is a citation needed note on the claim that Donald Michie invented memoization. There are citations on both his page Donald Michie and the Memoization pages. I'm not sure whether to copy that reference here, or if it would be better to remove the claim here since there isn't anything Ackermann-specific about it, and if someone wants to go look at the memoization page they'll see the claim right at the top (and the citation).

Or could the call for citation be about the claim that compilers can do memoization automatically? Certainly they can (Haskell for example [4], among plenty of others). But again, it seems to me that such details belong on the Memoization page.

--JamesPaulWhite (talk) 06:31, 13 April 2008 (UTC)[reply]
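(On the substance rather than the citation: memoizing the function by hand is short enough to sketch. The table bounds MMAX/NMAX below are arbitrary illustrative choices; since A(m,n) ≥ 1 always, zero can mark "not yet computed". Fixed-width long still overflows for m = 3 once n passes about 60, so this is illustrative only:)

 #define MMAX 3
 #define NMAX 65536
 static long memo[MMAX + 1][NMAX + 1];   /* zero-initialized */

 long A(int m, long n)
 {
     long r;
     if (m <= MMAX && n <= NMAX && memo[m][n])
         return memo[m][n];
     if (m == 0)
         r = n + 1;
     else if (n == 0)
         r = A(m - 1, 1);
     else
         r = A(m - 1, A(m, n - 1));
     if (m <= MMAX && n <= NMAX)
         memo[m][n] = r;
     return r;
 }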

"Hyper operator" (sic) should be merged here


The "Hyper() function" appears to be someone's private terminology for one of the more common variants of the Ackermann function, and the family of so-called "hyper operators" is just the related hierarchy of fast-growing functions. Here's a relevant quote from that article, which should explain why it ought to be merged with Ackermann function:

This family of operators was described by Reuben Goodstein [1947], using the notation G(k,a,n) for what would here be written as hyper(a,k,n) or a(k)n, etc., which is a variant of the Ackermann function. In the 1947 paper, Goodstein introduced the names "tetration", "pentation", "hexation", etc., for the successive operators beyond exponentiation. (More recent publications refer to this entire family and its variants simply as the "Ackermann hierarchy".)
The Ackermann hierarchy is clearly distinct from the hyperoperators, because in his original paper in 1928, Ackermann defines the operators which are equivalent to the Knuth arrow only for n less than 3. If we consider we find that this is not tetration as Goodstein defines it, and that is not expressible using Goodstein's pentation in any way shape or form. In other words, the Ackermann hierarchy and the Goodstein hierarchy (if calling things by authors is what you care about) are completely distinct families of binary operations, even though they are both defined by , the initial conditions make them different. AJRobbins (talk) 16:12, 16 April 2009 (UTC)[reply]

There is a lot of good information in the Hyper operator article, but it seems to be in the wrong place, using someone's private terminology. (As far as I know, there is no published source for that terminology in this context.) I happen to like variants of the operator notation used there, i.e, a(k)n, but I expect that just about anyone who has worked very much with this hierarchy of functions has probably invented something like it for themselves; but, again, I haven't seen it used in publications (presumably because it would seem too trivial to mention).

If there are no "nice" sources for the hyper terminology and someone thinks it should be merged, then I would be more inclined to merge it with Knuth's up-arrow notation because there are fewer shifts involved. It is impossible to represent hyper operators with the Ackermann function, so I would consider this page a poor choice for merging all of hyper operator into. Also, in terms of sources, I believe "hyper" was popularized by Robert Munafo, at least that is the earliest reference I can find. AJRobbins (talk) 02:24, 6 April 2009 (UTC)[reply]

An important point is that each function (operator) in the Ackermann hierarchy, i.e. each function obtained from the Ackermann function by fixing the argument that determines the "operator level" (i.e. 0 <-> successor, 1 <-> addition, 2 <-> multiplication, etc.), *is* primitive recursive. It's the function of all three arguments (or two in the case of the two-argument version in the present article) that's recursive but not primitive recursive.
--r.e.s. (talk) 19:24, 26 September 2008 (UTC)[reply]

  • It does seem to bear a remarkable similarity, but I don't know, I don't seem quite sold on the fact that they are even similar. (Yes, I know it says it in the lead section, but I want more concrete evidence.) I'd say leave them as is for now, at least until we can fully establish what the case with Hyper operator is. --JB Adder | Talk 07:59, 7 December 2008 (UTC)[reply]
It's more than just a similarity. In his 1947 paper, Goodstein presented the (first?) three-argument version of Ackermann's function for which the "indexed functions" at and beyond exponentiation are exactly the functions denoted (much later) by Knuth arrows. That is, G(0,b,n) = n+1, G(1,b,n) = b+n, G(2,b,n) = b*n, G(3,b,n) = b^n, G(4,b,n) = b^^n, etc. Note that it's from Goodstein's indexing convention that tetration derives its name (tetra referring to the index value of 4), and so on for pentation, etc. According to this SpringerLink article, Ackermann's original three-argument version did not have this property, because it indexed the functions beyond exponentiation in such a way that one of the arguments counted the number of operators rather than the number of operands — e.g., the function corresponding to tetration, as used by Ackermann, can be written as b^^(n+1), rather than b^^n. A modern presentation of Goodstein's version of the Ackermann function is in this pdf article by Zucker & Pretorius (pp 26-27), which, by the way, also uses the "hyper-" terminology in that it refers to tetration as the hyperexponential function.
Although I don't have the time to make the changes just now, I think the present article would benefit greatly by including the Goodstein version. There's also a good case to be made for including one of the "streamlined" versions of the Ackermann function, especially the one appearing in Long Finite Sequences by Harvey Friedman: A_1(n) = 2n, A_{k+1}(n) = A_k A_k ... A_k(1), where there are n A_k's in the iteration. Except for a lag in the indexing, Friedman's version coincides with Goodstein's using base b=2; that is, for k = 1,2,3,4,..., A_k(n) = 2n, 2^n, 2^^n, 2^^^n, ...
--r.e.s. (talk) 21:42, 27 December 2008 (UTC)[reply]

Importance of Ackermann function? Yes!


Ackermann function is not "just" an example of a recursive function that is not primitive recursive. It plays a fundamental role in computability theory because it is the proper complexity measure of several algorithms, and the complexity measure of several decidable problems. For example, McAloon shows that controlled sequences of incomparable n-dimensional vectors of natural numbers have length bounded by a version of A(n,x) (see McAloon, TCS 1984). This provides an Ackermann upper bound for many algorithms that rely on Dickson's lemma for their termination. And there exist problems with this Ackermann upper bound for which a matching lower bound can be established, an example being the equivalence of bounded Petri nets (see Mayr and Meyer, JACM 1981). PhS (talk) 20:25, 26 September 2008 (UTC)[reply]

Some algorithmic results are already mentioned, but these are great examples to add. :-) It might help to revise the lead to emphasize that it has applications in analysis of algorithms. Dcoetzee 23:43, 26 September 2008 (UTC)[reply]

lede section


I moved the definition back to the lower section. It might (I'm not quite sold) be worth adding it to the lede. But it would need to go lower in the lede, not at the top. The goal of the first part of the article is to establish context and give a general description that essentially anybody can understand. So what is presently the first paragraph really should be the first paragraph. — Carl (CBM · talk) 01:17, 29 October 2008 (UTC)[reply]


Having read the article linked to, as well as all the blog comments xkcd made under that article, I see nowhere that he claims his "xkcd number" is the largest ever concisely defined. The article says that Adam Clarkson considered the Clarkkkkson number to be the largest ever concisely defined. Then he says his intuition is that the xkcd number is probably larger, but he is not sure, and asks if any of his readers can demonstrate which is larger. Therefore I am amending the sentence. Incantar (talk) 07:14, 11 March 2009 (UTC)[reply]

Missing definition?


I cannot find the definition by Rózsa Péter and Raphael Robinson that is referenced in the History section. 150.252.46.189 (talk) 22:51, 26 March 2009 (UTC)[reply]

The Péter–Robinson function is the one that is defined in the 'Definition and properties' section and is generally called the Ackermann function. Algebraist 23:15, 26 March 2009 (UTC)[reply]

primitive recursive vs. primitive recursion


The article states that 'one should not confuse the primitive recursive functions with those definable by primitive recursion'. As far as I understand things, the primitive recursive functions are exactly those definable by primitive recursion (i.e. without general recursion), so, unless people have valid objections, I will delete the sentence. --Acipsen (talk) 18:18, 28 December 2009 (UTC)[reply]

Ackermann numbers


Why is this section in Ackermann function? No mention is made of the function at all. It would be better moved in toto to the Knuth's up arrow page, or to convert 1^1, 2^^2, 3^^^3, etc. into Ackermann function calls somehow. -- SGBailey (talk) 21:17, 9 July 2010 (UTC)[reply]

Recursive vs. computable


"Recursive" is the same as "total computable", but not the same as "computable". It is not surprising that the set of primitive recursive functions is not the same as the set of computable functions, since all p.r. functions are total and there exists at least one computable function which is not total. The interesting thing about the Ackermann function is that it shows that not every total computable function is primitive recursive, so this should be explicitely mentioned. – Adrianwn (talk) 05:16, 13 July 2010 (UTC)[reply]

I am not familiar with this distinction. As far as I'm aware, computable is just the new name for recursive. For either term, whether you mean "partial" or "total" by default is a matter of context. I think, since we're contrasting with primitive recursive, it's probably better to say a total recursive function that is not primitive recursive. --Trovatore (talk) 07:36, 13 July 2010 (UTC)[reply]
Think about it this way: a function f is "computable" iff a Turing machine M exists which produces the exact same result as f for all inputs for which f is defined, and does not terminate for all inputs for which f is not defined (ignoring the issue of translation for simplicity). For example, the function f(i,x) = 1 if U(i,x) terminates, else undefined (where U(i,x) is the function which evaluates function #i with argument x) is computable, but not total.
I have never heard of "computable" being synonymous with "(total) recursive", and I have never heard of "recursive" not implying "total" in the context of computability; however, I am not an expert on this subject. Maybe it is better to replace "recursive" with "total recursive" for the sake of clarity, although this might lead to confusion when "recursive" is used without qualification.
On a side note: who came up with those terms? "Recursive, mu-recursive, recursively enumerable, …", some of which are equivalent, some of which are not. It's like someone wanted to confuse beginners in theoretical computer science. – Adrianwn (talk) 09:01, 13 July 2010 (UTC)[reply]
As a computability theorist, I can say that "computable function" and "recursive function" are complete synonyms. There is no difference in meaning between them; it became common at some point to simply substitute the word "computable" for the word "recursive". Most contemporary books define computable functions (that is, recursive functions) to be total, and explicitly say "partial computable function" if that is what they mean. — Carl (CBM · talk) 10:50, 13 July 2010 (UTC)[reply]
Can you give some citations, please? A quick search with Google Books turned this up: Nikolai Konstantinovich Vereshchagin, Alexander Shen: "Computable functions", AMS Bookstore, 2003, page 1 (link). This is a contemporary book on the subject and it doesn't seem to suggest that "computable function" implies "total function". Here is another one, unfortunately it doesn't have a preview, only a summary: link.
So even if some computer scientists use "computable" synonymous with "recursive", this is not universally accepted, and Wikipedia should not suggest that it is by using those terms synonymously. – Adrianwn (talk) 12:36, 13 July 2010 (UTC)[reply]
I promise I will look up some sources when I get back to the office Friday; I am traveling and don't have my books at hand.
There are some authors who explicitly say "total" and some who explicitly say "partial", but it is completely unheard of in modern computability theory to use "recursive" for one of these and "computable" for the other. The way to tell which convention an author follows is to look up the definition of a many-one reduction, because the function there is always intended to be total.
After a campaign by Soare, almost everyone in the field has begun using "computable" everywhere they used to use "recursive". It helps that Soare's textbook is the standard reference for computability theory these days. — Carl (CBM · talk) 14:03, 13 July 2010 (UTC)[reply]

[unindent] I have looked up "Recursively enumerable sets and degrees: a study of computable functions and computably generated sets" by R.I. Soare (Springer, 1987) link, in which he states:

We shall accept Church's thesis and from now on shall use the terms "partial recursive (p.r.)," "Turing computable," and "computable" interchangeably.

On the other hand, he equates the total functions with recursive resp. computable:

An algorithmic partial function which is defined on all arguments (i.e., which is total), is called "recursive" or "computable".

Am I missing something?

Please note that in the end I don't care what term is used in the article, however,

  • it should be used uniformly
  • it should be completely unambiguous (e.g. "total recursive" and "partial recursive" in case "recursive" has different meanings depending on the author)

Adrianwn (talk) 14:38, 13 July 2010 (UTC)[reply]

Soare's paper "Computability and Recursion" is readily available online, e.g. here. It discusses many of these terminological ambiguities (and their historical origins) in great detail.
PS: I agree with the recommendations you just made. My own preference would be something like this (note the additional links):
"In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total recursive function that is not primitive recursive. The set of primitive recursive functions is a subset of the set of total recursive functions, and Ackermann's function illustrates that the former is a strict subset of the latter."
r.e.s. (talk) 15:02, 13 July 2010 (UTC)[reply]
I like this suggestion, it is simple and unambiguous; however, I would change "total recursive function" to "total µ-recursive function" because this way no implicit suggestions regarding the totality of "recursive" are made, but I won't insist on that. By the way, thank you for the link, I will check it out. – Adrianwn (talk) 19:31, 13 July 2010 (UTC)[reply]
There's no reason to emphasize μ-recursion over any other model of computability (e.g. Turing computability, register machine computability, etc.). So unless there is a specific reason to focus on the μ operator, it's better to just say "recursive" or "computable", which are synonyms. I have no objection to explicitly saying "total", just to trying to use "computable" to mean something different than "recursive", which would be at odds with normal usage. — Carl (CBM · talk) 15:01, 14 July 2010 (UTC)[reply]
Also note that, in any case, we should link to computable function rather than recursive function. The latter is a disambiguation page; the former is our article about computable functions in general. The article on μ-recursive functions is only a good target if that explicit model of computation is important. Usually, links are just about the general topic of computable functions, and so they should point at computable function. — Carl (CBM · talk) 15:03, 14 July 2010 (UTC)[reply]
I agree that we should link to computable function rather than recursive function, although I don't see a link in the article to recursive function, only to mu-recursive function.
I'm fine with using "computable" instead of "µ-recursive" where the emphasis isn't needed. However, I'm not convinced that "recursive" is a synonym for "computable", especially since Soare explicitly recommends against using the former as a synonym for the latter in his paper linked above (section 7, recommendation 1). I suggest that we never use the term "recursive" (on its own) to refer to any set of functions. Outside of computability theory, "recursive" always means that a function calls itself directly or indirectly, and this rule should be followed here as well, for the sake of clarity (especially since not all readers of the article are theoretical computer scientists). – Adrianwn (talk) 16:35, 14 July 2010 (UTC)[reply]
Well, you know Soare arrives a bit late in the day. The term recursive has been used for the concept since before there was a discipline called "theoretical computer science", and to me it's still mathematical logic and not computer science, though computer scientists are welcome to study it. I still think total recursive but not primitive recursive is the best formulation here. --Trovatore (talk) 17:29, 14 July 2010 (UTC)[reply]
Of course I'm a logician as well. Actually I grew up using "recursive", and I still always write "r.e." instead of "c.e.". But there are two reasons I generally use "computable" on wikipedia. The first is that this really is the majority usage in computability theory in mathematical logic, in my experience with computability theorists.
The second is that at least this part of Soare's argument has a point: When you say "computable", people generally know what you mean, but when you say "recursive" they generally don't, unless they already know that "recursive" is meant as a synonym for "computable". As Adrianwn says, and as Soare also argues, many readers associate "recursive" with things like the Fibonacci numbers, rather than with abstract computability.
This is a pragmatic argument, and it would have less validity in research literature where we expect everyone to know what's going on. That's why I feel fine using "r.e." in real life. On Wikipedia, I think that there is something to be said for the easier terminology. But I have not made any attempt to standardize this across all articles. — Carl (CBM · talk) 21:16, 14 July 2010 (UTC)[reply]
That is an important point, I agree. The problem in this case is that (as far as I'm aware) no one speaks of "primitive computable" functions. I like the repetition of the word "recursive" in total recursive but not primitive recursive because I think it suggests better what's going on. --Trovatore (talk) 21:24, 14 July 2010 (UTC)[reply]
Actually, well-known people do use "primitive computable" (check Google books or scholar). The switch is very complete; they also change "DNR" to "DNC", etc. But I have not started using that phrase myself. — Carl (CBM · talk) 21:45, 14 July 2010 (UTC)[reply]

Since there are no objections to the terminology as used in the article now, should we use it in other articles as well? To summarize:

  • "computable" and "total computable", if no specific model of computation is refered to
  • no use of "recursive", not even "total recursive" nor "partial recursive"
  • "primitive recursive", since this is (nearly) universally used

In my opinion, we should. – Adrianwn (talk) 07:15, 18 July 2010 (UTC)[reply]

No, I don't buy that. First of all I still prefer total recursive but not primitive recursive for this article. And while we can recognize the fact that many workers now use computable as a synonym for recursive, we can't erase the latter term, and I don't think we should be forbidding its use.
By the way there is a problem in the lead section of computable function where it says that computable functions are the same as partial recursive functions. Since computable and recursive are exact synonyms in context, that doesn't really make sense. --Trovatore (talk) 07:30, 18 July 2010 (UTC)[reply]
And the link there looped back to the article itself... I changed the links and rearranged the first paragraph some.
@Adrianwn: I follow a system like that in my own edits here. I only use "recursive" when referring to that specific model of computation, or to refer to primitive recursive function. I realize better now that I need to include "total" or "partial" in each article, at least enough to establish the local context. — Carl (CBM · talk) 12:28, 18 July 2010 (UTC)[reply]
@Trovatore: fair enough, we should not forbid "recursive" due to its widespread use in literature. However, would you agree that the term "recursive", without "partial" or "total", is ambiguous? – Adrianwn (talk) 16:01, 19 July 2010 (UTC)[reply]
Yes, but so is "computable". --Trovatore (talk) 18:54, 19 July 2010 (UTC)[reply]
When does "computable" mean anything else than "Turing-computable" (or something equivalent)? – Adrianwn (talk) 20:05, 19 July 2010 (UTC)[reply]
Carl has explained this above. Computable may mean "partial computable" or "total computable", depending on the author. (As far as I know, so may "Turing computable", for that matter.) --Trovatore (talk) 20:28, 19 July 2010 (UTC)[reply]

Will delete the extension section

[edit]

I am going to delete the section on extending the Ackermann function to the complex plane. It appears to be based on a misunderstanding. As this link shows (http://mathoverflow.net/questions/2944/), one can find many different entire functions that agree with, say, A(m,n) for a fixed m and all n. The present section seems to doubt that there is any extension at all.

Perhaps something different was meant. If so, someone else can rewrite the section and restore it. But as it is, I find it actively misleading. MathHisSci (talk) 23:19, 16 December 2010 (UTC)[reply]

Why not plug Graham's number into the Ackermann function?

[edit]

xkcd is awesome! — Preceding unsigned comment added by AdamDavid (talkcontribs) 21:25, 17 April 2011 (UTC) Umm... You probably shouldn't do something like this — Preceding unsigned comment added by EZ132 (talkcontribs) 16:45, 18 February 2020 (UTC)[reply]

ERROR in fourth equation in History.

[edit]

The fourth equation in History is WRONG. — Preceding unsigned comment added by 72.129.87.43 (talk) 00:36, 28 July 2011 (UTC)[reply]

Programming Examples

[edit]

I've just deleted 7 programming examples. I see no reason we need them. If your programming language supports recursion and you've passed Computer Science 101, then you know how to turn this function into code. It exposes no interesting issues in computer science. The output of the function isn't even interesting in any sense; it's just a simple example of a computable function that's not primitive recursive. The only way code for this function could be interesting is if it showed how to implement a recursive function in a language that doesn't natively support recursion, and I don't think this is the right article for that.

If we do need a programming example, can we at least have a single pseudocode example instead of seven?--Prosfilaes (talk) 18:46, 7 May 2012 (UTC)[reply]
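For reference, a single such example is short; a minimal sketch in Haskell (the notation used elsewhere on this page) of the usual two-argument recursive definition:

ack :: Integer -> Integer -> Integer
ack 0 n = n + 1                        -- A(0, n) = n + 1
ack m 0 = ack (m - 1) 1                -- A(m, 0) = A(m - 1, 1)
ack m n = ack (m - 1) (ack m (n - 1))  -- A(m, n) = A(m - 1, A(m, n - 1))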

Definition via higher-order functionals

[edit]

There appears to be some confusion about the definition of the Ackermann function using Succ and Iter. Using the iteration operator defined by

Iter(f)(0) = f(1)
Iter(f)(n+1) = f(Iter(f)(n))

I think the right definition for the Ackermann function is the following:

Ack(0) = Succ
Ack(m+1) = Iter(Ack(m))

and not

Ack(0) = Succ
Ack(m+1) = Iter(Ack)(m)

(note the parens).

The second definition cannot be right anyway, as it would imply

Ack(1) = Iter(Ack)(0) = Ack(1) = …

looping indefinitely. Or am I confused? Anyway, I would like to see a source for these definitions. — Tobias Bergemann (talk) 19:53, 14 July 2012 (UTC)[reply]

Well, the wrong version (i.e. the one currently in the article, with Ack(m+1) = Iter(Ack)(m)) is a type error. Your version works and agrees with the top-left 4×5 entries of the table.
IIRC, a definition which is primitive recursive and uses HOFs can be found in the paper "Total Functional Programming" by David Turner, but it may be slightly different - i.e. not extracting out something like the Iter here. --Daniel5Ko (talk) 00:27, 16 July 2012 (UTC)[reply]
I have changed the article back to the version using Ack(m+1) = Iter(Ack(m)) (undoing an edit by an anonymous user from April, 2012), as the other version seems to be obviously wrong.
The definitions in the article appear to have been introduced without source with an edit by an anonymous user in August, 2007. They were using Ack(m+1) = Iter(Ack(m-1)) though, and this was fixed by another anonymous user in July, 2008.
David Turner's paper on total functional programming does in fact define the Ackermann function (using Haskell notation), but he does not define a separate iteration function. — Tobias Bergemann (talk) 10:26, 16 July 2012 (UTC)[reply]
Okay. Now I've taken the time to look at the paper again, and realized that from the definition there the definition here can be derived in a quite straightforward way:
From the paper:
ack 0 n = n+1
ack (m+1) 0 = ack m 1
ack (m+1) (n+1) = ack m (ack (m+1) n)
Slightly tweaked with a local helper function:
ack 0 = succ
ack (m+1) = h where
	h 0 = ack m 1
	h (n+1) = ack m (ack (m+1) n)
Getting rid of some mentions of m inside the body of h:
ack 0 = succ
ack (m+1) = h (ack m) where
	h f 0 = f 1
	h f (n+1) = f (ack (m+1) n)
Getting rid of the last remaining m by using ack(m+1) = h (ack m) and h (ack m) == h f when seen from inside the local h:
ack 0 = succ
ack (m+1) = h (ack m) where
	h f 0 = f 1
	h f (n+1) = f (h f n)
Now, h can be made global and renamed iter. I'm not sure how useful it is to extract iter at all, but I'm quite sure that doing so is not more OR than calculating an integral. --Daniel5Ko (talk) 12:39, 16 July 2012 (UTC)[reply]
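For anyone who wants to run the derivation above: n+k patterns are no longer part of standard Haskell, so a sketch in current Haskell (the same definition, with only the patterns rewritten and h made global as iter) looks like this:

iter :: (Integer -> Integer) -> Integer -> Integer
iter f 0 = f 1                 -- was: iter f 0     = f 1
iter f n = f (iter f (n - 1))  -- was: iter f (n+1) = f (iter f n)

ack :: Integer -> Integer -> Integer
ack 0 = succ
ack m = iter (ack (m - 1))     -- was: ack (m+1) = iter (ack m)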
By the way, a definition in this style, even with a function called "iter", can be found on page 104 in [5]. Unfortunately it's not quite the same Ackermann function, and not just slightly shifted or something, but utterly wrong, since there ack n m == m+1 for all nonnegative n and m. A really bad typo! ;) --Daniel5Ko (talk) 21:54, 16 July 2012 (UTC)[reply]

Possible confusion between Ackermann's function and Friedman's function because both use the capital letter A

[edit]

In the current page: "Logician Harvey Friedman defines a version of the Ackermann function as follows:

For n = 0: A(m, n) = 1
For m = 1: A(m, n) = 2n
Else: A(m, n) = A(m - 1, A(m, n - 1))

He also defines a single-argument version A(n) = A(n, n).[6]"

I'm recommending, whether or not Friedman originally used 'A' when introducing his function, that Wikipedia change the letter for his function, to 'F' maybe. The reason is in the next paragraph from the article (see below the line): it's not going to be clear to a reader in a hurry which 'A' is being talked about. Maybe it doesn't matter mathematically which 'A' is meant, if the exact same argument can show both aren't primitive recursive, but it's still confusing.

"A single-argument version A(k) = A(k, k) that increases both m and n at the same time dwarfs every primitive recursive function, including very fast-growing functions such as the exponential function, the factorial function, multi- and superfactorial functions, and even functions defined using Knuth's up-arrow notation (except when the indexed up-arrow is used). It can be seen that A(n) is roughly comparable to fω(n) in the fast-growing hierarchy. This extreme growth can be exploited to show that f, which is obviously computable on a machine.... " 76.218.104.120 (talk) 08:54, 8 January 2014 (UTC)[reply]

I just got confused by this. Which of the definitions are really equivalent (define the same function) and which are different (and how do they relate to Ackermann's function)? The cited PDF is not available now, but I found Friedman's version of Ackermann's function in the abstract of one of his papers, which might be the one originally cited. It seems he defines it only for m, n ≥ 1. Palec90 (talk) 00:54, 12 January 2015 (UTC)[reply]

CS1 deprecated date parameters

[edit]

Editor CBM has reverted automated edits made to this article that remove the article from Category:Pages containing cite templates with deprecated parameters. I have added {{bots|deny=Monkbot}} to the top of the page which I think will prevent the bot from revisiting.

Trappist the monk (talk) 19:46, 12 January 2014 (UTC)[reply]

Support the bot's edit. The article need not retain deprecated parameters. Glrx (talk) 00:37, 13 January 2014 (UTC)[reply]
The word "deprecated" means that future uses should be avoided; it does not mean that current uses should be changed. Also, I see no reason to think that this article uses CS1. It may use templates which could also be used for CS1, but that is a separate matter. The same templates can be used to achieve many citation styles by putting various information into various fields; that does not mean there is an error. Indeed the use of the cite templates on this page long predates the existence of CS1 (2007 vs. 2010-2011). — Carl (CBM · talk) 01:06, 13 January 2014 (UTC)[reply]
Who said either way was an error? In any event, "deprecated" does not mean that all previous work is cast in stone and may not be changed. Yes, there are lots of silly little bots that make micro changes that tickle watch lists and waste lots of human editors' time. I get that. And breaking a date into separate year/month/day fields avoids more silliness with different bots/scripts coming around to change mdy format into dmy or ymd. And still other bots changing "date=1984" into "year=1984". Even with all of that nonsense, I think the date field is far and away more common than separate fields. Furthermore, keeping the date in one field avoids some common data errors when templates get copied and changed. I'm not big on consistency, but the change appears reasonable, and other editors are doing the work (they aren't making me do it). The change is OK by me. Glrx (talk) 03:08, 13 January 2014 (UTC)[reply]

Undefined m[] notation

[edit]

I removed these two equations involving the m[ ] notation, since it was unclear what the notation means.

In any case I think these equations are not essential, since a full definition of the 3-argument Ackermann function is given in the following Definition and Properties section. Perhaps the m[ ] notation has a standard meaning in the hyperoperation literature, of which I am unaware, but if so I think a further explanation is needed in the present context. CharlesHBennett (talk) 19:34, 6 July 2015 (UTC)[reply]

For what it's worth, the removed text makes use of what Hyperoperation#Notations calls "square bracket notation", with a[n]b = a ↑^(n−2) b (see Knuth's up-arrow notation). As you write, the notation as it was used in section Ackermann function#History was unclear. It is however also used later, in section Ackermann function#Ackermann numbers. Anyway, the article used to use the (I think) more conventional up-arrow notation, see revision 627620678, 2014-09-30. – Tobias Bergemann (talk) 20:23, 6 July 2015 (UTC)[reply]
The paragraph was never adjusted to account for the removal of those equations - there was still a colon to this day! - so I put 'em back in. I used the up-arrow version that was there back in 2014, since that should be (marginally) more familiar than m[] notation. - Naleh (talk) 08:17, 2 March 2018 (UTC)[reply]

Deleted section on Ackermann numbers

[edit]

Yesterday, the section on Ackermann numbers was removed by user Cagrhsi (talk · contribs), with the edit comment "The Ackermann numbers were determined to be a non-notable addition to this page during the Chihiro number controversy." I don't know what "Chihiro number controversy" refers to, and I can't really comment on the notability of the Ackermann numbers. Anyway, this edit left dangling redirects Ackermann number and Ackermann numbers. Should these be deleted now? – Tobias Bergemann (talk) 08:06, 16 October 2015 (UTC)[reply]

@Tobias Bergemann:
Looks like the "Chihiro number controversy" was a WP hoax; the WP article was deleted.
Ackermann numbers appear to be real. Wolfram article lists 4 references.
The two sequences are substantially the same. The only difference is that Ackermann numbers start with 0[0]0 and Chihiro numbers start with 1[1]1.
I'd simply restore the deleted section; that Chihiro numbers were a hoax does not mean Ackermann numbers are also a hoax.
Glrx (talk) 01:12, 18 October 2015 (UTC)[reply]
Thanks. So, Ackermann numbers are not a hoax. However, the term does not appear to be widely used, and I for one won't restore the deleted section.
Note to self: for future reference re the Chihiro number hoax see Wikipedia:Administrators' noticeboard/IncidentArchive837#Alleged hoax (mathematical topic) featured on front page and Wikipedia talk:WikiProject Mathematics/Archive/2014/May#Input requested about possible hoax. – Tobias Bergemann (talk) 08:29, 18 October 2015 (UTC)[reply]
Or perhaps the deleted section could be added to Knuth's up-arrow notation as suggested above. – Tobias Bergemann (talk) 08:36, 18 October 2015 (UTC)[reply]

Applications

[edit]

Does this function have any real-world applications, or is it only interesting in theoretical math? -- Beland (talk) 22:45, 21 October 2015 (UTC)[reply]

Assessment comment

[edit]

The comment(s) below were originally left at Talk:Ackermann function/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

A longer lead, and more balance would improve this. The "Implementations" and "Inverse" sections in particular would benefit from some review. Geometry guy 15:22, 9 June 2007 (UTC)[reply]

Last edited at 15:22, 9 June 2007 (UTC). Substituted at 19:44, 1 May 2016 (UTC)

Should we prioritise Ackermann's original function?

[edit]

Ackermann's original function is presented in the article under History, but the Ackermann-Péter function appears earlier than it, in the introduction. Would it be appropriate to move the "true" Ackermann function to the introduction and present it before leading into the simplified Ackermann-Péter function? The latter is of course better-known, but I think it may be more correct to present Ackermann's original function first. 00dani (talk) 01:18, 30 August 2016 (UTC)[reply]

I think that the 3-argument version is of historical interest, while the 2-argument version is what is usually called the "Ackerman function". Since this article is not primarily historical, putting the contemporary usage first and the history later seems reasonable to me. — Carl (CBM · talk) 01:33, 30 August 2016 (UTC)[reply]
External links modified

[edit]

Hello fellow Wikipedians,

I have just modified 2 external links on Ackermann function. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 12:26, 3 October 2016 (UTC)[reply]

Proof section moved here

[edit]

The following "proof" of the fact that the Ackermann function is not primitive recursive was added to the article. I have removed it since it is extremely unclear and at the very least contains numerous typos. The definition of the class is not clear since x is not defined (presumably it is the maximum of the xi?). AxelBoldt (talk) 03:20, 26 April 2017 (UTC)[reply]

Proof that it is not primitive recursive

[edit]

Remember the properties A(k, n) > n, A(k, n+1) > A(k, n), A(k+1, n) ≥ A(k, n+1) and A(k+1, n+1) = A(k, A(k+1, n)). Let C = { f | there is a k such that f(x_1, …, x_r) < A(k, x) for all x_1, …, x_r }, where x = max(x_1, …, x_r).

Now we will show that the initial functions are in C, and that C is closed under function composition and primitive recursion (the operations which generate exactly PR). Initial functions: c < A(c, x) (constant function), x + 1 < A(1, x) = x + 2 (successor) and x_i ≤ x < A(0, x) = x + 1 (projections).

Function composition: if f(y_1, …, y_n) < A(k, y) and g_i(x_1, …, x_r) < A(k, x) for every i (choosing k large enough to work for all of them), then f(g_1(…), …, g_n(…)) < A(k, A(k, x)) ≤ A(k+1, x+1) ≤ A(k+2, x).

Primitive recursion: if f(x_1, …, x_r) < A(k, x) (hp1) and g(x_1, …, x_r, n, z) < A(k, max(x, n, z)) (hp2), and h is defined by h(…, 0) = f(…) and h(…, n+1) = g(…, n, h(…, n)), then h(…, n) < A(k+1, x + n) by induction on n: h(…, 0) < A(k, x) ≤ A(k+1, x) (base of induction), and h(…, n+1) < A(k, A(k+1, x + n)) = A(k+1, x + n + 1), since max(x, n, h(…, n)) < A(k+1, x + n) (inductive hypothesis plus hp2). Finally, x + n ≤ 2·max(x, n) + 3 = A(2, max(x, n)), so for k ≥ 1 we get h(…, n) < A(k+1, A(k+1, max(x, n))) ≤ A(k+3, max(x, n)), hence h ∈ C, QED.

Therefore PR ⊆ C. But A ∉ C: otherwise there would be a k with A(m, n) < A(k, max(m, n)) for all m and n, and taking m = n = k would give A(k, k) < A(k, k), a contradiction. Hence A ∉ PR, QED.

Ackermann B function

[edit]

Rules:

  B(0, n) = A(n, 1)
  B(m + 1, 0) = B(m, 1)
  B(m + 1, n + 1) = B(m, B(m + 1, n))

  B(2, 0)
= B(1, 1)
= B(0, B(1, 0))
= B(0, B(0, 1))
= B(0, A(1, 1))
= B(0, A(0, A(1, 0)))
= B(0, A(0, A(0, 1)))
= B(0, A(0, 2))
= B(0, 3)
= A(3, 1)
= A(2, A(3, 0))
= A(2, A(2, 1))
...
= A(2, A(1, 3))
...
= A(2, A(0, 4))
= A(2, 5)
...
= 13

  B(3, 0)
= B(2, 1)
= B(1, B(2, 0))
...
= B(1, 13)
= A(B(1, 12), 1)
= A(A(B(1, 11), 1), 1)
...
= A(A(A(A(A(A(A(A(A(A(A(A(B(1, 1), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1)
...
= A(A(A(A(A(A(A(A(A(A(A(A(13, 1), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1)
= A(A(A(A(A(A(A(A(A(A(A(A(12, A(13, 0)), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1)
= A(A(A(A(A(A(A(A(A(A(A(A(12, A(12, 1)), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1), 1)

  B(4, 0)
= B(3, 1)
= B(2, B(3, 0))
= B(2, <too large>)

That's pretty salad, but not super salad. -- EZ132 (Talk | Contributions | Uploads | log) —Preceding undated comment added 17:00, 18 February 2020 (UTC)[reply]
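Under the rules as reconstructed above, B is easy to transcribe; a Haskell sketch (together with the two-argument Ackermann function it calls) for checking the small cases:

ackA :: Integer -> Integer -> Integer
ackA 0 n = n + 1
ackA m 0 = ackA (m - 1) 1
ackA m n = ackA (m - 1) (ackA m (n - 1))

b :: Integer -> Integer -> Integer
b 0 n = ackA n 1                 -- B(0, n) = A(n, 1)
b m 0 = b (m - 1) 1              -- B(m+1, 0) = B(m, 1)
b m n = b (m - 1) (b m (n - 1))  -- B(m+1, n+1) = B(m, B(m+1, n))

Here b 2 0 evaluates to 13, agreeing with the trace; b 3 0 is already out of reach.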

Alternate form of inverse Ackermann

[edit]

Gabriel Nivasch[1] gives another form of the inverse Ackermann function. First he defines the inverse Ackermann hierarchy α_1(n) = ⌈n/2⌉, α_2(n) = ⌈log_2 n⌉, α_3(n) = log* n, …, where α_{k+1}(n) is the number of times α_k must be applied to n before the result drops to 1 or less. Then the inverse Ackermann function is defined by α(n) = min{ k : α_k(n) ≤ 3 }, the smallest integer k so that α_k(n) ≤ 3. The two-parameter version is defined by α(m, n) = min{ k : α_k(n) ≤ 3 + m/n }. This definition might be easier to understand. -- EZ132 (Talk | Contributions | Uploads | log) —Preceding undated comment added 16:59, 18 February 2020 (UTC)[reply]
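A Haskell sketch of that hierarchy as reconstructed above (the base level and the threshold 3 are assumptions read from Nivasch's note and should be checked against it):

alpha1 :: Integer -> Integer
alpha1 n = (n + 1) `div` 2      -- ceiling of n/2

-- alpha_{k+1}(n): how many times alpha_k must be applied to n to reach 1 or less
step :: (Integer -> Integer) -> Integer -> Integer
step f n = if n <= 1 then 0 else 1 + step f (f n)

alphas :: [Integer -> Integer]  -- alpha_1, alpha_2, alpha_3 (log*), ...
alphas = iterate step alpha1

alpha :: Integer -> Int         -- smallest k with alpha_k(n) <= 3
alpha n = 1 + length (takeWhile (\f -> f n > 3) alphas)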

References

  1. ^ Nivasch, Gabriel. "Inverse Ackermann without pain"

Non-sequitur claim should be removed.

[edit]
"Note that not being primitive recursive does not mean that it cannot be calculated using a for loop."

This note links to an author's personal page (where he also sells Jesus books, not that there's anything wrong with that) containing code that very well may be, but isn't obviously, an Ackermann implementation. Just to be clear: of course you can calculate anything with for loops if you simulate a program stack and recursion using dynamic memory. An example isn't required, and this one is not particularly illustrative, because readers who aren't aware you can do this are going to have to accept that it works on faith. The bigger problem, however, is that this statement and link appear at the end of a proof that has nothing to do with any claim about for loops. I think the claim and link should be removed from the proof section, as they are unrelated. It may be worthwhile to add the link to a section discussing the for-loop question, but only if more context is provided, and I am not convinced this is a good idea either, because the example is not very illuminating. TricksterWolf (talk) 21:58, 22 June 2021 (UTC)[reply]
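To illustrate the "simulate a stack" point concretely (a generic sketch, not the code from the linked page): the Ackermann function can be evaluated by a single tail-recursive loop over an explicit stack of pending first arguments, i.e. a while-style loop rather than the bounded for loop of primitive recursion:

ackStack :: Integer -> Integer -> Integer
ackStack m n = go [m] n
  where
    go []       v = v                           -- stack empty: v is the answer
    go (0 : ks) v = go ks (v + 1)               -- A(0, v) = v + 1
    go (k : ks) 0 = go (k - 1 : ks) 1           -- A(k, 0) = A(k - 1, 1)
    go (k : ks) v = go (k : k - 1 : ks) (v - 1) -- A(k, v) = A(k - 1, A(k, v - 1))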

I concur. When people describe the primitive recursive functions in terms of "bounded for loops", they mean a for loop where the number of iterations is calculated in advance and the iteration count cannot be manipulated inside the loop. The linked program is "bounded" by the length of an array whose length is modified by the execution of the looped block. A C-style loop for(i = 0; i < expression; i++) isn't that kind of for loop if you can modify i or re-evaluate "expression" based on variables that may have been changed in the loop body. I'm going to go ahead and remove it, since I think it is phrased imprecisely and could lead to confusion unless clarification is added about what kinds of for loops are usually meant by "bounded for loop". 136.25.104.172 (talk) 21:24, 8 October 2021 (UTC)[reply]

How long do I need to scroll this page in order to get a short, concise definition?

[edit]

As the title says 46.242.11.163 (talk) 22:12, 7 December 2024 (UTC)[reply]

I added a summary of the (multiple) definitions in the lead. —David Eppstein (talk) 22:39, 7 December 2024 (UTC)[reply]