MathOverflow. Asked by Edmund Harriss on January 14, 2021
The history of mathematics over the last 200 years has many occasions when the fundamental assumptions of an area have been shown to be flawed, or even wrong. Yet I cannot think of any examples where, as a result, the mathematics itself had to be thrown out. Old results might need a new assumption or two. Certainly the rewritten assumptions often allow wonderful new results, but have we actually lost anything?
Note I would like to rule out the case where an area has been rendered unimportant by the development of different techniques. In that case the results still hold, but are no longer as interesting.
I wrote up a longer version of this question with a look at a little of the history:
http://maxwelldemon.com/2012/05/09/have-we-ever-lost-mathematics/
Edit in response to comments
My thinking was about results that have been undermined from below. @J.J. Green's example in the comments, of Italian algebraic geometry, seems like the best example I have seen. Trisection attempts and individually wrong results do not seem to grow into whole areas, but I would certainly find interesting any example where a flawed result had built a small industry before it was found to be wrong. I am fascinated by mathematics that has been overlooked and rediscovered (ancient and modern), but that is perhaps a different question.
I'm not sure that Drach's 1898 thesis on differential Galois theory "built a small industry", but it was certainly accepted and praised by his examiners before Vessiot pointed out a very serious flaw. However, there was no public acknowledgement of this at the time (or later) by any of the parties involved.
It wasn't until the 1983 publication of Pommaret's Differential Galois Theory that the story came to light. In his 1988 book Lie Pseudogroups and Mechanics, Pommaret reproduced and translated into English the original examiners' reports, and the key correspondence describing the error.
For more context and details about Drach's work, see T. Archibald, "Differential equations and algebraic transcendents: French efforts at the creation of a Galois theory of differential equations 1880–1910", Revue d'histoire des mathématiques 17 (2011), 373–401.
Answered by Phil Harmsworth on January 14, 2021
There are some results in coding theory (explicit constructions of codes, published in 1990) that are thought to be lost. See the heading "Lost Codes" at https://www.win.tue.nl/~aeb/codes/Andw.html, a page maintained by one of the authors of the 1990 paper.
Answered by Dima Pasechnik on January 14, 2021
Volume II of Frege's Grundgesetze der Arithmetik (Basic Laws of Arithmetic) had already been sent to the press when Bertrand Russell informed him that what we now call "Russell's paradox" could be derived from one of his basic laws. I do not know to what extent Frege's work was known and publicly accepted (volume I was published 10 years before volume II), but this seems a clear case where a major body of work was undermined "from below", to use the words of the OP.
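In modern notation (not spelled out in the original answer), the paradox runs as follows: the unrestricted comprehension licensed by Frege's Basic Law V allows one to form the set

$$R = \{\, x \mid x \notin x \,\},$$

from which a contradiction is immediate, since by definition $R \in R \iff R \notin R$.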
Upon learning of Russell's observation, Frege quickly wrote up an appendix to volume II, where he writes, "Hardly anything more unfortunate can befall a scientific writer than to have one of the foundations of his edifice shaken after the work is finished. This was the position I was placed in by a letter of Mr. Bertrand Russell, just when the printing of this volume was nearing its completion." (This translation appears in the Wikipedia article.)
Answered by Todd Trimble on January 14, 2021
Hilbert's $16^{\text{th}}$ problem.
In 1923 Dulac "proved" that every polynomial vector field in the plane has finitely many cycles [D]. In 1955-57 Petrovskii and Landis "gave" bounds for the number of such cycles depending only on the degree of the polynomial [PL1], [PL2].
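For readers unfamiliar with the setup (a standard formulation, added here for context), the objects in question are planar systems

$$\dot{x} = P(x,y), \qquad \dot{y} = Q(x,y),$$

where $P$ and $Q$ are polynomials of degree at most $n$. A limit cycle is an isolated closed orbit; the second part of Hilbert's $16^{\text{th}}$ problem asks for a bound $H(n)$ on the number of limit cycles in terms of $n$ alone, while Dulac's claim was the weaker statement that each individual such system has only finitely many.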
Coming from Hilbert, and being so central to developments in dynamical systems, this work certainly "built a small industry". However, Novikov and Ilyashenko disproved [PL1] in the 1960s, and later, in 1982, Ilyashenko found a serious gap in [D]. Thus, after 60 years the state of the art in that area was back almost to zero (except, of course, that people now had new tools and conjectures, and a better understanding of the problem!).
See Centennial History of Hilbert's 16th Problem (the citations above are from there), which gives an excellent overview of the problem, its history, and what is currently known. In particular, the diagram on page 303 summarizes the ups and downs described above very well, and is a good candidate for a great mathematical figure.
Answered by Rodrigo A. Pérez on January 14, 2021
I guess Conway's "lost proofs" qualify:
http://www.aimsciences.org/journals/pdfsnews.jsp?paperID=2447&mode=full
Answered by Blah on January 14, 2021
I don't know if this is an example of what you're asking. In mathematical logic, the Hilbert Program of the 1920's intended to come up with a finitary consistency proof and a decision procedure for analysis and set theory. Many luminaries including Hilbert himself, Bernays, Ackermann, von Neumann, etc. gathered in Göttingen for this purpose. Ackermann in 1925 published a consistency proof for analysis (that turned out to be incorrect) and many other promising results emerged. Then in 1931, Gödel's incompleteness theorem shut the whole thing down. Some valid theorems came out of it, but the program as a whole had to be (in some interpretations) completely abandoned.
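For context (a standard statement, not part of the original answer): Gödel's second incompleteness theorem says that for any consistent, effectively axiomatized theory $T$ containing enough arithmetic,

$$T \nvdash \mathrm{Con}(T),$$

i.e. $T$ cannot prove its own consistency. Since the finitary methods Hilbert allowed were expected to be formalizable within such a $T$, no finitary consistency proof of the kind the program sought could exist.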
Answered by none on January 14, 2021
An exposition along these lines about Arabic mathematics:
http://www-history.mcs.st-andrews.ac.uk/HistTopics/Arabic_mathematics.html
Answered by Eugene on January 14, 2021
I was once told of a paper in homological algebra where a new class of functors was introduced, generalizing Ext and Tor. For some years they were studied, and various properties were proved. Finally someone managed to give a complete description of the entire class. It consisted of two elements, Ext and Tor. (Sorry, I don't have more details.)
Answered by Per Manne on January 14, 2021
I feel the answer is obviously "yes", and indeed that much of 19th century mathematics was lost, in a serious sense, for much of the 20th century. I was struck recently by discovering that Henry Fox Talbot, the photographic pioneer, had written on what is clearly the area around Abel's theorem for curves, and that probably it is a long time since anyone reconstructed what he was doing. Also that George Boole's main work, as far as his contemporaries were concerned, dropped out of sight within a couple of decades.
The fact is that mathematics now is (a) axiomatic and (b) dominated by a canon. I'm reminded of Bertrand Russell's nightmare - where, a century after his death, the last copy of the Russell-Whitehead Principia Mathematica is in danger of being thrown out by an ignorant librarian. It actually isn't obvious that even such a pioneering work makes it into the mathematical logic "canon" around later developments. (I hear protests!) Maybe it is worth pointing out Hilbert's interest in Anschauliche Geometrie, in other words non-axiomatic, intuitive geometry. And the canon should be "porous", as has been argued by some of the Moscow school. It seems quite an illuminating take on mathematics as a living tradition that simple accretion of "known results" is misleading.
Answered by Charles Matthews on January 14, 2021
There are "Lectures on Lost Mathematics" by B. Grünbaum. They were given at the University of Washington in 1975. The notes are available here
Answered by Kristal Cantwell on January 14, 2021