Validity of results

Can you vouch for the validity of the results in papers that you cite and use in your own articles?

In April this year, Blagojević, Cohen, Crabb, Lück and Ziegler posted a preprint on the arXiv (arXiv:2004.12350), writing in the abstract: “This invalidates a paper by three of the present authors […] who used a claimed intermediate result from […]”. This, of course, got me interested, and so I took a look at the introduction of that preprint.

The story is that there is an influential paper that has been cited, and whose main result has been used, in quite a number of papers since its publication, but whose proof turned out to be incorrect (apparently the main result itself is correct, as has now been proven by these five authors). This was only noticed because Blagojević, Lück and Ziegler used in their 2016 paper not only the main result of the flawed paper but also intermediate results, which turned out to be actually wrong.

What bothers me the most here is the quite long list of papers that use the main result of the flawed paper, apparently without noticing that the proof is wrong. I already had the general feeling that in mathematics too few people actually read in detail, and thereby check the validity of, influential or seminal papers, or papers that they essentially use in their own work. The story told above reinforced this feeling.

I also admit my own guilt here: I think I have read fewer articles by other authors than I have written myself. Of course, I could talk my way out of this by saying that I don’t have a permanent position yet, and that it is therefore more important for me to write my own papers than to read other people’s. But I have the feeling that this queue of excuses won’t stop once I finally get a permanent position.

So is there any reasonable way to change this, i.e., to motivate more people to read other people’s papers? Or am I completely wrong in my supposition: are people actually reading more than I suppose they do, and is all this no problem at all?

One thought on “Validity of results”

  1. I think increased research in the area of computer-assisted proofs will eventually solve the problem. Motivating more people to examine extremely technical material does not sound like a realistic goal. A more worthwhile goal would be to try to get more mathematicians interested in the area of computer-aided mathematics and artificial intelligence. If they are willing to formulate their own special knowledge, heuristics, counterexamples, etc. in terms of an algorithmic proof theory, the first step towards an automation of their reasoning process is taken. A case in point is Wilf-Zeilberger theory (a minimal sketch follows below this comment): https://link.springer.com/article/10.1007/BF02100618. Admittedly, this paper is about algorithmic proof theory in the area of combinatorial identities, and the difficulty certainly depends on the field. The usual objection to computer proofs is that a proof algorithm may contain errors, in its conception and especially in its implementation. However, I think that in the long run, finding errors made by a proof algorithm could be significantly easier than finding errors in human-made proofs. For one thing, the uniformity of notation and conventions makes things easier to comprehend. In addition, one proof algorithm may be able to construct very many different proofs of different results. If there is a bug, irregularities in the proofs can be found, and such irregularities point to the exact bug in the code, i.e., each bug is responsible for many errors of a similar kind. A human, on the other hand, makes many mistakes of differing natures, all of which have to be spotted individually and which are hard to classify or predict.
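To make this concrete, here is a minimal sketch of the WZ idea in Python, assuming SymPy is available; the toy identity, the certificate R(n, k), and the SymPy-based check are illustrative choices, not anything taken from the paper linked above.

```python
import sympy as sp

n, k = sp.symbols('n k', positive=True)

# Summand F(n, k) = C(n, k) / 2^n; summing over k, the toy identity
# sum_k C(n, k) = 2^n becomes "sum_k F(n, k) = 1 for all n".
F = sp.binomial(n, k) / 2**n

# A WZ certificate for this identity (the standard textbook choice);
# it defines the companion function G(n, k) = R(n, k) * F(n, k).
R = k / (2 * (k - n - 1))
G = R * F

# The WZ condition: F(n+1, k) - F(n, k) = G(n, k+1) - G(n, k).
# Summed over all k, the right-hand side telescopes to zero, so
# sum_k F(n, k) is independent of n; the case n = 0 then gives 1.
wz = F.subs(n, n + 1) - F - (G.subs(k, k + 1) - G)

# Dividing by F cancels the binomials, leaving a rational-function
# identity in n and k that mechanical simplification can decide.
print(sp.simplify(sp.combsimp(wz / F)))  # expected output: 0
```

The last line is the point: once a certificate is known, checking the proof is routine rational arithmetic, and a bug in such a checker would tend to show up as exactly the kind of systematic irregularity across many identities that the comment describes.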
