Conference Topic
This conference aims to discuss and explore the practice of mathematics and logic from an epistemological standpoint. That is, we look for papers on how the concrete practices of mathematicians and logicians may influence the results of their enquiries and, more broadly, the notion of what counts as proof. Through this conference, the PratiScienS group also aims to promote methodological reflection on the difficulties that a critical analysis of such practices in logic and mathematics entails, and on issues related to the development of appropriate exploratory conceptual tools.
‘Practices’ and Knowledge-Making in Mathematics and Logic
The concept of ‘scientific practices’ has become quite worn out, owing to the many and varied uses to which it has been put. In our project, we regard ‘practices’ as constitutive of the procedures leading to the production of stable research outcomes, and their validation as results by human groups. Thinking about practices as such ‘knowledge-making processes’ can deliver some useful critical deconstructions of the various sub-meanings attached to the concept. First, practices, in the sense of knowledge-making processes, can refer to the very concrete know-how deployed by practitioners in their ordinary activities. Second, ‘practices’, especially with regard to logic and mathematics, can also mean the demonstrative procedures enacted by practitioners, as reported in publications (e.g. the writing of a proof for a theorem).
So far, philosophy of mathematics and logic has mostly focused on foundational issues. However, recent work has been carried out on the role of practices in mathematics and logic, and a growing interest in the issue can be felt. Within this new trend, this conference aims to widen analytical perspectives. We aim to explore and discuss: first, the elaboration of proofs and the related results, theorems for instance, as a coherent and connected whole; second, the processes and strategies deployed by mathematicians and logicians in their attempts to elaborate these demonstrations and arrive at their results. Thus, ‘practices’ may refer to interactions between researchers, their modes and structures of communication, their research institutions, and their daily activities and routines. Then, on more epistemological grounds, one is also drawn to explore the ways in which ‘practices’ express the interactions between researchers and their topics of research, the degree and kinds of intimacy of these interactions, and how they evolve over time.
Which Specificities for Mathematics and Logic?
The large majority of scholarly studies of scientific practices focus on disciplines that make extensive use of experimental techniques and empirical approaches, physics in particular. A direct consequence of this habit is the limited, or even absent, relevance of the concepts and analyses proposed so far for the study of mathematics and logic. The comparison between practices in the formal sciences and in the experimental sciences, with its appeal to analogies based primarily on observations made within experimental disciplines, is an issue that urgently calls for critical questioning. With experimentation cast as the principal method of the so-called experimental sciences, the suggestion is also to explore and analyse the activities of contemporary mathematicians and logicians through this lens. In particular, the suggestion is to look for likely candidates equivalent to experimentation in mathematics and logic, and to explore their significance for the production of results in these fields.
An important development to be accounted for here is the rising and pervasive use of computers in mathematics, and the various impacts of such uses with regard to the possible emergence of what may count as a new and different form of practice, still awaiting categorisation. Authors are beginning to explore the idea of ‘experimental mathematics’ in relation to the use of computers for the elaboration and establishment of complex demonstrations, and for simulation practices and their results. More generally, the habit is also to focus on demonstrations as the main object and core of mathematical and logical research. Consequently, exploring logic and mathematics from the standpoint of practices usually involves discussing demonstration practices, and the material devices and artefacts, including pen and paper, used in these disciplines. Still, a more complete account also calls for further critical exploration of issues such as the use of diagrams and other visual methods in mathematics and logic, their epistemological status, and their impact during exploratory research stages. Briefly, the issue of practices in mathematics and logic is worth a full-scale exploration in itself, and a most interesting emerging research question here is whether researchers’ practices, well-established or novel, affect what counts as proofs and validated results.
The Conference and PratiScienS
This conference is part of a series of conferences organised by the PratiScienS research group at the Archives Henri Poincaré, Université Nancy 2, Nancy, France. Previous conferences tackled the topics of robustness (2008) and the contingency vs inevitability debate (2009). The group explores the issue of practices in their relations with the construction and validation of scientific knowledge. Consequently, this conference also aims to generate discussions tackling the issue of practices in logic and mathematics, in the ways suggested and framed by the project.
Invited Speakers
- Jessica Carter, University of Southern Denmark, Denmark
- Karine Chemla, REHSEIS-SPHERE, CNRS & University of Paris Diderot Paris 7, France
- Catarina Dutilh Novaes, University of Amsterdam, The Netherlands
- Jeremy J. Gray, Open University, UK
- Brendan Larvor, University of Hertfordshire, UK
- Danielle Macbeth, Haverford College, US
- Marco Panza, IHPST-University of Paris 1, France
- Jean-Michel Salanskis, Université Paris Ouest Nanterre La Défense, France
- Dirk Schlimm, McGill University, Canada
- Henrik Kragh Sørensen, University of Aarhus, Denmark
- Jean Paul Van Bendegem, Vrije Universiteit Brussel, Belgium
Contributors
- Andrew Aberdein, Florida Institute of Technology, US
- Marianna Antonutti Marfori, University of Bristol, UK
- Vincent Ardourel, IHPST-University of Paris 1, France
- David Corfield, University of Kent, UK
- Liesbeth De Mol, Universiteit Gent, Belgium
- Valeria Giardino, Institut Jean Nicod, Paris, France
- Norma B. Goethe, National University of Cordoba, Argentina, and University of Göttingen, Germany
- Albrecht Heeffer, Universiteit Gent, Belgium
- Erich Reck, University of California at Riverside, US
- János Tanács, Budapest University of Technology and Economics, Hungary
Scientific Committee
- Léna Soler (LHSP – Archives Henri Poincaré, IUFM Lorraine, France) and PratiScienS
- Gerhard Heinzmann, LHSP – Archives Henri Poincaré, Université Nancy 2, France
- Paolo Mancosu, University of California at Berkeley, Berkeley, US
- Philippe Nabonnand, LHSP – Archives Henri Poincaré, Université Nancy 2, France
- Jean-Michel Salanskis, Université Paris Ouest Nanterre La Défense, France
- Andrew Warwick, Imperial College London, London, UK
MONDAY 21ST OF JUNE 2010 | |
REGISTRATION AND INTRODUCTION | |
8h30 - 9h | Registration |
9h - 9h20 | Welcome and Introduction |
TOWARDS A PHILOSOPHY OF MATHEMATICAL PRACTICE | |
9h20 - 10h10 | Jean Paul Van Bendegem (Vrije Universiteit Brussel, Belgium), “Mathematical Practice and Its Philosophical Relevance” |
10h10 - 11h | Brendan Larvor (University of Hertfordshire, UK), “What Can Toulmin Teach Us about Mathematical Inferences?” |
COFFEE BREAK | |
11h20 - 12h | Marianna Antonutti Marfori (University of Bristol, UK), “Proof and Rigour in Mathematical Practice” |
12h - 12h50 | Henrik Kragh Sørensen (University of Aarhus, Denmark), “Experimental Mathematics and the Notion of Proof” |
LUNCH | |
14h30 - 15h20 | Danielle Macbeth (Haverford College, USA), “Reasoning in the Languages of Mathematics” |
15h20 - 16h | Andrew Aberdein (Florida Institute of Technology, US), “Epistemic Luck and Mathematical Practice” |
COFFEE BREAK | |
16h20 - 17h10 | Catarina Dutilh Novaes (University of Amsterdam, The Netherlands), “Towards a Practice-Based Philosophy of Logic” |
17h10 - 17h50 | David Corfield (University of Kent, UK), “The Robustness of Mathematical Practice” |
TUESDAY 22ND OF JUNE 2010 | |
HISTORICAL PERSPECTIVES ON PRACTICES | |
9h30 - 10h20 | Karine Chemla (REHSEIS-SPHERE, CNRS & University of Paris Diderot Paris 7, France), “Proving the Correctness of Algorithms by Means of Diagrams. Liu Yi and the Solution of Algebraic Equations” |
10h20 - 11h | Norma B. Goethe (National University of Cordoba, Argentina, & University of Göttingen, Germany), “Stepping Inside Leibniz’ Intellectual Workshop” |
COFFEE BREAK | |
11h20 - 12h | Albrecht Heeffer (Universiteit Gent, Belgium), “Epistemic Justification in Abbaco Arithmetic and Algebra” |
12h - 12h40 | Erich Reck (University of California at Riverside, USA), “Dedekind, Fruitful Definitions, and Mathematical Abstraction” |
LUNCH | |
14h30 - 15h20 | Jeremy J. Gray (Open University, UK), “Poincaré on Knowledge of Geometry and Geometry as a Source of Knowledge” |
15h20 - 16h | János Tanács (Budapest University of Technology and Economics, Hungary), “Elective Affinities. The Role of Intellectual Cooperation in the Early Period of János Bolyai’s Attacking the Euclidean Parallel Postulate” |
COFFEE BREAK | |
16h20 - 17h10 | Dirk Schlimm (McGill University, Canada), “Towards a (Failed) Renewal of Logic” |
19H30: CONFERENCE DINNER | |
WEDNESDAY 23RD OF JUNE 2010 | |
LOOKING INTO PRACTICES | |
9h30 - 10h20 | Jessica Carter (University of Southern Denmark, Denmark), “Diagrams and Proofs in Analysis” |
10h20 - 11h | Valeria Giardino (Institut Jean Nicod, Paris, France), “Seeing As and Multiplicity of Interpretation in Mathematics” |
COFFEE BREAK | |
11h20 - 12h | Liesbeth De Mol (Universiteit Gent, Belgium), “From Practices of Mathematical Logic to a Natural Law? The Case of Alonzo Church and Emil Post” |
12h - 12h40 | Vincent Ardourel (IHPST-Université Paris 1, France), “The Role of Theoretical Physics in Research in Constructive Mathematics” |
LUNCH | |
14h30 - 15h20 | Marco Panza (IHPST-Université Paris 1, France), “The Twofold Role of Diagrams in Euclid’s Plane Geometry” |
15h20 - 16h | Jean-Michel Salanskis (Université Paris Ouest, France), “Concluding Remarks” |
END OF THE CONFERENCE |
INVITED PAPERS
Jessica Carter (University of Southern Denmark, Denmark): “Diagrams as Representations in Proofs from Analysis”
The talk will discuss the role of diagrams in mathematical reasoning, in the light of a case study from contemporary mathematical practice, more precisely from analysis. In the presented example, diagrams were used in order to obtain certain combinatorial expressions. I will argue that, although these diagrams are removed in the final versions of the proofs, they still play an important part in them: they contribute both to concept formation and to the representation of proofs. By pointing to these roles, I will argue that diagrams genuinely figure in mathematical reasoning.
Karine Chemla (REHSEIS-SPHERE, CNRS & University of Paris Diderot Paris 7, France): “Proving the Correctness of Algorithms by Means of Diagrams: Liu Yi and the Solution of Algebraic Equations”
Several writings bear witness to how at the end of the 10th century or the beginning of the 11th century Liu Yi used diagrams to account for the correctness of algorithms yielding “the” root of algebraic equations. I want to analyze these diagrams, how the writing shapes a relationship between the algorithms and the diagrams, and finally how the diagrams contributed to writing down, and working with, the equation. In particular, I shall show, on the one hand, how the concept of equation depended on the diagram of the rectangle and, on the other, how Liu Yi uses specificities of diagrams to which previous Chinese texts bear witness, putting them to a new use and thereby testifying to the extension of the concept of algebraic equation.
Catarina Dutilh Novaes (University of Amsterdam, The Netherlands): “Towards a Practice-Based Philosophy of Logic”
In different sub-fields of philosophy, focus on actual practices has been an important (albeit still somewhat non-mainstream) approach for some time already. This is true in particular of general philosophy of science, and to a lesser extent of philosophy of mathematics. Against this background, it is perhaps surprising to notice that no such practice-based turn has yet taken place within the philosophy of logic. Why is that? Is there something about logic that makes it intrinsically unsuitable for a practice-based approach? What are the prospects for insights on traditional philosophical questions pertaining to logic (e.g. the nature and scope of logic) to be gained from a focus on the practices of logicians?
In the first part of the paper, I delineate what a practice-based philosophy of logic would (could) look like, insisting in particular on why it can be relevant and how it is to be undertaken. The ‘how?’ question is especially significant, as crucial methodological challenges must be addressed for the formulation of a methodologically robust practice-based philosophy of logic. In the second part, I illustrate how a practice-based approach to (the philosophy of) logic could be developed by means of a case study: the role played by formal languages in logic, in particular in the practices of logicians. I argue that formal languages play a fundamental operative role in the work of logicians, as a paper-and-pencil, hands-on technology triggering certain cognitive mechanisms – more specifically, countering some of our more ‘natural’ cognitive mechanisms which are not particularly suitable for research in logic (as well as in other fields). I substantiate these claims with empirical data from the psychology of reasoning tradition. The history of logic clearly shows that the results obtained once logicians had full-fledged formal languages at their disposal are quite different from those obtained without the use of this particular technology, and my analysis purports to offer at least a partial explanation of this phenomenon. By means of this analysis, I hope to show that a practice-based philosophy of logic can be a fruitful enterprise, in particular if accompanied by much-needed methodological reflection.
Jeremy J. Gray (Open University, UK): “Poincaré on Knowledge of Geometry and Geometry as a Source of Knowledge”
Poincaré had a strong interest throughout his life in non-Euclidean geometry, and held controversial views on how the mathematical possibility of this geometry affected our understanding of the nature of space. His views make an interesting contrast with others in the same field, such as the Italian mathematician Federigo Enriques and the German philosopher Leonard Nelson.
Brendan Larvor (University of Hertfordshire, UK): “What Can Toulmin Teach Us About Mathematical Inferences?”
The Philosophy of Mathematical Practice depends upon rejecting the claim that fully formalised proofs (as studied by proof theory) model real mathematical proofs with sufficient accuracy to explain the nature and success of mathematical knowledge. Real mathematical proofs are variously said to be inhaltlich (Lakatos), to appeal to the semantics of the non-logical expressions (Rav), or to involve topic-specific forms of inference (Poincaré, Robinson). These treatments leave open the scope and nature of the non-formal elements in mathematical inference. However, recent work on purity of methods (Detlefsen) and purity of proof (Arana) suggests that an analysis is possible. Moreover, Toulmin’s early work on argumentation theory introduced the concept of an ‘argumentation field’, defined as a body of inferential practices (e.g. legal argument, moral argument, etc.).
In this talk, I will distinguish a) proofs that appeal to the non-formal content of the theorem; b) topic-specific proofs that appeal to non-formal content beyond the scope of the theorem but confined to its topic; and c) field-specific proofs, i.e. proofs that employ non-formal inferences characteristic of a field in Toulmin’s sense. Drawing examples from proofs of the prime number theorem and related results, I shall argue that in mathematical practice a) is rare, and that mathematicians have good reasons for preferring b) and c). I shall also suggest that for philosophers of mathematical practice, c) is more fundamental than b), and discuss the status of this remark (is it an empirical claim or an a priori feature of our discipline?). Finally, I shall suggest that a), b) and c) are historical categories; but this is unsurprising given the previous point, because practice is a historical category. Main claim: the distinction between a), b) and c) provides the framework for a more nuanced analysis of the non-formal elements of mathematical inference than has hitherto been available.
Danielle Macbeth (Haverford College, US): “Reasoning in the Languages of Mathematics”
The received view, both in mathematics and in philosophy, is that mathematical languages, that is, the systems of written signs we devise in mathematics, are nothing more than a convenient shorthand, that writing serves in mathematics, as written natural language does, to register or record results that are obtained independently. I argue that at least in many cases the systems of signs that are used in mathematics do not merely record but actually embody the relevant bits of mathematical reasoning, that they put the reasoning before our eyes in a way that is simply impossible in written natural language. Three cases are considered: diagrammatic reasoning in Euclid, algebraic problem solving in Descartes and Euler, and the deductive proof of theorems from definitions in Frege.
Geometrical concepts are defined in Euclid by parts in spatial relations; hence the contents of the concepts of such figures can be iconically displayed in drawn diagrams. But a Euclidean diagram is not merely a collection of such drawings. As the course of reasoning through the diagram reveals, various collections of lines and points are icons of, say, circles or triangles only when viewed in a certain way. The demonstration is ampliative for just this reason: because we are able perceptually to take parts of one whole and combine them with parts of another whole to form a new whole, we are able to discover something new.
Reasoning in Euclid is intra-configurational; it stays within the diagram. And this is possible because collections of lines signify geometrical figures only relative to ways of regarding them. Algebraic problem solving is quite different. Because the primitive signs of the symbolic language of algebra are meaningful antecedent to and independent of a context of use, the only way to move from one thought to another in this system is by rewriting. Reasoning in algebra is trans-configurational. Demonstrations in this system can nonetheless be ampliative because the contents of different equations can be combined by putting equals for equals. Much as the task of the Euclidean geometer is to find the diagram that provides the medium of reasoning from the starting point to the desired endpoint, so the task of the algebraist is to find a middle to show that two different and apparently unrelated expressions are in fact expressions for one and the same function.
Reasoning in Frege’s concept-script combines aspects of both systems. As in Euclid the primitive signs designate only in a context of use and relative to an analysis, but as in algebra the reasoning is trans-configurational. Because the primitive signs function as they do, Frege can exhibit the contents of concepts as they matter to inference. But those contents can also be regarded in other ways as well. This, combined with the fact that contents from different Begriffsschrift formulae can be combined in inferences governed by hypothetical syllogism, entails that even deductive reasoning from defined concepts can be ampliative in Frege’s system. The truth that is derived obtains independently of the activity of writing, but the discovery of that truth does not. In all three cases, it is by our paper-and-pencil activity that we come to be aware of objective mathematical truths.
Marco Panza (IHPST-University of Paris 1, France): “The Twofold Role of Diagrams in Euclid’s Plane Geometry”
Proposition I.1 of Euclid's Elements requires one to “construct” an equilateral triangle on a “given finite straight line,” or, in modern parlance, on a given segment. To achieve this, Euclid takes this segment to be a certain segment AB displayed through an appropriate diagram, then describes two circles with centres at its two extremities A and B, respectively, and takes for granted that these circles intersect each other in a point C distinct from A and B.
This last step is not warranted by his explicit stipulations (definitions, postulates, common notions). Hence, either his argument is flawed, or it is warranted on other grounds. According to a classical view, “the Principle of Continuity” provides such a ground. M. Friedman has rightly remarked, however, that in the Elements “the notion of ‘continuity’ […] is not logically analyzed” and thus there is no room for a “valid syllogistic inference of the form: C1 is continuous […] C2 is continuous [, then] C exists” (where C1 and C2 are of course two circles).
A possible solution of the difficulty is to admit that Euclid's argument is diagram-based and that continuity provides a ground for it insofar as it is understood as a property of diagrams.
Proposition I.1 is, by far, the most popular example used to justify the thesis that many of Euclid's geometrical arguments are diagram-based. Many scholars have recently articulated this thesis in different ways and argued for it. My purpose is to reformulate it in a general way, by describing what I take to be the twofold role that diagrams play in Euclid's plane geometry.
Euclid's arguments are about geometrical objects. Hence, they cannot be diagram-based unless diagrams are supposed to have an appropriate relation with these objects. I take this relation to be a quite peculiar sort of representation. Its peculiarity depends on the two following claims that I shall argue for:
C.i) The identity conditions of EPG objects are provided by the identity conditions for the diagrams that represent them;
C.ii) EPG objects inherit some properties and relations from these diagrams.
In arguing for these claims, I shall try to describe an argumentative practice that includes appropriate manipulations of diagrams, aiming both to provide abstract objects and to provably assign properties and relations to them. I shall particularly insist on the fact that these abstract objects are not elements of a fixed domain of quantification, but are rather put at the disposal of mathematicians only locally, in the context of a single argument within which they are individually represented by appropriate diagrams (conceived as tokens, rather than types).
In a sense, the procedures providing these objects thus have no stable outcome: the objects they provide are not provided once and for all. Still, arguing about them (or about their being provided) allows one to prove stable universal propositions.
I shall also try to explain the logical form of these propositions, which is not the usual one of universal statements of predicate logic. Briefly speaking, I shall argue, to take a simple example, that Proposition I.5 of the Elements does not state that all isosceles triangles have equal angles at the base, but rather that constructing isosceles triangles requires constructing equal angles at their base.
Dirk Schlimm (McGill University, Canada): “Towards a (Failed) Renewal of Logic”
The geometer Moritz Pasch (1843-1930) developed an empirical epistemology for geometry and analysis, and in the course of this work he became more and more interested in the logical inferences that are licensed in mathematics. This talk presents his attempts to contribute to a ‘renewal of logic’ and discusses his motivations as well as the reasons why this project led to a dead end. By looking sympathetically at such failed attempts, we get a better understanding of the conceptual difficulties involved in arriving at the modern conception of logic and proof.
Henrik Kragh Sørensen (University of Aarhus, Denmark): “Experimental Mathematics and the Notion of Proof”
Although experimental methods have been integral to the process of suggesting mathematical connections and results for centuries, such methods are almost universally seen as insufficient for justification of mathematical truths. However, during the last decades, the combined emergence of high-speed computing facilities and interactive mathematical software has allowed for an experimental approach which some protagonists claim even has justificatory powers.
Experimental mathematicians employ computers in their research practice to visualize and experiment with mathematical entities and structures and to aid in the exploration of details. When they do, they need to address new technical and philosophical issues such as whether the emergent phenomena are genuine facts or merely computational artefacts. Such analyses of the robustness of experimental mathematics are indicative of an opening of the philosophy of mathematics to some of the discussions in the philosophy of empirical science.
Despite provocative claims by some experimental mathematicians, the notion of proof remains the gold standard of mathematical justification. The process of turning experimentally obtained insights into proofs is complex and involves various mechanisms of reconstruction and distillation. Based on a case study of the so-called BBP formula, some such steps are identified. These can then be applied to other cases, before their general philosophical perspectives are discussed. This last discussion touches upon themes from philosophical studies of robustness, disciplinary agency, and computer simulations.
Jean Paul Van Bendegem (Vrije Universiteit Brussel, Belgium): “Mathematical Practice and its Philosophical Relevance”
An often-heard complaint aimed at philosophers who study practices in the sciences, mathematics, and logic is that it is not at all clear what the philosophical relevance of such work might be. Note that the importance of the subject for historians and sociologists, by contrast, seems accepted with hardly any discussion. The aim of this broadly conceived presentation is to show that there is indeed a philosophical relevance. I will present three themes to support this claim.
(1) ‘Real’ proofs versus ideal(ized) proofs. It is obvious that ideal proofs, since they are usually formulated in a strongly regimented language, eliminate a number of characteristics that are essential for ‘real’ proofs (as they appear in the mathematical literature). Examples include stylistic considerations, rhetorical aspects, and the explanatory value of a proof, to mention just three. Seen from this perspective, the programme of the Bourbaki group is, besides being a foundational attempt, also a highly specific stylistic choice: it proposes a very particular way of writing down mathematics. It seems obvious that such choices will co-determine which mathematical theories and methods prove successful, and will thus contribute to an understanding of what mathematical progress could be.
(2) What makes a mathematical problem interesting? Not a single mathematician would be interested in reading the proof of the statement that “If 2+2 = 5, then Riemann’s hypothesis is the case”, for this problem is not interesting. But how do mathematicians select interesting problems? It seems clear that this matter is related to research agendas, research programmes, and research traditions. Answers to this problem will surely bear on the ongoing discussion of whether revolutions do or do not occur in mathematics. As similar questions are deemed extremely relevant in the philosophy of science (for physics, etc.), so are they in mathematics. The question of whether there is a necessity present in the development of mathematics has been considered a problem at the core of the philosophy of mathematics.
(3) The forgotten yet relevant aspects. Looking at all aspects of mathematical practice, and not exclusively at the end-products delivered, generates a number of philosophically relevant questions and discussions (again deemed relevant in the philosophy of science). How does mathematical ‘discovery’ (if that is the right term) proceed? Is there an underlying logic, as Lakatos believed? Or is it a matter of clever heuristics? In what ways does the possibility of computer-assisted search for proofs affect mathematical practice itself and our (philosophical) image of mathematics? Or, as a recent example showed, can a mathematical problem be solved not by a single mathematician, but instead by a loosely organized network of mathematicians and (clever) amateurs? A related question is whether it is possible to express these processes in a formal framework (models of belief change, of directed search, of information flow, and so on).
Finally, when we have answers to all these questions, the “big” question can (and should) be asked: what are the underlying metaphysics and epistemology of the study of mathematical practice itself? Or, to put the matter rather bluntly: does it take a Platonist to study the practice of a (mathematical) Platonist? Or can a constructivist do it as well?
CONTRIBUTING PAPERS
Andrew Aberdein (Florida Institute of Technology, US): “Epistemic Luck and Mathematical Practice”
A recent survey distinguishes six different varieties of epistemic luck [Pritchard, D., 2005, Epistemic Luck, Oxford UP]. At least four are consistent with knowledge, and at least one is not. This paper contends that confusion between varieties of epistemic luck has impeded the understanding of mathematical practice: controversy over one variety has obscured the importance of the others.
1. Capacity Luck
“It is lucky that the agent is capable of knowledge.” [Pritchard, op. cit., 134]
Capacity luck arises when the agent is only fortuitously capable of knowledge, as when sudden death is narrowly averted. Capacity lucky beliefs are still knowledge, whether or not mathematical. Yet capacity luck raises questions specific to mathematics: could humanity have evolved different mathematical capacities? How else might mathematics have developed?
2. Evidential Luck
“It is lucky that the agent acquires the evidence she has in favour of her belief.” [Pritchard, op. cit., 136]
Evidential luck turns on the lucky apprehension of supporting evidence. If the evidence provides sufficient justification, the belief is unproblematically knowledge, however lucky its acquisition. Serendipity is lucky in this sense, as in the accidental dissemination of Gerhard Frey’s suggestion that the Shimura-Taniyama conjecture implies Fermat’s Last Theorem: “I sent it to only a few people, but it got out of control somehow (which was fortunate for mathematics)” [Mozzochi, C.J., 2000, The Fermat Diary, American Mathematical Society, 10].
3. Doxastic Luck
“It is lucky that the agent believes the proposition.” [Pritchard, op. cit., 138]
Doxastic luck arises when the agent might in similar circumstances not have formed the belief, despite having the same data. This is consistent with knowledge. It describes an experience more familiar in mathematics than in other sciences: a novel insight can strike a mathematician considering a problem which has frustrated well-informed peers.
“I looked at Barry [Mazur] and I said, “You know, I am trying to generalize what I have done so that we can prove the full strength of Serre’s epsilon conjecture,” and Barry looked at me and said, “Well you have done it already, all you have to do is add on some extra gamma zero of m structure and run through your argument and it still works, and that gives everything you need.” This had never occurred to me as simple as it sounds.” [Mozzochi, op. cit., 11]
The speaker, Ken Ribet, was better informed, but doxastically unlucky.
4. Content Luck
“It is lucky that the proposition is true.” [Pritchard, op. cit., 134]
Content luck comprises unlikely truths, such as coincidences. Their unlikelihood is no obstacle to their being known. However, in mathematics, the very existence of such luck is controversial. As Sir Michael Atiyah says of Coxeter groups:
“These surprising connections in mathematics ... are not accidents. They are somehow fundamental. Even if you didn’t know they were there before, once you see them you have to investigate and by investigating you discover lots and lots of things. They are a very important part of mathematics in terms of directing the search of mathematicians into new areas.” [Roberts, S., 2007, King of Infinite Space: Donald Coxeter, The Man Who Saved Geometry, Profile, 137]
Atiyah’s heuristic echoes Robert Merton’s “serendipity pattern . . . an unanticipated, anomalous and strategic datum which becomes the occasion for developing a new theory or for extending the existing theory” [Merton, R.K., 1968, Social Theory and Social Structure, Free Press, 158]. Such a datum is a lucky thing to have, but not itself lucky: it exhibits evidential, not content, luck (as Merton recognizes in calling it ‘serendipity’).
5. Veritic Luck
“It is a matter of luck that the agent’s belief is true.” [Pritchard, op. cit., 146]
Veritic luck arises where the agent has a true belief, with some justification, but justification consistent with the belief being false. Hence, the belief is true only by luck, and cannot be knowledge. Fermat was veritically lucky to believe his famous conjecture, as was Andrew Wiles, when propounding the flawed 1993 version of its proof. Disputes over the degree of rigour required for proof turn on the characterization of veritic luck.
6. Reflective Luck
“Given only what the agent is able to know by reflection alone, it is a matter of luck that her belief is true.” [Pritchard, op. cit., 175]
Reflective luck is concerned with ‘knowledge’ obtained without reflective awareness of the process underlying its acquisition. Whether this form of luck is pernicious is a touchstone for the rival internalist and externalist conceptions of knowledge. Some problematic examples of mathematical ‘knowledge’, such as computer-assisted proof, may be explicated as reflectively lucky.
The dangers of confusing these different varieties of luck are best illustrated by example. Roy Sorensen observes that a mathematician developing a novel technique, as Cantor was with his diagonal argument, cannot always anticipate whether it will achieve community acceptance. This makes the cogency of his proof partly a matter of luck—but of which variety? If the sense in which “[h]istory can vindicate one’s logic” [Sorensen, R.A., 1998, Logical luck, The Philosophical Quarterly, 48, on 332] is that the admissibility of proof techniques is socially determined, then the ‘truth’ of certain mathematical propositions may be contingent, and thereby content lucky. On this interpretation, Cantor’s belief that his procedure established the uncountability of R was content lucky, since the mathematical consensus might never have accepted diagonalization. But Sorensen’s observation that “it is not as if Cantor was in a position to predict that his argument would pass the test of time” [Sorensen, op. cit., 332] seems to ascribe veritic luck. Since Cantor was using a cutting-edge technique with disturbing similarities to known paradoxes, he could not yet have full confidence in its rigour. Careful attention to the varieties of epistemic luck helps resolve such ambiguities into a richer account of mathematical practice.
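For readers unfamiliar with the technique at issue in this example, the core of Cantor’s diagonal construction can be sketched in a few lines of code. This finite illustration is the editor’s, not part of Aberdein’s abstract: given any listing of 0/1 sequences, the “anti-diagonal” flips the n-th digit of the n-th sequence, so it differs from every listed sequence and cannot itself occur in the list.

```python
# Illustrative sketch of the diagonal argument (finite toy version only).
# Flipping the n-th digit of the n-th row yields a sequence that differs
# from row n at position n, for every n -- so it is not any of the rows.

def anti_diagonal(rows):
    """Flip the n-th digit of the n-th row (rows: a square list of 0/1 lists)."""
    return [1 - rows[n][n] for n in range(len(rows))]

rows = [[0, 0, 0, 0],
        [1, 1, 1, 1],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
d = anti_diagonal(rows)  # differs from row n at position n, so d is new
```

In the infinite case the same construction shows that no enumeration can exhaust the binary sequences, which is the uncountability result Cantor’s argument establishes.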
Mariana Antonutti Marfori (University of Bristol, UK): “Proof and Rigour in Mathematical Practice”
The traditional view of mathematical proof from mathematical logic is that of formal deduction. According to what Rav [Rav, Y., 1999, Why Do We Prove Theorems?, Philosophia Mathematica, 7, 5-41] labelled Hilbert’s Thesis, every mathematical proof should be convertible into a formal derivation in a suitable formal system. On this view, rigour is a necessary feature of proof, and formalizability is a necessary condition of rigour: a rigorous proof is defined as a proof which (i) does not conceal non-logical information, and (ii) whose inferences can be seen as valid solely in virtue of purely logical relations between concepts [Detlefsen, M., 2009, Proof: Its Nature and Significance, in Gold, B., R.A. Simons, Ed., 2009, Proof and Other Dilemmas, The Mathematical Association of America, 16-17].
This conception of the nature of mathematical proof is ubiquitous – at least among philosophers of mathematics and logicians – but its epistemological consequences have not been adequately addressed. In this paper, I look at formal derivations as the epistemic standard for mathematical justification, and consider the implications of this view for an account of mathematical knowledge. Any such account which is unable to explain the ordinary practice of proving cannot hope to improve our understanding of mathematical knowledge, so the focus will be on the relationship between formal derivations and proofs in ordinary mathematical practice.
To better understand the grounds for the acceptance of formal rigour as the epistemic standard of proving in mathematics, it is useful to recall its origin briefly. This view of proof emerged out of the foundational crisis at the beginning of the 20th century. The foundational concerns made it necessary to revise longstanding conceptions of mathematical knowledge, and produced a shift of attention in mathematical epistemology from the actual ways of acquiring mathematical knowledge to the study of mathematical sub-disciplines conceived as axiomatised bodies of knowledge. Foundational projects for the systematisation of mathematical knowledge were composed of two parts: the first aimed to identify suitable axiomatic foundations for mathematical truths, and the second aimed to secure the certainty of mathematical results by identifying the best methodology for preserving truth through inference. In this context, mechanically checkable derivations emerged as the best guarantee that theorems were flawlessly deduced from axioms. Incompleteness results and paradoxes seriously undermined the projects of giving axiomatic foundations to mathematics – at least in their original formulations – but they did not go so far as to undermine the goal of providing epistemic foundations for the edifice of mathematical knowledge. From an epistemological point of view, then, formal proof remained the epistemic standard for mathematical knowledge. A first challenge to this view stems from the consideration that the formal conception of rigour loses much of its original appeal when detached from foundational concerns, and therefore needs stronger philosophical support.
A second and more decisive challenge to the idea of formal proof as the epistemic standard for mathematical knowledge concerns the explanatory power of the traditional view with respect to ordinary mathematical practice. I argue that the traditional view is at odds with mathematical practice in two respects, and that, taken at face value, it would have highly undesirable consequences for any view of mathematical knowledge. As a matter of fact, proofs in ordinary mathematical practice are not instances of formal derivations: they are not commonly formalised, either when they appear in mathematical journals or afterwards, nor are they generally presented in a way that makes their formalisation apparent or routine. So if we were to accept the conclusion that rigorous proof is a necessary condition for mathematical knowledge, we would have to conclude not only that almost everything published in mathematical journals fails to constitute mathematical knowledge, and that mathematicians actually know next to nothing about mathematics, but also that there was little, if any, knowledge of mathematics before the twentieth century. Moreover, we would also be committed to the view that mathematicians do not know any statements which they are unable to prove rigorously. Hence statements that come to be accepted on the basis of plausibility arguments, analogies, diagrams, and so forth, would not constitute mathematical knowledge. This view sets the standards for mathematical knowledge too high.
The traditional view may be reformulated more plausibly by saying that informal proofs in ordinary mathematical practice represent incomplete proof-sketches, which could be made formal by any skilled expert in the relevant field. According to this view, ordinary informal proofs have epistemic value only insofar as they stand for, or indicate, formal derivations. The resulting picture of mathematical knowledge would then go along these lines: mathematical truths can be known without rigorous proofs, but such knowledge fails to meet the standards we ideally demand of mathematical knowledge. By constructing rigorous proofs (i.e., by formalising ordinary informal proofs), we could improve our knowledge of these truths by coming to know them for certain. This amended formulation of the traditional view, however, does not seem able to meet the explanatory challenge posed by two important sociological facts which any account of mathematical practice has to tackle. First, given the lack of formal proofs in ordinary mathematical practice, the view would make the success of mathematics a complete mystery and would provide no explanation of how informal proofs actually prove theorems. Second, the broad convergence of the mathematical community on what makes for an adequate proof looks just as mysterious in light of the fact that formalisation is seldom appealed to in order to resolve controversies in the mathematical community. So if no such account is provided by the advocates of the traditional view, we are left either with a picture of mathematics as an unsuccessful and lazy practice, or with a mysterious connection between informal proofs and mathematical truths.
This indicates that there must be some informal notion of rigour that normatively governs practitioners’ work, and that careful consideration of mathematical practice provides adequate grounds for challenging the traditional view of proof. My suggestion is that the flaw in the traditional view is the equation of mathematical rigour with formalisability. Of course, this does not show that formal tools are useless to mathematical practice. On the contrary, formal tools are an essential methodological ingredient for a precise analysis of the informal notions implicit or explicit in ordinary mathematical practice, and are therefore essential to understanding the role of informal rigour and informal standards of proving in mathematical knowledge. However, a formal analysis of informal rigour should not be considered sufficient for a philosophical understanding of mathematical proofs and knowledge. A critical study of mathematical practice cannot be carried out without sociological and historical studies of the structure of the practice itself, so the methodology of the philosophical investigation of mathematics should be rethought in the light of the importance of a deep understanding of all aspects of mathematical practice.
Vincent Ardourel (IHPST-University of Paris 1, France): “The Role of Theoretical Physics in Research in Constructive Mathematics”
The paper focuses on a recently introduced practice in the domain of constructive mathematics. Constructive mathematics differs from classical mathematics in being grounded on different logical foundations. For instance, constructivists do not assert the existential quantification ∃x for an object x if there is no effective method that can produce this object x. Usually, research in constructive mathematics is not concerned with applications to physics. However, I shall study a different way of doing research in constructive mathematics. Indeed, for the last few years some constructivists have been working on issues in theoretical physics. In this presentation, I would like to discuss to what extent such a research practice contributes efficiently to the progress of constructive mathematics.
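The constructive requirement mentioned above can be made concrete with a small sketch (the example and function names are the editor’s, not the author’s): a constructivist asserts an existence claim only when an effective method, such as a terminating search, actually produces the witness.

```python
# Illustrative sketch (editor's example, not the author's): a constructive
# existence claim is backed by an effective method producing the witness.

def is_prime(n):
    """Trial division primality test."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def witness(predicate, bound):
    """Bounded search: return the least n < bound satisfying predicate, else None."""
    for n in range(bound):
        if predicate(n):
            return n
    return None

# "There exists a prime greater than 100": here the search itself is the
# effective method that the constructivist demands before asserting existence.
w = witness(lambda n: n > 100 and is_prime(n), 200)
```

A classical proof, by contrast, may establish that some x exists without providing any such procedure, which is precisely what the constructivist declines to accept.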
In order to fulfil this goal, I will first point out the specificity of research in constructive mathematics. More precisely, I will show that the criteria of efficiency in such research differ from those in classical mathematics. Then, I will discuss the role of theoretical physics in mathematical research. I will begin by showing how, according to H. Poincaré, theoretical physics contributes to the advancement of classical mathematics. He holds that physics has two functions in mathematical research: to reveal mathematical problems and to help find the means to solve them. After this discussion of classical mathematics, I will show how theoretical physics contributes differently to the progress of constructive mathematics. I claim that the role of theoretical physics in research in constructive mathematics is to provide mathematical statements that constructivists try to reformulate constructively. I shall rely on a particular example of this new practice: the search for a constructive formulation of two statements in quantum mechanics, Gleason’s theorem and the theory of unbounded linear operators. I argue that in this case the research practice can be said to be efficient: it led, respectively, to the production of a constructive result and to the initiation of a research programme in constructive mathematics.
The efficiency of such a research practice may be judged by the significant efforts made toward finding constructive formulations of theoretical physics statements. But why is such an aim regarded as important by some constructivists? I claim that it is because the effective constructive formulation of theoretical physics statements is viewed as an important part of a pragmatic argument in the debate between classicism and constructivism in the philosophy of mathematics. This view of the constructive formulation of theoretical physics statements will be discussed. In particular, I will point out that it seems to imply that one function of mathematics is to serve as a language for theoretical physics.
David Corfield (University of Kent, UK): “The Robustness of Mathematical Entities”
Two very prominent kinds of phenomenon within mathematics form a sort of reciprocal pair. First, we have that a certain concept or notion is manifest in a wide range of situations. For example, duality appears in: projective geometry between lines and points; platonic solids, e.g., the dodecahedron and icosahedron; Stone duality between certain spaces and algebras; Fourier analysis; Poincaré duality between the homology and cohomology of complementary dimensions; duality between syntactic theories and semantic models; Pontryagin duality for locally compact abelian groups, and so on. Second, the reciprocal kind of phenomenon occurs where a single entity possesses a wide range of properties and structure. It is this latter kind that I wish to discuss in this paper.
An important way of singling out an entity is to define it via some universal property, in particular via freeness. To be the free such-and-such, an entity must possess the structure and properties required in the description and nothing more. Then given any general such-and-such there will be a unique structure preserving map to it from the free one. For instance, the integers form the free abelian group on one generator. This means that given an abelian group and a designated element, g, there is a unique homomorphism from the integers which sends 1 to g.
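The universal property just stated can be checked in a toy computation (an illustration by the editor, using the cyclic group Z/12 as the target; nothing here is from Corfield’s abstract): once the designated element g is chosen, the requirement that 1 map to g forces the whole homomorphism, n ↦ n·g.

```python
# Illustrative sketch (editor's, not Corfield's): the unique homomorphism
# from the integers to an abelian group -- here Z/12 under addition --
# sending the generator 1 to a chosen element g. Freeness means there is
# no other choice: phi(n) = phi(1 + ... + 1) = n * g is forced.

def hom_from_Z(g, modulus):
    """Return n -> n*g in Z/modulus, the only map compatible with 1 -> g."""
    return lambda n: (n * g) % modulus

phi = hom_from_Z(5, 12)  # send the generator 1 to g = 5

# Additivity check: phi(a + b) == phi(a) + phi(b) in Z/12
additive = all((phi(a + b) - phi(a) - phi(b)) % 12 == 0
               for a in range(-12, 13) for b in range(-12, 13))
```

The same pattern underlies the other freeness examples in the abstract: the free object’s structure map to any target is pinned down by where the generators go.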
Another important algebraic entity, Symm, the set of polynomials on a countably infinite number of commuting variables with integer coefficients, is also free – the free ‘lambda-ring’ on one element. More topological examples can also be constructed in this way. For example we can define Tangles as the free X, for some construction. In this case, X = braided monoidal category with duals on one object. What this amounts to is that we take an object of the category to be a finite collection of points sprinkled on a plane. Then an arrow going from a first plane to a second plane of points is a collection of bonds each linking two points either in the same plane or in different planes, and a collection of knots sitting between the two planes. They can be tangled up with each other anyhow, as the name suggests. The freeness of this entity, and the ensuing mapping from it to similarly structured categories, is part of what is called quantum topology.
Now, the crucial observation is that in very many cases when an entity is defined via a universal property, it is found to possess other important properties. Let us illustrate this by reconsidering the integers. Along with their characterisation as the free abelian group on one generator, the integers carry a commutative ring structure, and in fact form the free commutative ring. As such they form the initial object in the category of commutative rings, and so for any such ring there is a unique ring homomorphism from the integers to it. Similarly, Symm displays a huge range of characteristics including algebraic properties relating to the representation of symmetric groups and to the cohomology of an important classifying space. Rather importantly, there is a ‘comultiplication’ present, so that Symm encodes a decomposition as well as a composition.
Symm bears so many different properties that Michiel Hazewinkel describes it as his ‘star’ example in a very interesting paper entitled “Niceness Theorems.” In this paper, Hazewinkel approaches the issue of explaining why an entity characterised in one regard should possess further properties: “It appears that many important mathematical objects (including counterexamples) are unreasonably nice, beautiful and elegant. They tend to have (many) more (nice) properties and extra bits of structure than one would a priori expect…” [Hazewinkel M., 2008, Niceness Theorems, http://arxiv.org/abs/0810.5691].
Sometimes this is clear. For instance, it is fairly straightforward to show that as a consequence of the universal characterisation of the integers as the free abelian group on one generator, a multiplication can also be defined on the integers in such a way that they possess a ring structure. However, that the integers should be not any ring, but the initial ring, or in other words the free ring on no generators, is not clear from its first characterisation.
We can also define interesting entities by other constructions, for example the rational numbers as the field of fractions of the integers, and the reals as the Dedekind-MacNeille completion of the rationals. Each possesses a panoply of different properties and structures. Now, the superposition of many interesting features in the same object explains why such objects crop up so frequently, and suggests an answer to the puzzle of why, when there are many possible structures that mathematicians could study, some of them act almost as ‘attractors’. There is an air of inevitability to some entities, as though one cannot fail to encounter them when working in a certain direction.
Now this ‘attractor’ phenomenon might be attributed to many factors. Possibly humans have a limited number of ways of thinking, and so work with entities constructed out of choices from a restricted menu. Or perhaps research mathematicians have been socialised to work in a limited set of ways, with the same result. The main thrust of this paper is that there is a third viable option – that in the realm of structural possibility there are privileged members. A strong sense of reality attaches to these members, deriving from what, following William Wimsatt, we might call their robustness. This is a notion that Wimsatt developed in the philosophy of science, and which goes by alternative names such as multiple determination. In his chapter “The Ontology of Complex Systems,” Wimsatt explains how he chooses to approach the issue of scientific realism with the concept of robustness: “Things are robust if they are accessible (detectable, measureable, derivable, defineable, produceable, or the like) in a variety of independent ways” [Wimsatt W., 2007, The Ontology of Complex Systems, in Wimsatt, W., 2007, Re-engineering Philosophy for Limited Beings, Harvard UP, on 196]. In this paper I shall explore whether this idea of independent access makes sense in mathematics, and whether it can account for the sense of inevitability presented by certain mathematical entities.
Liesbeth De Mol (Universiteit Gent, Belgium): “From Practices of Mathematical Logic to a Natural Law? The Case of Alonzo Church and Emil Post”
The year 1936 can be considered one of the most important years for the foundations of theoretical computer science. In that year Alonzo Church, Emil Post and Alan Turing each published a paper proposing a thesis which is now known as the Church-Turing thesis. Roughly speaking, this thesis states that anything that can be computed (in finite time) can be computed (in finite time) by a formalism that is logically equivalent to Turing machines (or the lambda-calculus, or Post’s machines). One fundamental consequence of this thesis is that there are certain decision problems that cannot be solved in finite time. The fact that this all happened in 1936 has been called the confluence of ideas by Robin Gandy. However, this confluence did not come out of the blue. Especially in Church’s and Post’s cases, several years of research preceded and ultimately culminated in their seminal papers – research that I will characterize here as a practice of mathematical logic. (Note that Turing was only 24 when he published his paper On Computable Numbers.)
The aim of this talk is to trace the differences and similarities between Church’s and Post’s approaches and to explain how the characteristics of their respective ‘practices’ led to their results. It will be argued that, unlike Turing, both Church and Post first became convinced of their theses not through external arguments but through an analysis of the formal systems themselves. Furthermore, both of their approaches can, to a certain extent, be characterized as heuristic in nature. This is in a certain sense rather ironic, given that both Church and Post were mathematical logicians who initially belonged to a tradition of research into the logical foundations of mathematics and the search for an ultimate formalization of mathematics that would render mathematics decidable. As von Neumann once put it, “the contemporary practice of mathematics, using as it does heuristic methods, only makes sense because of [the possibility of] undecidability.” It will furthermore be shown that, once Church and Post were convinced of the computational power of their respective formalisms, they proposed several “strategies” to add to the “robustness” of their theses. However, whereas Post understood his thesis as an empirical hypothesis about the mathematical powers of the human mathematician, Church did not allow for this kind of interpretation. According to him, his thesis should be understood as a definition rather than a hypothesis. The final question of this talk is how far the particularities of Church’s and Post’s practices can help to explain this disagreement.
Valeria Giardino (Institut Jean Nicod, Paris, France): “Seeing As and Multiplicity of Interpretation in Mathematics”
In my presentation, I will discuss a common practice in mathematics: the use of diagrams, and of external representations in general, to reason about a mathematical problem. I will argue that a crucial feature of this practice is that the same representation can be read in many ways, and that these different interpretations lead to different operations, that is, to different kinds of manipulation. This seems to be the typical functioning of dynamic diagrams, the usefulness of which resides precisely in their being ‘constructional’: according to the way they can be read, they tell the observer in which ways they can be further manipulated and which techniques can be applied.
My presentation is divided into three parts.
In the first part, I will briefly sketch some of the examples used by Wittgenstein to describe the activity of ‘seeing as’ [Wittgenstein, L., 1958, Philosophische Untersuchungen, Philosophical Investigations, Blackwell, 2nd ed.; Wittgenstein, L., 1980 [1968], Bemerkungen über die Philosophie der Psychologie I and II, Remarks on the Philosophy of Psychology, I and II, G.H. von Wright and Heikki Nyman, Ed., Blackwell]. In my view, Wittgenstein’s discussion of this ability points to an important aspect of ‘mathematical perception’ and its relationship with our inferential capacities. I will give some examples of the way in which the same diagram can be ‘seen’ in different ways, each interpretation leading to different operations.
In the second part, I will introduce the notions of manipulation practice and of constructional diagrams. My idea is that deductive competence concerns representations that are used dynamically: diagrams which are inference-promoting can be correctly understood only when they can be reproduced and correctly manipulated. The observer demonstrates that she knows how to reproduce and correctly manipulate a diagram when, for example, she knows which of its spatial properties are invariant.
In the third part, I will look at the role of multiplicity of interpretation in the history of mathematics and consider Grosholz’s analysis, which stresses the role of controlled and highly structured ambiguity in mathematics [Grosholz, E., 2007, Representation and Productive Ambiguity in Mathematics and the Sciences, Oxford UP]. I will avoid Grosholz’s choice of the term ‘ambiguity’, and will instead speak of multiple interpretation. This potential multiplicity of interpretation involves all types of representation, and is particularly interesting in the case of diagrams.
To conclude, the aim of my presentation is to discuss a common mathematical practice: the use of external representations to reason about a problem. As Wittgenstein pointed out, these external representations are open to many interpretations and to all possible transformations: geometry indeed is the space of possibility. Nevertheless, when we interpret them in some particular way, we are forced to reduce these possibilities, and to apply to them only the transformations that we know to be relevant to the task as well as allowed within the particular theory.
Norma B. Goethe (National University of Cordoba, Argentina, & University of Göttingen, Germany): “Stepping Inside Leibniz’ Intellectual Workshop”
In my paper I look at the interconnection between Leibniz’s theory of signs and his actual practice of writing science during the Paris years (1672-1676). Taking advantage of recently published material, I propose to take a fresh look at some of Leibniz’s most striking insights concerning ‘tangible’ signs, the ideal of a ‘universal character’, and his view of the essence of science and the growth of knowledge. From 1672 through 1676, Leibniz pursued his mathematical studies in Paris. He studied Descartes’ work in geometry and, building on it, developed his own methodological views, which would prove most fruitful, leading him to valuable results in mathematics. At the same time, his conception of signs underwent an important transformation. On Leibniz’s mature view there is no abstract human thought that does not require something sensible. He describes his characters in visual terms: just as in mathematical symbolic writing, his symbols or characters are to provide the ‘tangible’ thread necessary to develop and fix our thoughts. Despite our limitations, tangible characters give us ‘the means of being infallible’ because, by rendering our reasoning sensible, they allow us to recognize errors at a glance and rectify them. In a similar way, writing out a proof provides us with a way to ‘see’ whether the results hold and to communicate them to others.
This case study is of particular interest because Leibniz himself insisted upon the centrality of tangible forms of signs for the growth of knowledge. In his practice, however, he was also guided by the idea of a science which would ideally embody the norms for different forms of writing, so that in the ideal case, “writing and thinking will keep pace with each other, or rather, writing will be the (tangible) thread of thinking.”
Albrecht Heeffer (Universiteit Gent, Belgium): “Epistemic Justification in Abbaco Arithmetic and Algebra”
Historical context
By the end of the fifteenth century there existed two independent traditions of mathematical practice. On the one hand, there was the Latin tradition as taught at the early universities and monastery schools in the quadrivium. Of these four disciplines arithmetic was the dominant one, with the De Institutione Arithmetica of Boethius, itself based on the Arithmetica of Nicomachus of Gerasa, as the scholastic authority. Arithmetic developed into a theory of proportions, a kind of qualitative arithmetic which appealed to esthetic and intellectual aspirations rather than being of any practical use. For geometry we think of Euclid’s Elements, but knowledge of Euclidean geometry was very limited, confined to the first two books, and was mostly based on Arabic translations and commentaries. On the other hand, the south of Europe also knew a flourishing tradition of what Jens Høyrup [Høyrup, Jens, 1994, In Measure, Number, and Weight, State University of New York Press] calls “sub-scientific mathematical practice.” In the cities of northern Italy, Provence, and Catalonia, the sons of merchants and artisans, including such well-known names as Dante Alighieri and Leonardo da Vinci, were taught the basics of reckoning and arithmetic in the so-called abbaco schools. The teachers or maestri d’abbaco produced between 1300 and 1500 about 250 extant treatises on arithmetic, algebra, practical geometry and business problems in the vernacular. The mathematical practice of these abbaco schools had clear practical use and supported the growing commercialization of European cities [Heeffer, A., 2010, Forthcoming, The Body in Renaissance Arithmetic, Paper at the Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB'10), 29/3/2010-1/4/2010, Leicester]. These two traditions, each with its own methodological and epistemic principles, stood completely separate.
Epistemic justification of sub-scientific practices
While argumentation, demonstration and proof have been well studied for the scholarly traditions in mathematics, forms of epistemic justification have been mostly ignored for the sub-scientific mathematical practices. With Van Kerkhove, Van Bendegem, and Mancosu [Mancosu, P., Ed., 2008, The Philosophy of Mathematical Practice, Oxford UP; Van Kerkhove, B. & J.P. Van Bendegem, 2007, Perspectives on Mathematical Practices, Springer], the historical epistemology of mathematical practices has become an interesting new domain of study. Such an approach favors a strong contextualization of mathematical knowledge, its development and its circulation, by studying the material and cognitive practices of mathematicians within their social and economic context in history. The abbaco period on which our research focuses is the one preceding the scientific revolution, and is therefore a rewarding subject for research. We characterize the sixteenth century as a transition period for the epistemic justification of basic operations and algebraic practices. While the abbaco tradition drew the validity of its problem-solving practices from the correct performance of accepted procedures, the humanists of the sixteenth century provided radically new foundations for algebra and arithmetic based on rhetoric, argumentation, and common notions from ancient Greek mathematics [Cifoletti, G., 1993, Mathematics and rhetoric: Peletier and Gosselin and the making of the French algebraic tradition, Princeton University, PhD Dissertation].
Despite the lack of argumentative deductive structures in abbaco treatises, epistemic justification is very important in this tradition. Put differently: precisely because the abbaco tradition lacked the argumentative principles we know from Euclidean geometry, it had to rely on a strong foundation for its basic operations and practices.
The validation and justification of basic operations functioned as a precondition for the gradual symbolization of arithmetical operators. Accepted operations and procedures for problem solving could be applied blindly, without accounting for the values of the quantities one is dealing with. This process of abstraction led to an operative symbolism practiced within a rigid rhetorical setting. An excellent illustration of this is the rule of signs, for which a justification was provided by Maestro Dardi in his Aliabra algibra of c. 1380. In other words, the abbaco tradition introduced symbolic reasoning before an algebraic symbolism was established. The development towards a symbolic algebra during the sixteenth century can thus be seen as a consequence of this process of justification and abstraction. The belief in the validity of standard operations and practices ultimately led to the acceptance of negative and imaginary solutions and to the expansion of the number concept during the sixteenth century.
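As an illustration, a justification of the rule of signs in this style can be reconstructed (in modern notation, not Dardi’s own) through the multiplication of two differences of positive quantities:

```latex
% Modern reconstruction of an abbaco-style justification of the
% rule of signs. Take numbers a > b and c > d, so that (a - b)
% and (c - d) are both positive quantities.
\[
  (a - b)(c - d) = ac - ad - bc + bd .
\]
% Checking against a numerical instance, say a = 10, b = 2,
% c = 10, d = 3:
%   (10 - 2)(10 - 3) = 8 \cdot 7 = 56, while
%   100 - 30 - 20 + 6 = 56.
% The identity only balances if (-b)(-d) = +bd, which justifies
% "minus times minus gives plus" without any appeal to negative
% numbers as independent objects.
```

The point of such a reconstruction is that the validity of the rule is grounded in correctly performed operations on positive quantities, in keeping with the abbaco tradition’s procedural style of justification.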
We will discuss several forms of justification in published and as yet unpublished abbaco manuscripts. Some of these justifications rely on geometrical proofs, such as those for the roots of quadratic equations known from Arabic algebra. Others rely on standard procedures depicted by graphical schemes, for example the multiplication of rational numbers or the crosswise multiplication of binomials. Still other justifications have a tangible character and depict actions involving hands, fingers, or other body parts [Heeffer, A., 2010, Forthcoming, The Body in Renaissance Arithmetic, Paper at the Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB'10), 29/3/2010-1/4/2010, Leicester]. To our surprise, we even found justifications by the principle of mathematical induction for remainder problems in five fourteenth-century manuscripts [Heeffer, A., 2010, Forthcoming, Regiomontanus and Chinese Mathematics, Philosophica]. This precedes the earliest documented text on the subject, by Maurolyco, by about 150 years [Freudenthal, H., 1953, Zur Geschichte der vollständigen Induction, Archives Internationales d’Histoire des Sciences, 6, 17–37].
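The crosswise multiplication scheme mentioned above can be rendered in modern notation as follows (the diagram and lettering are ours, for illustration only):

```latex
% Modern reconstruction of the crosswise scheme for multiplying
% two binomials: the vertical products give the outer terms,
% and the diagonal ("crosswise") products give the middle term.
%
%     a ----- b
%        \ /
%        / \
%     c ----- d
%
\[
  (a + b)(c + d) = ac + (ad + bc) + bd .
\]
% Applied to two-digit numbers, e.g. 23 x 41 with a = 20, b = 3,
% c = 40, d = 1:
%   ac = 800,  ad + bc = 20 + 120 = 140,  bd = 3,
% giving 800 + 140 + 3 = 943 = 23 x 41.
```

Here the graphical scheme itself carries the justification: each line of the figure corresponds to one partial product, so following the scheme correctly guarantees that no term is omitted.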
The epistemic justification of procedures and practices in the abbaco tradition is not only a rewarding research subject in its own right; our results also allow us to contrast the abbaco style of justification with that of the scholarly tradition. Where the scholarly and sub-scientific traditions stood completely separated before 1500, they were confronted with each other during the course of the sixteenth century. In fact, the major changes in mathematics that took place during this century allow us to characterize the emergence of symbolic algebra as a transition in the epistemic justification of basic operations and procedures.
Erich Reck (University of California, Riverside, US): “Dedekind, Fruitful Definitions, and Mathematical Abstraction”
As part of the recent turn to “practice” in the philosophy of mathematics, a topic that has started to attract attention is the search for “right definitions” in mathematical research. In particular, Jamie Tappenden has emphasized, and begun to analyze, the sense in which such a search has epistemological significance [Tappenden, J., 2008, Mathematical Concepts: Fruitfulness and Naturalness, in Mancosu, P., Ed., 2008, The Philosophy of Mathematical Practice, Oxford UP, 256-301]. His analysis revolves around the notion of fruitfulness; but he also connects the “rightness,” or “naturalness,” of a definition to the issues of explanation and understanding. Tappenden’s discussion is anchored in a reflection on developments in nineteenth-century mathematics, with Bernard Riemann as the central figure. Another nineteenth-century mathematician he considers is Richard Dedekind. More specifically, Dedekind’s definitions of the notions of integer and prime number, as part of his contributions to algebraic number theory, provide illustrative examples for Tappenden. In this paper, I will attempt to build on this work by investigating the epistemological significance of definitions in Dedekind's writings more generally, i.e., the systematic role they play for him in the construction and validation of mathematical knowledge (thus also expanding on Reck [Reck, E., 2003, Dedekind’s Structuralism: An Interpretation and Partial Defense, Synthese, 137, 369-419; Reck, E., 2008, Richard Dedekind’s Contributions to the Foundations of Mathematics, in Zalta, E., Ed., 2008, Stanford Encyclopedia of Philosophy, s.l.; Reck, E., 2009, Dedekind, Structural Reasoning, and Mathematical Understanding, in Van Kerkhove, B., 2009, New Perspectives on Mathematical Practices, WSPC Press, 150-173]).
It is certainly true that finding appropriate definitions was a central goal for Dedekind. He was not only following Riemann in this respect, but also two of his other teachers: C.F. Gauss and G.L. Dirichlet [Ferreirós, J., 1999, Labyrinth of Thought, Birkhäuser; Stein, H., 1988, Logos, Logic, Logistiké: Some Philosophical Remarks on Nineteenth-Century Transformations of Mathematics, in Aspray, W., & P. Kitcher, Ed., 1988, History and Philosophy of Modern Mathematics, University of Minnesota Press, 238-259]. It is also true that Dedekind's work in number theory should be considered carefully in this connection, including his definitions of integer and prime number; similarly for his other “non-foundational” work, especially in algebra and related fields (his introductions of the notions of field, module, lattice, his re-conceptualization of group theory, etc.). However, Dedekind's more “foundational” writings display an equally strong tendency to focus, very deliberately, on basic definitions (of the notions of real number, natural number, set, function, infinity, etc.). In each case, the results shape the mathematical edifice built on them. Comparing Dedekind's non-foundational and foundational writings reveals, moreover, that it is not just a matter of finding fruitful definitions in particular cases for him, in a piecemeal fashion, but of developing an overarching, unified methodology for doing so, and one that moves mathematics from a “computational” to a more “conceptual” direction [Corry, L., 2004, Modern Algebra and the Rise of Mathematical Structures, Birkhäuser, 2nd ed.; Ferreirós, 1999, op. cit.; Stein, 1988, op. cit.; Tappenden, J., 2008, Mathematical Concepts and Definitions, in Mancosu, P., Ed., 2008, The Philosophy of Mathematical Practice, Oxford UP, 256-301].
While Dedekind was strongly influenced by Gauss, Riemann, and Dirichlet, and while all of them emphasize the “conceptual” side of mathematics, there are significant differences between their approaches. These, too, come to the fore by considering Dedekind’s foundational and non-foundational writings together. It will be helpful in this connection not just to contrast his approach with those of more “computational” contemporaries, such as Ernst Kummer and Leopold Kronecker, and with those of his fellow “conceptualists,” Gauss, Riemann, and Dirichlet, but also to trace Dedekind’s influence forward, in two directions: on the formal axiomatic perspective championed by David Hilbert [Sieg, W. & D. Schlimm, 2005, Dedekind’s Analysis of Number: System and Axioms, Synthese, 147, 121-170]; and on the mathematical methodology employed by Emmy Noether [Corry, 2004, op. cit.; McLarty, C., 2008, Emmy Noether’s Set Theoretic Topology: From Dedekind to the Rise of Functors, in Ferreirós, J. & J.J. Gray, Ed., 2008, The Architecture of Modern Mathematics, Oxford UP, 187-208], and through her, on category theory (as represented, e.g., by Lawvere and Schanuel in a book revealingly entitled Conceptual Mathematics) [Lawvere, W. & S. Schanuel, 1997, Conceptual Mathematics, Cambridge UP]. As will become evident, it is precisely Dedekind’s attention to definitions, within the context of his distinctive “conceptual” approach to mathematics, which had the strongest impact on later developments in mathematical practice. Moreover, Dedekind’s approach to definitions is guided by several related kinds of “structuralist” abstraction, to be illustrated by examples and analyzed further in turn.
The main results of these investigations are the following, then: Tappenden is quite right that the definitional side of mathematics has epistemological significance and deserves philosophical attention. I also agree that finding the “right definitions” has to do with fruitfulness, as well as with mathematical explanation and understanding. However, to recognize the full extent of such fruitfulness in a case like Dedekind’s, and to probe how deeply the explanation and understanding of mathematical phenomena are affected, four related things are called for: to consider various cases of definitions together and to compare them systematically, i.e., to see them as part of a more encompassing position; not to distinguish artificially between “foundational” and “non-foundational” works for this purpose; to recognize several kinds of “conceptual mathematics” as the background, including differences between Dedekind and his teachers, Gauss, Riemann, and Dirichlet; and to analyze these differences in terms of the notion of abstraction, or more specifically, of several variants of “structuralist” abstraction. (In all four respects I am not so much disagreeing with Tappenden, I believe, as augmenting his work.)
János Tanács (Budapest University of Technology and Economics, Budapest, Hungary): “Elective Affinities: The Role of Intellectual Cooperation in the Early Period of János Bolyai’s Attacking the Euclidean Parallel Postulate”
In my presentation I am going to examine how the methodological interaction between the two Bolyais, and that between János Bolyai and Carl Szász, his classmate at the Royal Academy of Engineering at Vienna, affected the first period of János Bolyai’s attack on the Euclidean Parallel Postulate. My aim is to show how these interactions helped János Bolyai to give up trying to prove the postulate directly, and to recognize the problems raised both by the unsuccessful direct proofs and by the attempts to replace the postulate with a self-evident one.
While Farkas Bolyai’s influence on his son is fairly well known in general, hardly any thorough discussion can be found of the specific details of their methodological interaction and of the mutual intellectual influence affecting János’ work. It is therefore worth examining Farkas Bolyai’s intervention: his efforts to replace the Parallel Postulate by another one and to divert János Bolyai from his early efforts to prove it directly. Historical evidence taken from János Bolyai’s manuscripts shows that János (unlike Farkas) recognized the importance of the fact that his father’s numerous attempts at substituting the postulate could be seen as belonging to a series of logically circular, question-begging direct proofs.
It has also remained unexplained until now why János Bolyai abandoned the attempt to prove the Parallel Postulate directly. Since there was no logical or mathematical evidence supporting the assumption that the Parallel Postulate could not be proved directly, it was not evident at all that there was no reason to search for such a proof. The question, then, is this: what could count as a reason for him to give up the attempts at a direct proof, in an age without any relative consistency proofs that could serve as evidence supporting the unprovability of the postulate and showing its relative independence from the remaining system of axioms?
In connection with this question I am going to use historical evidence to demonstrate the following. First, in the period of trying to prove the postulate directly, János Bolyai worked together with his classmate Carl Szász, and they formed a special, so-called ‘prove-and-refute’ community. They were engaged not merely in constructing a direct proof of the postulate but also, in order to make sure the proof was really sound and not circular, in trying to refute it by hunting for an implicit mathematical statement logically equivalent to the Parallel Postulate.
Second, these historical sources also shed light on how the accumulation of futile attempts provided the empirical evidence, and the reason, to give up the idea of proving the Parallel Postulate directly.
This also shows that Carl Szász played an important role not only in the introduction of the concept of the “rebound line” (a role more or less well known in the vast literature on the history of non-Euclidean geometry), but also a methodological role in the recognition of the circularity of the direct proofs.
The role played by Carl Szász in this prove-and-refute play seems crucial in the light of the numerous earlier attempts, in which mathematicians worked all alone in trying to find a direct proof without noticing its circularity, so that it was left to another mathematician to discover the petitio principii. The contribution of Carl Szász to the mathematical thinking of János Bolyai thus seems indispensable. I am therefore going to argue that this two-person teamwork, the mutual influence between János and Carl, cannot be reduced to the simple additive activity of two mathematicians, as is usual within the framework of epistemological individualism. I also presume that the co-workers had to be epistemologically non-interchangeable subjects in order to provide the logical possibility of noticing the circularity.
In my view, then, two important features of János Bolyai’s early period can be distinguished.
First, János Bolyai was embedded in a mathematical community at the Academy in Vienna, and in particular his friendship and collaboration with Carl Szász were decisive for his early results, which played a modest but important role in the process of his later discovery of non-Euclidean geometry.
By exchanging ideas with Szász, János Bolyai recognised that the growing number of futile attempts at direct proofs or refutations also increased the amount of empirical evidence for the unprovability of the Parallel Postulate.
The second peculiarity of János Bolyai’s early period is that the numerous unsuccessful efforts produced by this co-working constituted the inductive evidence for abandoning the attempts at direct proofs. There was, and as János recognised, there could be, no other kind of evidence than what mathematical activity itself could produce. He therefore gave up the effort to find a direct proof and could raise the question of the unprovability of the Parallel Postulate and of its independence from the remaining system of axioms.
The joint work of János Bolyai and Carl Szász thus really seems to have been essentially collective.
A conference organized by PratiScienS, with support from:
- ANR
- MSH Lorraine
- Archives Henri Poincaré
- Université Nancy 2
Organisation Committee:
- Sandra Mols, PratiScienS and LHSP – Archives Henri Poincaré, Université Nancy 2, France
- Léna Soler, PratiScienS and LHSP – Archives Henri Poincaré, IUFM Lorraine, France
- Valeria Giardino, PratiScienS and Institut Jean Nicod, Paris, France
- Amirouche Moktefi, PratiScienS and IRIST, Université de Strasbourg, France
PratiScienS is supported by the Laboratoire d'Histoire des Sciences et de Philosophie-Archives Henri Poincaré – UMR 7117 CNRS (Nancy, France), the Agence Nationale de la Recherche (ANR), the Région Lorraine, the Maison des Sciences de l’Homme Lorraine – USR 3261 (Nancy, France) and the Université Nancy 2 (Nancy, France).