Asked by csp2018 on December 26, 2021
I’ve looked at several resources to learn about logic and metalogic, and they first present syntactic consequence and semantic consequence as separate things and then try to show how each implies the other in a sound and complete system.
But how did humans develop formal systems and the notion of syntactic consequence in the first place? Wouldn’t they have had to develop such systems based on what semantic consequences they agreed with?
I’m confused by the process of just arbitrarily setting up a formal system, saying ‘if a formula follows the rules of inference then there is a proof,’ and then trying to prove whether it agrees with some semantic model.
I’m sure there is a good reason, but I would like to get a good intuitive sense of why, and these kinds of “soft” issues are usually glossed over in learning materials about logic that I’ve come across.
Not really an answer, but an attempt at giving an idea of the syntactic approach.
Suppose you want to prove that if n = a + a then, logically, n = 2·a.
If you want to prove that the statement is true for a small domain, say 0, 1, 2, ..., 9, you may use a semantic method. That is, you consider all the possible interpretations of the sentence:
0 + 0 = 2·0
1 + 1 = 2·1
2 + 2 = 2·2
etc.
and once you have verified that the sentence is true under all these interpretations, you can say that the sentence is valid, which means that from "n = a + a" one can validly infer "n = 2·a".
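Here is a minimal sketch of that exhaustive semantic check in Python (my own illustration, not part of the original answer), restricted to the small domain above:

```python
# A sketch of the semantic method: exhaustively check the claim
# "if n = a + a then n = 2*a" over the small domain {0, 1, ..., 9}.
domain = range(10)

for a in domain:
    n = a + a                      # the antecedent fixes n
    assert n == 2 * a, f"counterexample at a = {a}"

print("n = a + a implies n = 2*a for every a in", list(domain))
```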
But the identity is supposed to hold for every number, and there are infinitely many interpretations, so you cannot check them all one by one. Instead, you resort to a syntactic method. That is, you try to derive the consequent from the antecedent of the conditional using only manipulation of symbols according to syntactic rules:
if n = a + a
then n = 1·a + 1·a = a·(1 + 1) = a·2 = 2·a
(using "1 is the identity element for multiplication", the distributive law, and the commutative law for multiplication).
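To make "mechanical manipulation of symbols" concrete, here is a small sketch (my illustration, not the answerer's method) that treats each step of the derivation above as a string-rewriting rule; the rule list and the derive function are hypothetical names.

```python
# A sketch of a purely syntactic derivation: each rule rewrites one string
# into another, and a "proof" is just a chain of rule applications.
# The rules mirror the steps in the derivation above.
rules = [
    ("a+a", "1*a+1*a"),        # 1 is the identity element for multiplication
    ("1*a+1*a", "a*(1+1)"),    # commutativity and distributivity
    ("a*(1+1)", "a*2"),        # arithmetic: 1 + 1 = 2
    ("a*2", "2*a"),            # commutative law for multiplication
]

def derive(start, goal):
    """Apply the rewrite rules in order, recording every intermediate step."""
    steps, current = [start], start
    for lhs, rhs in rules:
        if current == lhs:
            current = rhs
            steps.append(current)
    return steps if current == goal else None

print(derive("a+a", "2*a"))
# ['a+a', '1*a+1*a', 'a*(1+1)', 'a*2', '2*a']
```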
This shows how useful the syntactic approach (mechanical manipulation of symbols) is. But the question arises: is this syntactic method sound? What proves that "a + a = 2·a" is actually true in all possible interpretations (and there is an infinite number of interpretations)? Also, are there formulas that are true in all interpretations even though we cannot prove them using syntactic methods?
In propositional logic, you can check the validity of a piece of reasoning using a semantic method (namely truth tables), but when the number of atomic sentences is higher than 3, you are happy to use a syntactic method (for example natural deduction).
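As a rough sketch of why the truth-table method becomes painful, here is the semantic check in Python (my own example, not the answerer's): a formula is valid iff it comes out true under every assignment, and the number of assignments doubles with each new atom.

```python
# A sketch of the semantic (truth-table) method for propositional logic:
# a formula is valid iff it is true under all 2**n assignments to its n atoms.
from itertools import product

def is_valid(formula, num_atoms):
    """Check the formula under every assignment of truth values."""
    return all(formula(*vals) for vals in product([True, False], repeat=num_atoms))

# Example: ((p -> q) and p) -> q, i.e. modus ponens written as one formula.
modus_ponens = lambda p, q: not ((not p or q) and p) or q
print(is_valid(modus_ponens, 2))   # True: only 4 rows to check here,
                                   # but 2**n rows in general
```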
So we need formal systems (but we also need proofs that they are sound and, hopefully, complete).
Answered by Floridus Floridi on December 26, 2021
You wrote...
But how did humans develop formal systems and the notion of syntactic consequence in the first place? Wouldn't they have had to develop such systems based on what semantic consequences they agreed with?
... and you are entirely correct: As has been pointed out in the comments, the study of semantic consequence led to the notion of syntactic consequence. The opening lines of George Boole's The Laws of Thought (the title itself is suggestive) are telling:
- The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method; to make that method itself the basis of a general method for the application of the mathematical doctrine of Probabilities; and, finally, to collect from the various elements of truth brought to view in the course of these inquiries some probable intimations concerning the nature and constitution of the human mind.
The introduction then carries on to briefly discuss the historical development of such an investigation, starting with Aristotle.
Once a system of symbolic logic based on semantic reasoning, i.e. truth-preserving argumentation, has been developed, that system can be studied in isolation, thus beginning the study of syntactic consequence, where the rules of logical inference become purely mechanical. Put very simply – Leibniz, Babbage and Lovelace were ahead of their time, for instance – the progression ran from the study of truth-preserving arguments, to their expression in a symbolic calculus, to the purely mechanical study of that calculus.
Symbolic logic is of course crucial to modern logic and set theory, but it is interesting to note that Zermelo came up with his eponymous axioms in 1908, a decade before first-order logic was brought into its current form by Hilbert and Bernays in 1917–1918.
A final note: The development of the study of logic and reasoning is quite similar to that of grammar, which isn't so surprising, considering the connections between the two. An oversimplified account:
- Native speakers speak grammatically without having to study grammar.
- It seems so obvious to us now, but coming up with grammatical categories (nouns, verbs, prepositions, etc.) was a tremendous breakthrough.
- This glosses over a lot of the actual historical motivation, e.g. the idea of a universal grammar.
Answered by dwolfeu on December 26, 2021