CHAPTER 66. METALOGIC.

1. Language and Meaning.

2. Definition and Proof.

3. Infinity in Logic.

4. Conceptual Logic.

1. Language and Meaning.

It is a truism that ‘there is a grain of truth in all falsehood’. Always keep this in mind when evaluating theories. No idea would ‘make it’ in the world, if it did not have some appearance of plausibility. The trick is to remove the husk of confusions, to go past the surface appearances of things.

Some individuals are dishonest, and deliberately try to fool people; though as we all know, ‘you can fool some of the people some of the time, but you cannot fool all of the people all of the time’. But mostly, of course, errors arise inadvertently, because people are careless in their thinking, lulled into security by familiar words and superficial consistencies. Mistakes snowball, with people uncritically accepting what others have done, especially if those others have acquired prestige and fame.

Deep inside, many people fear ridicule, and are often tempted to gloss over what they cannot understand, rather than dare challenge current authorities. On the other hand, of course, every nincompoop with a crazy theory claims to be a misunderstood and cruelly rejected Galileo. The point I am trying to make is this: I ask the reader to be both open-minded and critical; to think anew for him or her self.

The ideas of modern logic have become so accepted by the academic establishment that it does take a special effort of independent thinking to overcome their power.

The reader is now referred to the article entitled ‘Metalogic’, by Hao Wang of Rockefeller University in New York, in the New Encyclopaedia Britannica (23:272-279).

In my view, the term ‘metalogic’ may be taken in a broad sense, to refer to the study of the perceptual and conceptual foundations of the science of logic. But the term is of modern coinage, and is currently associated with the specifically modern view of what these foundations are. For this reason, it is defined in the article as ‘the study of the syntax and semantics of formal languages and formal systems’.

The reader is asked to keep in mind the distinction between the open sense of metalogic, and any partisan position about its application. What is at issue, here, is not the legitimacy of such a study, but the correctness of the current view of its content. I have already pointed out certain confusions which lie at the root of the modern position, such as the confusions between formalization, symbolization, systematization and axiomatization.

A fundamental issue for logical science is the relation between words and things — meaning, intention, reference, or significance. The study of this relation is known as semiotics (with or without the final ‘s’); it is John Locke who first applied the term to this context, in the 17th century. Essentially, this is a theoretical study of the function of language as such in knowledge; it is a branch of philosophy, with both epistemological and ontological components. It concerns language as a medium of thought, memory, and communication, whatever its actual embodiment.

Obviously, the study of languages which exist or have existed is a helpful empirical accessory. This is the role of linguistics, and branches of it, like philology and etymology. Linguistics studies word and sentence formations; the sounds and shapes of alphabets; the varieties and changing meanings of vocabularies; the structures, uniformities and differences of grammars; the historical development and geographical varieties of past and present languages everywhere, looking at all their literary, cultural, or social manifestations. It may even tie in with ‘physiological, psychological, ethnological, sociological’, and similar researches.

Now, more recently, Charles Morris distinguished three branches for semiotics (which is not to be confused with linguistics, note well): ‘syntax’ (or ‘syntactics’), which studies words and their patterns of arrangement, without reference to their meanings; ‘semantics’, which studies the meanings of words, without reference to their users; and ‘pragmatics’, which considers the users as well. These distinctions were endorsed by Rudolf Carnap, and became commonplace in modern metalogic.

On the surface, these distinctions might seem reasonable enough, and indeed their having been made by modern logicians would seem to belie my contention that modern logicians lack a clear understanding of the relation of words and things: after all, do they not by means of these distinctions acknowledge the three components of ‘meaning’? However, reflect.

We might list existing words; we might describe how they happen to come together in actual languages; or we might discuss words collectively. But, without reference to any actual or proposed meanings, there is nothing more to say about specific words. So, in fact, ‘syntax’ and ‘semantics’ are inseparable. Similarly, we might well view ‘pragmatics’ as a branch of linguistics; but it would be artificial to isolate within the theoretical study of semiotics all the propositions which mention the medium between words and things, and label them as ‘pragmatics’.

The relation of words and things is mediated by a conscious being. Symbols used by a computer or robot cannot be said to be representative of anything, except insofar as we humans assign them some external object. A machine cannot intend anything, only we can. We may indeed think of specific words and their meanings in abstraction from ourselves, the ‘users’; but the equation between them is inextricably tied to us. In short, all theoretical discussion of semiotics involves mention of all three concepts: words, their meanings, and those to whom they are meaningful; it is not possible to meaningfully divide semiotics.

The tie between semiotics (the philosophy of language) and linguistics (research into actual languages) has been pointed out: the latter provides a database of examples for the former to take into consideration in its theoretical investigation. However, one important difference needs to be pointed out, and that is: the essential difference between a grammatical sentence and a logical proposition.

A sentence need not be meaningful: if it consists of separately meaningful words which are strung together in accordance with the usual patterns of our tongue, it is grammatically okay. A proposition has to be a logically tenable construct: logic may upon reflection declare a sentence, despite its apparent meaningfulness (in the sense just described), to be in fact meaningless. This is a conceptual judgment, with reference to the extraordinary collective impact of the words used.

In some respects, as we have seen repeatedly, logic is not bound by language. In principle, it could be wordless: it needs only consciousness; words are only instruments for it; any language will do, an existing one or an invented one, provided we all know what we mean by it. In practice, logical science uses a modified version of ordinary language (to avoid ambiguities and equivocations), or even a symbolic equivalent. But, in other respects, logic is more restrictive than grammar: it pays more attention to the end result of word constructions, their overall meaning.

Logic performs this additional selection with reference to internal consistency. For grammar, anything goes, if the parts are understandable; for logic, the result, the whole, too, must be understandable. You may say that grammar is syntactical and logic is semantical, but that would not be an accurate rendition of what modern logicians understand by these terms.

Consider, for example, the Liar Paradox. We saw that, although ‘This statement is false’ is a grammatically meaningful sentence, it results in a double paradox. In trying to make sense of this logically impossible result, we noticed that we used the indicative ‘this’ to refer to a construct which included it. Not finding any other explanation for our predicament, we inferred that it was caused by the artifice of self-reference. And indeed, upon reflection, we realized that the very idea of something pointing to itself (its whole self, not just a finger pointing to a chest) is so convoluted as to be unconscionable.

We thus came to the conclusion that such a sentence is logically meaningless: it is a non-proposition, it is as if nothing had been uttered. It is perhaps not the inherent self-contradiction alone which implies meaninglessness: it may merely serve to reveal a conceptual problem, or to confirm its seriousness. Perhaps all self-contradictions are ultimately meaningless, but in this case the concept involved was already unsound-looking.

In any case, once we become aware of the illusoriness of such sentences, we may no longer even say of them that they are false (let alone true). To say of them that they are false already elevates them to a status of conceivability: but they do not even qualify for that. They just have no place in the universe of logic.

For classical logic, a meaningless sentence like the liar paradox is neither true nor false. That is, both ‘this statement is false’ and ‘this statement is true’ are neither true nor false; once we grasp that the former statement is meaningless, it follows that the latter is too. The meaninglessness of self-reference is in either case intuitively obvious: the double paradox arising from the first statement only serves to further highlight and confirm that meaninglessness, and the second statement is just as meaningless even though it leads to no overt contradiction. The paradox is not the conceptual source, but a symptom, of the inherent meaninglessness.

Now, classical logic, as we have seen, is concerned only with meaningful sentences — propositions — and its purpose is to determine how they are to be judged true or false. For us, the meaningless has no logic: once a statement is uncovered as intrinsically unconscionable, it ceases to be a topic of discussion; no sense can be made of it, there is no profit in looking for its ‘logical properties’, only confusion and contradiction may be expected to result.

In contrast, it seems to me, modern logic would like to be so ‘generic’ as to be a formal study over and above meaning, and therefore applicable to meaningless sentences as well as the meaningful. It is to such a broad formal inquiry that moderns seem to apply the term ‘metalogic’. For this reason, logic is for them primarily ‘syntactic’ (purely symbolic, entirely a priori and deductive), and only thereafter, more or less optionally, ‘semantic’ (applicable to meaningful systems, ‘satisfiable’).

2. Definition and Proof.

Modern logic is built on the idea that everything in a system must be defined and proved. On the surface, this seems like a perfectly reasonable demand, and indeed it has been a driving force of classical logic, and for that matter science in general (substituting ‘as much as possible’ for ‘everything’). However, logicians like Carnap have attempted to push this demand to an impossible extreme, by giving the words ‘define’ and ‘prove’ very narrow interpretations.

They argue, effectively: We have to ‘define’ every word by a previously defined word. If so, the regression is bound to be infinite. If it is not to be infinite, then there must be some arbitrary first words. Ergo, knowledge is as conventional as language (this is of course a non sequitur).

They argue, similarly: We have to ‘prove’ to be true every sentence we claim. If so, the regression is bound to be infinite. If it is not infinite, then there must be some arbitrary first sentences. Ergo, knowledge is an axiomatic system (again a non sequitur, since as we shall see an alternative position is possible).

Furthermore, they argue: ‘truth’ itself is a word that has to be ‘defined’, and that sentence in turn has to be ‘proved’ as a theorem within the system, or be an axiom of the system. So, truth too is arbitrary, in both those respects. Thus we read, ‘there is a definite sense in which it is impossible to define the truth of a language in itself’.

Their ideal of logic was therefore to construct a calculus which would consist of the barest minimum of ‘axioms’ from which all subsidiary ‘theorems’ would be derivable mechanically — say, by a calculator such as Alan Turing imagined — with reference to precise ‘formation rules’ and ‘rules of inference’.

It is interesting to note that they nevertheless introduce and discuss their theories, not in the ideal language they are presenting, but in ordinary language. Evidently, this implies that their language and logic are a subset of ordinary knowledge; that is, that they depend for their understanding and conviction on the knowledge subsumed by ordinary discourse and methodology.

If their ‘formal languages’ and ‘formal systems’, so-called, are so ideal, then they should be able to present them entirely in their own gibberish. In that case, would they or anyone understand or believe anything they say, do you think? Clearly (to use a Randian phrase), they are ‘stealing the concepts’, and they fail to fulfill their own metalogical ambitions. To be perfectly independent, an ideal language and logic would have to be comprehensible and convincing without any use whatever of ordinary language. If they wrote such a ‘purely symbolic’ book, would anyone go for it?

Thus, to continue, what was not definable and provable by and within a system, seemingly had to be referred over to something outside the system, presumably a larger or antecedent system. Ultimately, as we saw, that implied the reliance on some arbitrary system.

Within that framework, it is no wonder that Kurt Godel’s theorems of consistency and completeness are labeled as ‘fundamental discoveries’. In a celebrated 1931 paper, Godel argued as follows, in reply to the said ‘paradoxes’ (I am paraphrasing the philosophical thrust of his theorem; I am not concerned with its mathematical ramifications). Note that logicians call a system syntactically ‘incomplete’ if ‘there is in [it] any sentence having a definite truth-value in the intended interpretation such that neither that sentence nor its negation is a theorem’.

If a system asserted itself as ‘complete’ (entirely self-contained), it would be admitting of itself: ‘I am not definable and provable by myself, within myself (or ultimately, not in a nonarbitrary way)’. But Godel apparently interpreted that sentence as equivalent to, or giving rise to, the liar paradox. Hence, he inferred, such a system would be ‘inconsistent’.

Contrapositely, if the system was ‘consistent’, the statement of its consistency would have to be either internal to the system (which would be arbitrary), or external to the system, so that the statement ‘I am not definable and provable, etc.’ would hold, and that would be an admission of ‘incompleteness’.

Thus, Godel concluded that a system cannot be both complete and consistent. This principle, however puerile it may seem, was welcomed as a crucial defense of reason, because it seemingly put a limit on the excessive arbitrariness of the purely linguistic programme, like perhaps the Logical Positivism of Wittgenstein. It showed that some limits exist; it suggested that there were rules of behavior even on the purely syntactic level, over and above any semantic model. It gave more specific shape to, and justified, the whole idea of a ‘formal metalogic’.

I think that is a fair assessment of the views in question, at least in the context of my knowledge of them. Let me now try and answer them. All these arguments contain ‘grains of truth’ which make them seem credible and perpetuate them in logical circles; but all of them are dead wrong in my view. The whole enterprise of seeking ‘fully and internally defined and proved’ or at least ‘openly limited’ knowledge is fallacious.

As one acquaints oneself with modern logic, one notices that it comprises a number of normative concepts which are not found in classical logic, or at least not with quite the same significance. The moderns misunderstand concepts like definition, truth, proof, and validity. Classical logic uses words to that effect with the utmost caution, whereas modern logicians indulge in them freely, as if they have some clear and absolute knowledge of these things. Their theories effectively deny the power of knowledge, but apparently they except themselves from such judgment.

a. Definition.

In classical logic, definition does not consist in an equation of words. The concept of ‘definition’, as a process, is a gradual sharpening of our focus on an object; we select as ‘definite’ that manifestation of an object which has the sharpest focus. Thus, an object viewed through a microscope or telescope is accepted as ‘at its best’ when it seems at its most solid and colorful. Definition is essentially an act of seeing, perceiving, paying attention to, an object, and selecting one of its enduring manifestations (whether concrete or abstract, whether simple or complex) as its ‘defining’ aspect.

Definition involves two (compound) propositions: first, that certain phenomena have appeared to us, and that they had such and such configuration; second, that those certain aspects of those phenomena are their most enduring, and (in the light of all previous experiences, perceptual or conceptual) somehow intuitively most ‘interesting’ to us.

Both these propositions are empirical in the widest sense: both are based on a mass of perceptions, conceptual insights, and logical intuitions. Note that the relational concept ‘is’, is itself very abstract; so one can in no wise claim any proposition to be entirely concrete in content. Every act of perception is allied with acts of conception (in the simplest sense).

Also noteworthy: all these propositions are quite thinkable without any use of words whatsoever. In fact, a large part of our everyday cogitation is completely wordless: we perceive, we conceive, we mentally imagine, without reference to words. All that is necessary for ‘definition’ is our consciousness and something to be conscious of. The ‘defining’ aspect is first of all an aspect of the object itself.

We may choose to represent something thus seen by a word or symbol, but we do not thereby create anything. We give meaning to the word, merely by (mentally) relating it to the experience, in the way of a token for it. But we do not thereby invent the meaning itself, the object, the aspect of the object, its seeming exclusiveness and import. The only arbitrary thing we do is choosing a certain combination of sounds and/or shapes, as the one we will attach to that object. But that the object exists, that it has such and such a configuration, and so forth, and that we honestly (rightly from the start, or ultimately wrongly) experienced these events is indubitable.

Also to be kept in mind, the ontological notion of predication is fundamental to definition. This refers to a sense of ‘S is P’ more elusive and yet deeper than the mere numerical classification of S in P, in the sense that ‘something S’ is one and the same as ‘something P’ (what is called ‘extensionality’ in class logic). The latter is a permutation of the former, one of its implications; they are not ‘equal’. The meaning of the copula ‘is’ is richer than its quantitative aspect; it has a qualitative aspect which should not be totally ignored by logic.

b. Truth, Proof, Validity.

Similarly, in classical logic, some material proposition is called ‘true’, if it appears to have more intuitive credibility than its contradictory, or all its contraries, in the light of all our accumulated perceptual and conceptual experiences, and all our logical insights of both inductive and deductive kinds. A formal proposition may also be called ‘true’, insofar as it is (in part) a material statement, concerning its constants specifically, which seems entirely unaffected by the content or status of its variables.

When we use words like ‘intuitively true’ or ‘logically true’, it does not mean that we believe them to be true in different senses; we are merely pointing out the kind of proposition involved (how full or bare it is). There is only one kind of truth; all truths are both intuitive and logical. All insights, including the logical, require an act of consciousness; it is impossible to think without thinking of something, there is no thought without some content.

Our abstract knowledge of the ‘laws of thought’ of Aristotle is not the predominant source of our conviction that some particular intuited contradiction is an ‘unacceptable’ phenomenon. These ‘laws’ add to our conviction (if we have learned them), because they remind us that similar events have occurred before, and that events of that kind are ‘unacceptable’ and to be dealt with in some way or other. But in each case, the particular intuition still has an independent force of its own.

Thus, these ‘laws’ are not ‘axioms’ in the modern sense. Indeed, if one reflects, it is obvious that to apply a principle to a particular situation presupposes an ability to recognize that situation as a case in point. With regard to a situation of contradiction, it is precisely the insight of an inherent flaw in the given conjunction which allows such recognition. It follows that we do not need the ‘laws’ (in the way of modern ‘axioms’), since their application can only be a later event, and is just as particularly intuitive.

The practical value of Aristotle’s principles is that they remind us to consider the data in an orderly fashion, so that any contradictions which might in fact exist are made visible. They encourage us to look out for certain kinds of problems; that is all. The normative ingredient of ‘unacceptability’ is inherent in the phenomena themselves; awareness of the ‘laws’ is not the cause but an effect of our disbelief in contradictory situations. The ‘logical necessity’ involved is the sense of self-sufficiency of such experience.

In classical logic, the word ‘proof’ is preferably used with reference to material propositions, and its value is in the overwhelming majority of cases contextual; only in very rare cases do we encounter extremely unassailable logical necessity. Proof is generally a mix of inductive as well as deductive processes; there is no such thing as purely deductive proof. For moderns, in contrast, ‘proof’ is a mechanical process.

Classical logic prefers to use the word ‘validation’ when dealing with formal propositions, because their dependence on empirical developments is comparatively minimal; once their constants are induced, there is almost no expectation that the logical insights made in relation to them will ever need revision (though it does happen: witness the historical error concerning first figure syllogism with two potential premises). For us, validity is not something applicable to ‘all possible worlds’, as the moderns say; it concerns all the possibilities in this here world (without prejudice as to its dimensions, physical or mental): only it concerns us and is accessible to us.

Modern logicians like Godel yearn to ‘prove consistency’. But in classical logic, a proposition is internally ‘consistent’, or a set of propositions are mutually ‘consistent’, if we have had no logical insight of contradiction concerning it or them. Consistency is not something that is demonstrable deductively; it is only an inductive conclusion, based on our not having come across any inconsistency despite having carried out a diligent search for one. If we thus presume something to be logically possible, and are thereafter confronted with a clear intuition of contradiction, and no alternative explanation be found, all other considerations must yield to the overriding claim of logical impossibility.

Another concept which recurs often in modern logic is ‘decidability’. This refers to the degree of dependence of the truth or falsehood of a logical relation on the truth or falsehood of its clauses. For instance, in ‘material implication’ (that is, negative conjunction), if the antecedent is false or the consequent is true, then the whole implication (which just means ‘not-{p and not-q}’, remember) is also true — thus, to that extent ‘decidable’ (in the context of p or of not-q, it is of course not ‘decidable’).

But in modal logic, most logical forms are, to varying degrees, undecidable. For instance, strict implication cannot be equated to [the positive side of] the truth-table it shares with ‘material implication’, but only subalternates such a table [because their negative sides are quite different]. It is therefore surprising to read that Turing, in 1936, made the ‘discovery… that every complete formal system (though not every logical calculus) is decidable’. Classical logic was built on the very idea of forms having some degree of undecidability (a fully decidable form would be useless); as an extreme case, the form ‘if —, not-then —’ is entirely undecidable. To establish that there are partly or fully undecidable forms, one need only point them out: there is nothing to ‘prove’ about it.
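
A minimal sketch in Python may make this notion of partial decidability concrete (the function name and layout are my own illustration, not drawn from the sources under discussion): it runs through the two-valued truth-table of material implication and shows which partial contexts settle its value and which leave it open.

def material_implication(p, q):
    # 'if p then q' read as the negative conjunction: not-(p and not-q)
    return not (p and not q)
# Fixing p = False: the implication comes out True whatever q is, so it is decided.
print({material_implication(False, q) for q in (True, False)})   # {True}
# Fixing q = True: likewise decided (True).
print({material_implication(p, True) for p in (True, False)})    # {True}
# Fixing p = True (or q = False): both values remain possible, so it is not decided.
print({material_implication(True, q) for q in (True, False)})    # {True, False}

On this reading, a form is ‘decidable’ only to the extent that some contexts force a unique value; a form forced in every context would indeed be useless, as just argued.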

Pursuing further, ‘proof’, as just explained, is a process depending on a mass of experiences: all the experiences which gave rise to the terms, copulas, and other features of the de-re propositions involved, plus all the logical intuitions (which are also experiences) concerning the relations between the de-re propositions involved. Our verbalization of such logical intuitions into a formal logic in no way justifies our viewing the principles of formal logic as ‘axioms’ in a verbal ‘system’. Our conviction is not due to any preferred ordering of words, but to the fact that we had a certain complex of wordless experiences.

Furthermore, the words that I have just written in the preceding paragraph, which are incidentally ‘metalogical’ in a much richer sense, are not themselves to be viewed as ‘axioms’. What counts is their underlying meaning, and the conviction it carries. The words merely make it miraculously and wonderfully possible for me to communicate with you, by drawing your attention in certain directions, towards the same objects as I was looking at as I was writing them. ‘Turning your attention’ to certain objects does not imply determining the content of what you thereby see (except by temporary exclusion, in that during that time you will to some extent not be aware of other things).

Thus, yes, it follows that no verbal knowledge is complete, or self-contained: neither in its ‘definition’, since meaning is not other words, but certain objects we have experienced some way or other; nor in its ‘proof’, since all proof is ultimately inductive: even seemingly pure deductions are with reference to intuitions as to what seems contradictory, not to mention the perceptual and conceptual sources of the formal premises. Let us say even more: no wordless knowledge is ever complete, either; our world is in constant flux and forever revealing new things to us!

The very pursuit of a ‘complete formal system’ is thus flawed from its inception. The inference that incompleteness implies arbitrariness is without justification: all we can say is that the given world we experience, in every which way, is ‘arbitrary’: we have no other to refer to. But surely that is not cause for concern: it suffices that we have a world; the world of appearances is all we need to have knowledge. So long as there is something to know, we have knowledge (however phenomenal): there is no basis for a ‘logical’ demand for more or other things to know. Something may be incomplete and yet sufficient in itself.

Returning parenthetically to Godel’s theorem specifically, I am not at all convinced that his proposed opposition between completeness and consistency is justified. The statement ‘P is not provable by P’ (that is, ‘P implies P’ does not imply ‘P’) is perfectly consistent, and is not equivalent to the statement ‘P disproves P’ (that is, ‘P implies nonP’). No liar paradox is implied; the analogy is most superficial. The self-assertion of a system is not its justification (every proposition asserts itself); its justification is consistency with all the data of our experience, including the absence of logical intuitions of inconsistency. In contrast, the self-reference of the liar’s indicative is meaningless precisely because it has nothing outside itself to refer to.

That a closed system cannot ‘prove’ itself indeed implies that it cannot ‘disprove’ its own negation, if we understand ‘proof’ in some overwhelming sense, since in the plain sense of implication the system does of course both imply itself and deny its own negation. All that means is that we must indeed refer our system outward — not to other words, but to appearances, objects of consciousness. It is they that ‘prove’ or ‘disprove’ anything, and scrutiny and reflection show that they do so through the complex relations of cognition, recognition, distinguishing, naming/meaning, and gradual adduction (rather than simple implication).

As for the appearances themselves, they do not in turn need proof: they are not words, they are given objects, they are all the objects we actually have and may justifiably appeal to and discuss, at the stage of the proceedings we happen to be in (this is said, obviously, without intent to prejudicially exclude creationism and divine inspiration from the eventual scope of our world of appearances).

It is our experiences (concrete, abstract, and logical) which are in the truest sense the ‘axioms’ (the ultimate logical antecedents) of knowledge; and these are ultimately particular propositions, and not generalities as modern logicians desire in vain. There is no inconsistency or difficulty whatsoever in such a position, and it is taken for granted by all people of common sense.

Let us now consider certain terminologies and statements. One should always look at the plain meaning of what one reads; and not be intimidated and assume that there is some other, deeper meaning, clear to a select few, but not to oneself. Surely, if the authors meant more than what they are saying, they would say that in plain English; surely, if they are so highbrow, they can formulate a clear sentence. Therefore, one may presume them to be saying just what they seem to mean, and no more (unless of course, it is taken out of context).

All the following statements are claimed to be ‘stable and exact conceptions… that explicate the intuitive concept’ we have of their subject-matter. Many of the ‘proofs’ presented are apparently worked out in relation specifically to numerical concepts, to mathematics, but are mostly understood as having a larger impact, since they provide specimens for ‘axiomatic theory’.

We are concerned here with what Hilbert called ‘proof theory’. In this context, ‘proof’ is taken as a ‘carry[ing over]’ of truth from a given item to some non-given item. We are told an axiom is ‘valid’ if it is ‘a tautology… a sentence true in all possible worlds’; and that not only can this be ‘checked’, but ‘only valid sentences are provable’. Completeness is taken to apply to a system (like the ‘propositional calculus’), if ‘every valid sentence in it… is a theorem’.

The ‘decision’ of validity can be ‘tested mechanically’ by showing that whatever combinations of truth and falsehood are assumed for the letters in the sentence, the sentence as a whole will always ‘come out true’. In a many-valued logic, the ‘independence of the axioms is proved by using more than two truth-values’, although those values may be ‘divided into two classes: the desired and the undesired’; here, an axiom is independent if decidedly desirable, and otherwise it is not.
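
The kind of ‘mechanical testing’ described here is easy to sketch; the following Python fragment (my own illustration, with hypothetical names) simply tries every combination of truth-values for the sentence letters and reports whether the sentence always comes out true.

from itertools import product
def comes_out_true_always(formula, letters):
    # Assign True/False to the letters in every possible combination; the sentence
    # is 'valid' (a tautology) only if it comes out true under all of them.
    assignments = product([True, False], repeat=len(letters))
    return all(formula(**dict(zip(letters, values))) for values in assignments)
print(comes_out_true_always(lambda p: p or not p, ['p']))            # True: 'p or not-p'
print(comes_out_true_always(lambda p, q: (not p) or q, ['p', 'q']))  # False: 'if p then q'

Such a routine of course presupposes that the sentences are already meaningful two-valued forms; in that sense, as argued earlier, the purely ‘syntactic’ test leans on prior semantic and intuitive work.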

‘Functions mechanically computable by a finite series of purely combinatorial steps’ were called ‘recursive’ by Godel. ‘Recursion theory’ is now able to ‘prove not only that certain classes of problems are mechanically solvable (which could be done without the theory) but also that certain others are mechanically unsolvable (or absolutely unsolvable)’. In the latter case, we have ‘no algorithm, or rule of repetitive procedure for solving’ them.
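
As a simple illustration of what ‘recursive’ means here (my own example, not taken from the article), addition can be defined by a repetitive rule and computed in a finite series of purely combinatorial steps:

def add(m, n):
    # add(m, 0) = m ; add(m, n + 1) = add(m, n) + 1
    return m if n == 0 else add(m, n - 1) + 1
print(add(3, 4))  # 7

The ‘mechanically unsolvable’ problems mentioned are precisely those for which no such repetitive procedure exists.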

Note the ‘grains of truth’ in many of these statements; but their proponents remain unconscious of the ‘husks’ of their circularities: they do not test them on themselves. For instance: ‘a world may be assumed in which there is only one object a’, so that ‘all quantifiers can be eliminated’ and we can ‘reduce [that world] to the simple sentence A{a}’; in this way, ‘all theorems of the (predicate) calculus become tautologies (i.e. theorems in the propositional calculus)’.

I ask you: if ‘A{a}’ can be said about ‘a’, do not the symbols ‘A{ }’ also exist? In that case, how can ‘a’ be claimed to be a solitary existent, and all quantifiers eliminated? That is surely a serious inconsistency! Also, on what basis does the author of such statements at all trust his or her intuitions as to what ‘follows’ what, when so and so is ‘assumed’? Surely that constitutes an appeal to something outside the projected framework — another inconsistency!

Lastly, if I tell you tautologies in the Hokan-Coahuiltecan language, will you grasp them? What a poor view of logic these people have, who ‘reduce’ everything to ‘axioms’ which they themselves claim to be nothing more than repetitive nonsense! Clouds circling on and on.

They claim by these and similar ‘methods’ to ‘prove… that the calculus is consistent [and] also that all its theorems are valid’. Or again: ‘its completeness was proved by Godel in 1930; its undecidability by… Church and Turing in 1936’. Look for instance at the following argument, please (it is drawn from the same source, almost word for word). Completeness is taken to mean that ‘for every closed sentence in the language of the theory, either that sentence or its negation belongs to the theory’.

a. if a calculus is complete, then:

either X or nonX belongs to the theory

and ‘all valid sentences are theorems’

b. whence (still for a complete calculus):

if X is consistent, then nonX is not a theorem

if nonX is not a theorem, then nonX is not valid

if nonX is not valid, then X is satisfiable

hence, if X is consistent, then X is satisfiable

(that is, X has an interpretation or model)

Comments: (a) How can it be known to start with that the calculus is ‘complete’? If X and nonX are meaningless, then surely neither of them ‘belongs’ to the theory. (b) How is X known to be ‘consistent’ in the first place? Why cannot nonX also be presumed ‘consistent’? Surely the disjunction in (a) of X and nonX is intended to mean that the theory does not ab initio imply either of them, but is compatible with both; in which case, both may be consistent. Otherwise, the argument is entirely circular: that X is consistent, and that nonX is neither a theorem nor valid, are tacitly granted in (a) and then claimed to be ‘derived’ in (b).

Lastly, with regard to (b): why should consistency imply ‘satisfiability’? X may well seem in itself free of contradictions and still be meaningless (for instance, ‘This sentence is true’). My impression is that by words like ‘if-then’, ‘consistent’, ‘valid’, they refer to a logic so elementary and nonmodal that they are all exactly equivalent: the ultimate in particularity and triviality in theorizing — and they are therefore bound to lead to over-generalizations concerning metalogic.

Thus, the argument as a whole consists of a tangled web of equivocations and ambiguities, of quid-pro-quo and inane tautologies, and alternately non-sequitur or petitio principii sophisms. From such an argument the following grand conclusion is drawn: ‘therefore, the semantic concepts of validity and satisfiability are seen to coincide with the syntactic concepts of derivability and consistency’. What does it all mean? Nothing — or whatever we choose it to mean.

These people have completely confused themselves and each other, with a multiplication of different words for the same things, and words with borrowed but not admitted connotations. They do not consider how their starting points might or might not be arrived at, or how the links between the theses of their hypotheticals are to be established. The cart is put before the horse, and worse still the horse may be a goat, and they travel round and round! This is not logicalscience, by any stretch of the imagination.

3. Infinity in Logic.

Meaning is not a relationship between two sets of words, but between words and things. No ‘model theory’ would be communicable, if the words used by those who describe ‘uninterpreted systems’ to us were not plain English, which means something to us even if ultimately nonverbally. You can go around in circles till you are blue in the face, and you will still only have circles.

Systems with a limited plurality of interpretations are conceivable, but systems totally devoid of interpretation are simply meaningless, as are systems with an infinity of interpretations. This brings us to another trend in modern metalogic: the attempt to evade the issue by relying on infinite formulas. A system has to ultimately be ‘satisfied’ by nonverbal information; it cannot be ‘satisfied’ with reference to an infinite chain of other verbal constructs (called ‘models’).

This issue is not to be confused with the ‘open-endedness’ of the quantifiers ‘all’ and ‘some’. They have an element of indefiniteness, referring usually to a not-fully-enumerated series of individuals; but each individual, as it presents itself, is self-sufficient in its existence, though it is classified with reference to its evident similarities to preceding ones. Whereas in the modern infinities here criticized, each case is defined by the next case.

Logicians have no basis for a belief that an infinity of purely symbolic constructs will acquire meaning and truth at some hazy infinity, as in mathematics, when a curve tends to some vanishing point and may be presumed to actually cross the line ‘at infinity’. Logic cannot ignore the Zeno Paradoxes. The infinite tape of a Turing machine is not physically possible, so why discuss it at all?

We may call this the Anchor Principle: A relation relates something to something; forms do not exist without contents, they have to be eventually pinned down. An infinite nesting of relations within relations within relations remains meaningless, until and unless a term finally consummates it. Infinity is unfathomable, and cannot be treated by logical science as by itself capable of zeroing in on some actuality. ‘The buck has to stop somewhere’.

To be conscious of myself being conscious, I must first be conscious of something else, and then, after that first consciousness is aroused, I can take note of it in a supplementary act of consciousness. If a statement has an infinity of meanings, then it has effectively no meaning, because infinity includes everything, all opposites: that is the ultimate in ambiguity and indefiniteness; there has to be some limit in number of meanings, for the statement to have some specificity.

Modern logicians have, for instance, suggested a study of ‘infinitary logic’, which would ‘include functions or relations with infinitely many arguments, infinitely long conjunctions and disjunctions, or infinite strings of quantifiers’. William Hanf of the U.S. is mentioned in this regard. I must admit that I do not, without knowing any more about it, see how such a study is conceivable, or could bear any fruit.

In fact, knowledge evolves as follows. Our systems are always somewhat meaningful and contextually true. They mostly grow, but sometimes they are modified (they give up some of their assumptions, lose extraneous fat or old skin); as they grow, they also become more defined and proven (or less so, if big inconsistencies or doubts make their appearance). This process tends towards infinity (again, we find that ‘grain of truth’), where omniscience of the world limits the alternative experiences and interpretations to just one (when knowledge will be whole and perfect), where everything has full meaning and final truth — but there is no logical need to presume that this goal is reachable.

The justification for active formal studies, as with scientific experiment, is that they hopefully accelerate that ongoing process of knowledge growth, by strengthening our faculties of consciousness, concentrating our awareness, and orienting us more purposefully towards existing phenomena of many kinds. The role of such studies is not ultimately to vindicate our experiences, but to describe their processes, so as to yet more efficiently, more broadly and deeply, more fully get to know. The blueprints must eventually fit the experiences; this is a test for them, not for the experiences.

The experiences are given, though gradually. We only need to know what the apparent patterns they exhibit are, and whether one apparent pattern is to be preferred to another which is also apparent. Ultimately, we believe — this is the Law of the Excluded Middle, note (much maligned by ‘intuitionist’ logicians) — that some one of those patterns will remain unchallenged. Just as, viewing with a magnifying glass, some positions are more blurred than others, more ambiguous, and we try to find the most sharply defined position among them, the one with the least ambiguity.

It may well be that none of the patterns discerned thus far will be that special one, but what is sure for each one is that it either will or will not be it: there is no third alternative. The inductive value of this principle is that, when we encounter a contradiction, a seeming coexistence of both an ‘is’ and an ‘is not’ with the same terms exactly, we can be sure that the solution is not something other than these two.

The role of logic is to elect, as being ‘real and not illusory’, one subset of our experiences, rather than any other subset of them; one pattern which appears and which we discern, rather than any other. Experience, appearance, as such, as a whole, is already self-sufficiently credible. Logic itself is but a subset of it, and therefore cannot in any wise ever be construed to somehow stand as judge and jury of it.

The meanings and truths of knowledge (including logical science), are an ongoing dynamic product of a syndrome of perceptions, conceptual insights, logical intuitions, ingenuity, and many other interactive factors (including our physiological and psychological makeup). The only consistent position is such a holistic and open one.

Infinity is one of the misconceptions at the very root of modern metalogic. To understand how it arose, we must refer to certain crucial errors modern logicians made in their formulations of class-logic, or set-theory (the reader is referred to ch. 43-45, to avoid repetitions).

a. Modern logicians confuse subsumptive and nominal terms. That is, for instance, dogs and “dogs” are not clearly distinguished by them. But dogs is a subsumptive term; it is not a class at all, it is a non-class. Only nominal terms (expressed distinctively, in inverted commas or with the preamble ‘the class of…’) qualify as classes; and of those, “dogs” is a class (or first-order class), and “dog-classes” is a class of classes (or second-order class). The relationships between these various kinds of terms are precisely formally definable, as we saw, and they may not be equated.

b. Consequently, also, they (often, though not always) confuse classes with classes of classes, and hierarchies with orders. Note that modern logicians are of course aware that there is a difference between classes and classes of classes, and that membership of individuals in a class does not qualify them for membership in classes of that class — that is, that membership is not transmissible (they say, ‘transitive’) from one order to the next (a simple set-theoretic sketch of this point is given below, after these remarks). It is after all they who discovered this field of logic!

But confusion still arises (especially in symbolic contexts, and in some examples they give) between, say, a genus or overclass of “dogs”, like “animals”, and an upper-order class, like “dog-classes”. And the root of this confusion is the said confusion between the roles of subsumptive and nominal terms.

c. Modern logicians consequently assume that there are orders of classes higher than the second. For them, ‘classes of classes of classes’ is a meaningful concept, different from ‘classes of classes’. Just because we can say ‘the parent of the parent of the parent of…’, it does not follow that we can say ‘the class of the classes of the classes of…’. Try to think of an example of the latter, if you can.

As I explained, the sub-classes of “dogs” (like “retrievers”) form a hierarchy, of the first order, and the sub-classes of “dog-classes” (like “retriever-classes”) form another hierarchy, of the second order; and these hierarchies are indeed distinct, though exactly parallel, and they may contain any number of classes (of the appropriate order). But that does not imply that there exists an infinite number of orders: the concept of orders is quite distinct from that of hierarchies.

We can form a concept like “dog-classes” because dogs differ from each other: retrievers differ from bulldogs, and so on; whereas a concept like “classes of dog-classes” has no differences to refer to other than those already encapsulated by the concept of “dog-classes”: it is therefore exactly identical to it, in intent and extent.

Orders higher than the second are therefore just verbal figments of the imagination: they refer to nothing new. All they do is keep reproducing the first and second orders with new names. Their infinite manipulations will not add an iota to the science of logic.

The impression that there exists any number of orders is due to the existence of multiple hierarchies within each of the first two orders (this is the ‘grain of truth’ behind that fallacy); the inference drawn that there are more than two orders merely serves to display that modern logicians confuse orders with hierarchies.

d. Modern logicians tend to give credit to the idea of self-membership, because they conceive of “classes” or “classes of classes” as themselves classes, and more deeply because they do not make a clear distinction between nominal and subsumptive terms.

But as we have seen, classification is a relational concept; the class of all classes is “things” and that of all classes of classes is “things-classes”. There is no indubitable example of self-membership, except in the case of “things” and “things-classes”, and these summum genera can be definitionally excluded. All other alleged examples can be explained away, so that we may inductively write off the whole idea of self-membership, which is anyway conceptually unconscionable (how can a container contain itself?).
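
The non-transitivity of membership mentioned in (b) above can be sketched with ordinary sets in Python (the names are hypothetical, and sets model only the extensional side of these distinctions, not the subsumptive one):

fido, rex, lassie = 'Fido', 'Rex', 'Lassie'
retrievers = frozenset({fido, lassie})                  # a first-order class
bulldogs = frozenset({rex})                             # another first-order class
dogs = retrievers | bulldogs                            # the first-order class "dogs"
dog_classes = frozenset({retrievers, bulldogs, dogs})   # a second-order class, "dog-classes"
print(fido in dogs)          # True: Fido is a member of "dogs"
print(dogs in dog_classes)   # True: "dogs" is a member of "dog-classes"
print(fido in dog_classes)   # False: membership does not carry over to the next order

Note that the variable dogs here stands for the nominal class “dogs”, not for the subsumptive term dogs; and the sketch takes no position on the further claims made above about orders higher than the second.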

Parenthetically, with regard to the Russell Paradox, we have seen that it is resolvable, not by means of this rejection of self-membership, but with reference to the process of permutation it involves. The mere presence of the word ‘is’ in the string ‘is a member of’ does not allow us to split the latter relation into a subsumptive ‘is’ plus a predicate ‘a member of…’. Modal syllogism provides us with a clear independent confirmation that such limits to permutation exist, since interpretation of ‘is capable of’ (a colloquial for ‘can’) as equivalent to ‘is {capable of…}’ results in an invalid syllogism (see ch. 17).

I submit therefore that, at every fork in the road of the logic of class membership, modern logicians have taken the wrong turn. Their choices have been repeatedly improbable and contrary to reason, seemingly with a view to innovate at all costs. One is consequently highly tempted to wonder, in some cases, whether there is not a subconscious urge of nihilism — to deny common sense, to shock and bewilder students, to dominate. Instead of logical science being our vehicle to understanding of reality, it has been turned into a pit of confusion.

I certainly do not want to give the impression that I believe myself all-knowing. My knowledge of modern logic is admittedly patchy and limited; what you see is what you get: all I know is mentioned in these pages. It is not much, because my personal interest in these matters has never been highly stimulated; my interest is in a logic which is of daily utility. Notwithstanding, I believe that the judgments made here are essentially correct, because I find that the deeper I dig, the more I disagree. The extrapolation may be wrong. Okay: I can live with that thought.

4. Conceptual Logic.

In assessing modern logic, we must ask: what is logic, what is its purpose? The mathematically inclined study of logic is a very narrow field, in the grand domain of logic. Our ambition is to develop a conceptual logic, eventually capable of understanding the wealth of qualitative relationships in this wonderful world. Within that larger enterprise, logicians have found it necessary for a while to concentrate their efforts on the quantitative aspects of these relationships; but these are merely effects of, not identical with, the qualitative aspects.

The approach of modern logic is thus very specialized. It was to some extent necessary; it was valuable; but enough is enough. There are deeper and more important issues to look into. Look at the enormous arena of the physical universe; then look at the size of a man’s brain, or the total volume of all the brains on earth. That is exactly the relative importance of conceptual logic, compared with class-logic of the first and second orders.

Conceptual logic is the ‘zero’ order of subsumptive relations, of all predications before any permutation. Class logics of the first and second orders (there are no more orders, as already argued) are merely additional layers over and above the zero order, and of much narrower scope. The logic of classes is also entirely derivable from the logic of subsumption (as we saw); it is only new in the sense of having only recently been explicitly considered. It is an interesting field, but it is not all of logic.

Modern metalogic cannot claim to be beyond meaning; it is always tied to some meaning. But that some meaning is only a fraction of the total meaning. If the full meaning is not taken into consideration, our abstractions are bound to give us a distorted image of things. We must range far and wide to get a proper perspective on things; openly, humbly, flexibly, with respect for the complexities of the issues, their many facets and their depth. It is pointless to rigidly simplify, to reject whatever we are unable to assimilate thus far, to write off whatever befuddles our intelligence.

There is need of a more profound ‘philosophy of logic’. Let us now refer to a New Encyclopaedia Britannica article on this topic, written by K.J.J. Hintikka while at Florida State U. in Tallahassee (25:719-723). We are told that logic is ‘the study of truths based completely on the meanings of the terms they contain’. This is a traditional view, and I agree with it in essence. Indeed, the article goes on: ‘the meanings in question may have to be understood as embodying insights into the essences of the entities denoted by the terms, not merely codifications of customary linguistic usage’. Again, a very sensible position.

However, Kant effectively took ‘the meanings of the terms’ to signify ‘the verbal definitions of the words’, in the sense of tautology. This interpretation has strongly influenced and pervaded modern logic, witness Carnap’s ‘syntactic language’ for instance. But another interpretation is feasible: we know the ‘truth’ of anything, because we are aware of that thing, to whatever degree; the ‘meanings’ of our terms (words) are the things they refer to, of which we are conscious. The equation between words is called true, because those words represent for us such and such objects we have perceived and/or conceived, and these objects were seen to behave in the way asserted. The verbal aspect of judgment is incidental.

Logical science looks at certain specific aspects of the total picture, and attempts to discern certain patterns. For instance, Aristotle’s argument ‘if sight is perception, the objects of sight are the objects of perception’ may at first glance seem obvious, a materially evident inference. But the logician says, ‘no, be careful; sometimes such arguments commit the fallacy of composition, confusing the parts of things with the whole’. He then goes on and tries to isolate the distinctive factors of correct such arguments.

In this case, there seems to be a productive substitutive syllogism, of the following form. In place of the specific relation of ‘seeing’, we put a genus of it, ‘perceiving’, without touching the terms (subject and object) of the relation. It is in effect a change of copula:

All seeing is perceiving

I see a certain object

therefore, I perceive that object

or, in more conditional form,

all seeing of an object is perceiving of that object.

Thus, the argument consists simply in a deeper insight into a case of the ‘seeing’ relation, and discerning its ‘perceiving’ aspect (by comparison to hearing, and so on). The terms ‘I’ and ‘the object’ remain unaffected, because they are also found in the other species of ‘perception’, so that they are accepted as conforming to ‘the perception relation’ in general.

In contrast, ‘all seeing is enlightening’ would lead to an illicit process, for the reason that the relation of ‘enlightening’ refers to some other sorts of terms — in this case, as the object enlightens me, the Subject (instead of vice versa); thus, we must be careful (and preferably specify who is being enlightened). Thus, before making a general statement about any process, we must inductively find the distinct ‘isomorphisms’ of apparently legitimate cases.

In this way, logical science generalizes and formalizes. It just reports the general aspects and conditions of right-seeming arguments, and distinguishes them from the general aspects and conditions of wrong-seeming arguments. The meanings of the words involved are determining, because words refer us to certain preceding conceptual processes (in our example, the awareness of similarity between seeing and say hearing, and their difference from say enlightening, in the ways their subject and object are placed).
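
For readers who like to see such forms written out symbolically, here is a minimal rendering of the substitutive syllogism above in Lean (my own sketch; the predicate names sees and perceives are of course placeholders for the relations discussed):

-- major premise: all seeing of an object is perceiving of that object
-- minor premise: I see a certain object
-- conclusion: I perceive that object
theorem substitutive_syllogism {Agent Obj : Type}
    (sees perceives : Agent → Obj → Prop)
    (major : ∀ a o, sees a o → perceives a o)
    (i : Agent) (x : Obj) (minor : sees i x) :
    perceives i x :=
  major i x minor

The formalization adds nothing to the insight itself; in line with the point just made, the ‘form’ merely records what the comparison of cases has already established.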

Note the primary importance in formalization of comparison and contrast — the intuitions of sameness and difference. Logical concepts, just like all concepts, are built up by identifying and distinguishing phenomena. That different people and peoples evolve common languages or languages with common structures and meanings, is due to the uniformities in the Objects, more so than to physiological uniformities in the Subjects. Not that the latter are irrelevant, of course. Worthy of mention in this connection, is the research by Swiss psychologist Jean Piaget, into ‘the developmental stages of a child’s thought by reference to the logical structures he can master’.

Deductive logic always depends on a certain amount of induction. The intuitions of the logical practitioner are no less trustworthy in principle than the intuitions of the logical theoretician; the latter is just more deliberately careful than the former, he compares and contrasts more. ‘Form’ is itself a content of the world; it is merely considered in isolation from other contents, by the logician. If he is lucky and perspicacious, he will from the beginning make assumptions of lasting value; but there is no guarantee that centuries later someone else will not find fault with his work.

We must precisely understand the stratifications involved in our enterprise. The logic practitioner intuits the material logical relations between material de-re relations (let us call this the ‘zero order’ of logic). The theoretical logician intuits the common and distinctive aspects (or ‘forms’) of material de-re relations having such logical relations, as well as the common and distinctive aspects (or ‘forms’) of material logical relations, and records the material (informal) logical relations between the formal de-re relations (this is ‘first order’ logic), and even between the formal logical relations (this is logic of the ‘second order’, and there are no still higher orders).

The latter two levels (logical science) are performed by an exercise of logical art, the ground level. They are not somehow removed and superior, mere linguistic pronouncements. The language of logic is itself an object, a part of the world, to be explained within that world. It cannot be studied in total abstraction from the world. How could Ludwig Wittgenstein believe that ‘language-games’ can ‘give the expressions of language their meanings’, or Willard Van Quine, of Harvard, consider that ‘relations of synonymy’ — presumably, he means similarity — ‘cannot be fully determined by empirical means’?

It is indeed impossible, as Godel asserted, to completely and consistently axiomatize logic — but why should that surprise? All knowledge is and must be empirically based; words can never on their own acquire meaning or truth. Only a very small part of thinking is ‘mechanical’. Formal logic can to some extent be made ‘recursive’, only because of the preceding intuitions of informal logic.

Even lower animals have some degree of consciousness; but computers (sorry, trusty Old Pal) and robots do not and never will; I cannot speak about ‘androids’ (not having met any lately!). The role of will in consciousness is in awakening and directing it (switching and scanning functions). In lower animals this power is supposedly less ‘free’ than in humans. But consciousness itself is a unique phenomenon, however it is moved. No amount of manipulations of ‘data symbols’ will give a machine consciousness of what they mean; the concept of ‘artificial intelligence’ is misnamed, a gross exaggeration.

Quine’s objections, around 1950, to ‘the non-empirical character of analytic truth (logical truth in the wider sense… arising from meanings only)’, might seem to class him as a defender of empiricism. But to me his position only serves to reveal his failure to trace the empirical roots and development of logic. Logical Positivists, who believe ‘that logical truths are really tautologies’, might seem like pragmatic realists. But I wonder how they lay claim to this ‘really’ of theirs, and how come their words have some communicable content. It seems clear to me that these people have passed all their lives making the trivial manipulations of modern symbolic logic.

Happily, some modern philosophers still do believe that ‘logical… truths are informative’, and not trivial. The issue of ‘cross-identification’ — recognition of individuals, as well as of the uniformities among individuals — is correctly pinpointed as crucial, ontologically and epistemologically. The age-old problem of ‘universals’ is an ongoing challenge for logicians. One cannot quantify, without first having something (qualitative) to quantify. There is no simple solution; the complexities of induction have to be analyzed one by one, specimen by specimen, in excruciating detail. One thing is sure, as Ayn Rand has eloquently said (rephrasing Aristotle’s Law of Identity), ‘existence exists’ (942).