**CHAPTER 64. CRITIQUE OF MODERN LOGIC. **

1. Formalization and Symbolization.

2. Systematization and Axiomatization.

3. Improvements and Innovations.

**1. Formalization and Symbolization. **

Let us first clarify the important difference between the formalization of logic and its symbolization, because these ideas are often confused with each other, as well as with those of systematization and axiomatization.

A categorical proposition like ‘S is P’ or like ‘All S can become P’, or
a conditioning proposition like ‘if P, then Q’ or like ‘either P or Q’, is
called ‘a form’ or characterized as ‘formal’, because its terms or theses are *variables*.
The expressions S, P or P, Q are meant to stand for any set of appropriate
‘contents’ which may arise, as in ‘the sky is blue’ or ‘if we don’t stop
pollution, many people will suffer’.

The discovery of ‘formal logic’ by Aristotle consisted in the
realization that processes like opposition, eduction, or syllogism could be
validated without reference to specific terms. He succeeded in showing us that
such inferences depended for their validity only on the *constants* involved in the forms: the copula, the polarity, the quantity, and eventually
the modality.

Other Greek logicians, like his pupil Theophrastus and Philo of Megara, broadened the idea of formalization, by applying it to hypothetical propositions, in which the variables were propositions instead of terms, and the constant was the ‘if-then-‘ relation. And since then, many other applications have been developed.

In this way, the (contentual) art of logic was refined into a (formal) science. The intuitions of logic were demonstrated to display certain regularities. The expressions S, P or P, Q were effective instruments, not because they were isolated letters, but because they were generalities open to any particular manifestation.

Aristotle could equally have used *whole
words* like ‘the subject’ and ‘the predicate’ for the terms, and indeed did.
Likewise, the antecedent and consequent were originally referred to verbally, for
instance by the Stoic Chrysippus, as ‘the first’ and ‘the second’ theses. The
use of letters was quite incidental; what mattered was the underlying concept of
position and function in certain kinds of statements.

Or again, Aristotle could just as well, and did, teach us *by
way of examples*. Consider any inference, say the conversion of ‘All X are Y’
to ‘Some Y are X’. It suffices to say: ‘all trees are plants’ implies (forces us
to accept to avoid contradiction) ‘some plants are trees’, and *thusly* (similarly) for any other things, other than ‘trees’ and ‘plants’ (leaving us
with only ‘all’, ‘some’ and ‘are’ to focus on). Such specimen or paradigmatic
schemata would also qualify as ‘formal logic’, in the full sense. We do not
absolutely need X’s and Y’s *to communicate* this; they are just neat tools to facilitate the process.
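The paradigmatic inference just given can even be checked mechanically. Here is a minimal Python sketch (reading the terms as sets, and assuming, as traditional logic does, that the subject class is non-empty; the framing is my own illustration, not the author's):

```python
# Illustrative sketch: conversion of 'All X are Y' to 'Some Y are X',
# read set-theoretically, with existential import (X assumed non-empty).

def all_are(xs, ys):
    """'All X are Y': every member of X is a member of Y."""
    return xs <= ys

def some_are(xs, ys):
    """'Some X are Y': X and Y share at least one member."""
    return len(xs & ys) > 0

trees = {"oak", "pine"}
plants = {"oak", "pine", "moss"}

assert all_are(trees, plants)   # 'all trees are plants'
assert some_are(plants, trees)  # hence 'some plants are trees'
```

Nothing here depends on the particular sets chosen; any non-empty `trees` contained in `plants` would do, which is precisely the formal point.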

The achievement of those logicians was *conceptual*;
it had very little to do with ‘symbolization’. Formalization was a major
advance, by virtue of making us *aware* of the various components of propositions, and the role each played in
determining inference. They were *pointed
out* to us. The variables were clearly *distinguished* from the constants, and the latter from each other, and it was found that the
impacts of two or more propositions on each other *depended
only on* their forms, not on their contents.

Changes in the values of the variables, without changes in constants, did not affect the form of the result; whereas changes in the constants, without changes in the values of the variables, could yield different formal results. This is the extraordinary significance of formal logic, and its beauty and magic.

Also qualifying as ‘formal logic’, though it makes no use of symbols, is *the
descriptive method*. A case in point is the ‘rules of distribution’, which
are used to explain inferential processes by simply telling us about their inner
workings in plain words. In that sense, all discussions on logic qualify as
formal, including the symbolic. Note in any case that there is no *purely* symbolic communication: we always have to explain the symbols to each other
first, in ordinary language, to give them some meaning.

What makes logic formal, in conclusion, is its *goal* of somehow describing our thinking processes about the world, with a view to
prescribing some of them rather than others. The means for carrying that out may
vary. There are different research and teaching strategies and techniques.
Symbolization, to whatever degree, is just one of many such strategies, which
may or may not be the best.

Let us now contrast modern symbolic logic. All it did was to substitute
certain typographies (isolated Latin or Greek *letters*,
or newly invented doodles), for pre-existing words. These ‘symbols’ did not
serve as variables, but merely as stand-ins for constants: they performed no
function that the original verbal expressions did not or could not fulfill. They
implied no new conceptual insight whatsoever; they added exactly zilch to human
knowledge, *nada, rien*.

Thus, for example, the copula ‘is’ might be symbolized by a sign of equality, and the polarity ‘not’ written as a minus sign (or a curl); the quantifiers ‘all’ and ‘some’ (or ‘there exists’) are replaced by an upside-down capital-A and a laterally-inverted capital-E; necessity and possibility are signaled by square and diamond shapes; a dot plays the role of ‘and’, ‘if-then-‘ becomes an arrow (or a horseshoe or a fishhook), and ‘v’ stands for the inclusive ‘or’. Many other such symbolizations have been introduced; we need not go into the details.
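For reference, the notations just described can be set out compactly (a summary in standard LaTeX notation; the tabulation is mine, not the author's):

```latex
% A compact summary of the symbolizations named above:
%   'not p'          :  \neg p          (a curl; sometimes a minus sign)
%   'all x'          :  \forall x       (upside-down capital A)
%   'some x'         :  \exists x       (laterally-inverted capital E)
%   'necessarily p'  :  \Box p          (square)
%   'possibly p'     :  \Diamond p      (diamond)
%   'p and q'        :  p \land q       (often written with a dot)
%   'if p then q'    :  p \rightarrow q (or the horseshoe \supset;
%                                        a 'fishhook' for strict implication)
%   'p or q' (incl.) :  p \lor q        (the letter v)
```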

All the concepts underlying the symbols had already been discovered, and
previously named. All modern logicians did was introduce *a smaller string of letters*. Even if
that was a harmless exercise, it cannot by any standards be viewed as an
earth-shattering feat. *A symbol is just
another word*, only more brief (and less widely understood) than the
corresponding word of ordinary language.

The medieval symbolization of whole (actual categorical) propositions, as **A**, **E**, **I**, **O** (I have added **R** and **G**, for the singulars) was a valuable achievement, simply because it
gave us a way to refer to them briefly: longer words take up more attention span
of their own. Likewise, my introduction of modal symbols like **n**, **a**, **p**,
which can be used as suffixes to **A**, **E**, **I**, **O**, as in **An** or **Ap**,
was very useful to me, as a tool in clarifying modal argument.

I am not putting down symbolism as such, but only warning against certain
excesses and mistakes it can engender. Since symbolization as such was no
novelty (it had long been used in mathematics), applying it to all the
components of propositions can only be regarded as an argument for *a
language of shorter words*; that was surely a very minor contribution.

But, in any case, there is a world of difference between symbolizing a variable and symbolizing a constant. The various values of a variable are very different from each other; their only common ground is having a certain position in the proposition concerned. Whereas a constant is uniform in all its manifestations; for instance, every instance of the ‘is’ relation is thought to be identical in all respects, except for being another individual instance. Thus, symbolization does not play the same role in both cases; for variables, it signifies formalization, but for constants, it has no such effect.

The only way to formalize the constants of categorical propositions would
be as follows. Let **Q** mean ‘whatever
the quantity, be it *all* or *some*‘; and **P** mean
‘whatever the polarity, positive or negative’; and even **R** mean ‘whatever the relation, whether the copula *is* or some other’ (and eventually ‘**M**‘
mean ‘whatever the modality’, if you want to get fancy). Then, if x and y are
variable terms, the general form of a categorical proposition is QxPRy (or QxMPRy).
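This hypothetical general form could be recorded as a data structure; a minimal Python sketch (the field names are my own illustrative choices, not the author's notation):

```python
from dataclasses import dataclass

# Illustrative sketch only: the hypothetical general form QxPRy (or QxMPRy),
# with every component but the terms treated as a 'formalized constant'.

@dataclass
class Categorical:
    quantity: str   # Q: 'all' or 'some'
    subject: str    # x: a variable term
    modality: str   # M: 'whatever the modality' (the optional refinement)
    polarity: str   # P: 'positive' or 'negative'
    relation: str   # R: 'is', 'becomes', or some other copula
    predicate: str  # y: a variable term

p = Categorical("all", "trees", "actual", "positive", "is", "plants")
assert p.quantity == "all" and p.predicate == "plants"
```

Note that such a structure merely records the form; it supplies no inference rules, which is just the difficulty raised in the next paragraph.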

The trouble with this idea is that *we
know of no logical processes which apply to a form so vague as this: so there
would be no point in constructing it.* If you lump together disparate
propositions, or worse whole families of propositions, there is virtually
nothing left to say about them. Logic develops by differentiating one form from
another.

Similarly, for an all-embracing form of logical relation: the relations
of conjunction, implication, and disjunction, *have
no common properties*, or none of any significance. It is for that reason
that no such form has made its appearance in human language, and it would be
useless to try and invent one.

Modern symbolic logic gives *the
impression* that it is extending the scope of formalization, by its
symbolizations, but it is in fact doing nothing of the kind. It is just
abbreviating language. I admit that such abbreviation is occasionally not
without utility: it allows us to display a lot of data in a small space, and
thus more readily perceive its patterns. But it can also be counterproductive,
in many ways (see ch. 1).

For a start, few people have the inclination to learn and memorize the meanings of symbols. Doubly fatiguing are shapes which are not letters in our alphabet. The result of this is that a large segment of the general public, even intellectuals, are discouraged from studying logic; and it becomes the reserve of only a handful of academics. Yet the point of science is to educate people, not scare them away. Logic is a valuable segment of the human cultural heritage; and, properly presented, can be very enjoyable to study.

Logical science has to resort to some artifice, because ordinary language is not always ‘consistent’: idioms vary, there are synonyms (same meaning, other word) and homonyms (same word, other meaning). Logic selects and ‘freezes’ certain senses of the words it uses, so as to avoid all ambiguity or equivocation. It symbolizes some of these words, to shorten them. This does not affect the data which concerns it.

But even for logicians, symbolization can be very misleading, and should only be carried out as a final embellishment to a pre-existing system. If anything, symbols are likely only to conceal the weaknesses of a system; they may give it an appearance of finality it may not deserve, and arrest development.

The truth of this warning is evident in the many errors of logic congealed and enshrined by various modern theoreticians. If the underlying concepts are wrong, or too narrow in scope, then the systems built upon them are bound to be confusing.

A case in point is the belief that implication can be defined nonmodally as ‘It is not true that {the antecedent is true and the consequent is false}’, or (inclusive) disjunction as ‘It is not true that {both theses are false}’. These definitions refer to actual negation of actual conjunction, but they are misinterpretations of what we actually mean when we say ‘if-then-‘ or ‘-or-‘ (see ch. 23).

Of course, one can use words as one wishes, as long as they are clearly defined; but why misrepresent one’s achievements? Why not just admit that one is dealing with the limited field of negative conjunction, and avoid all ambiguity? To define ‘if p, then q’ as ‘not-{p and notq}’ results in ignorance of the contradictory form ‘if p, not-then q’, since its interpretation would then have to be simply ‘p and q’. Likewise, defining ‘p or q’ as ‘not-{notp and notq}’ forces us to interpret ‘not-{p or q}’ as ‘notp and notq’.
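These collapses are easy to verify mechanically; a brief truth-table check in Python (my illustration, not the author's):

```python
from itertools import product

# 'Philonian' (truth-functional) definitions as negations of conjunctions:
implies = lambda p, q: not (p and not q)       # 'if p then q' as not-{p and notq}
either  = lambda p, q: not (not p and not q)   # 'p or q' as not-{notp and notq}

for p, q in product([True, False], repeat=2):
    # The contradictory of material 'if p then q' collapses to 'p and notq':
    assert (not implies(p, q)) == (p and not q)
    # The contradictory of material 'p or q' collapses to 'notp and notq':
    assert (not either(p, q)) == (not p and not q)
```

The table thus confirms the text's point: under these definitions the contradictory forms are forced into simple conjunctions, whatever we actually mean by ‘if-then-‘ and ‘-or-‘.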

In any case, such ‘Philonian’ implication or disjunction is of little use
to formal logic, which even when it compares ‘material’ relations does so in
terms of ‘strict’ relations. For example, the middle implication in “‘p
materially implies q’ *implies* ‘notq
materially implies notp'” is a strict one: it is this that gives it
significance, and that is how it is understood.
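The point can be illustrated with a small table: the material relation and its contraposition agree in every truth-combination, so the implication connecting them holds in all cases, i.e. necessarily, which is what makes it strict rather than merely material (a Python sketch of my own):

```python
from itertools import product

mat = lambda p, q: not (p and not q)  # material ('Philonian') implication

# 'p materially implies q' and 'notq materially implies notp' agree in
# every truth-combination; the link between them therefore holds in all
# cases, not merely in the actual one.
for p, q in product([True, False], repeat=2):
    assert mat(p, q) == mat(not q, not p)
```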

Moreover, although modern logic has lately become more aware of modal
definitions of implication and disjunction, these concepts are still understood
in a very limited sense of logical relations. *De-re* forms of conditioning have as yet been very rarely discussed, and so far as
I know there is no integrated formal system which includes them, other than the
present one. Concepts like the ‘basis’ of conditioning, or like modal induction,
seem to be totally ignored.

Even if logicians themselves are aware of the limited intent of their vocabulary, others may easily be misled, especially since the whole is cast in esoteric symbols. But in fact, the hasty departure from ordinary language has misled the logicians themselves. Ordinary language is rich in still unexplored meanings, whose elucidation is an ongoing process: they have locked themselves out of that process to a great extent.

Perhaps the best way to express my misgivings is through the following principle:

*If a theory of logic cannot be comprehensively and convincingly
constructed in ordinary language, it cannot be built any more solidly by using a
symbolic treatment.*

While on the topic of purely verbal changes, I would like to make an additional comment, concerning changes in terminology.

A process may be known to previous logicians, but they have not seen fit to assign it a special label, preferring for the sake of brevity to refer to it merely by a defining phrase. Likewise, past logicians may have noticed a fine distinction between very similar processes, but refrained from naming them distinctively, to avoid unnecessary pomposity. If later logicians come along, and rename something already named, or name something not previously named, for reasons like those, it does not signify that they discovered anything.

Making new, finer distinctions is sometimes valuable, of course; but
cosmetic changes should not be regarded as momentous contributions. It depends
on how useful they are found to be by the society using them. Language has to
change over time. But it is well to remember Ockham’s *Razor*,
which in this context would be that names are not to be needlessly multiplied.

**2. Systematization and Axiomatization. **

Now, let us consider the other processes which are much touted by modern logicians — systematization and axiomatization. These terms are often used interchangeably, with each other and with formalization and symbolization, but they are not the same.

We would label a theory of logic, or concerning some specific field of logic, as ‘systematic’ to the extent that it is broad-based, and its various findings are well-integrated with each other, cohesively bound together by common threads, so that their relations to each other are made evident. We expect some degree of comprehensiveness; and a consistent and enlightening ordering of the data.

By *broad-based*, is meant that all seemingly relevant information has
been compiled and taken into consideration, so that we can be confident that our
judgment proceeds from a sufficiently large context. To the extent that some of
the data is unconsciously or willfully ignored, it is reasonable to suppose that
some hidden difficulties exist that have not been taken into account in
formulating the theory; out in the open, such difficulties might alter or cause
rejection of the theory.

What makes a theory systematic, once some database is gathered, is
primarily that it ‘structures’ the information, through comparisons and
contrasts. Integration presupposes differentiation; but analysis and synthesis
proceed together, feeding on each other. Structuring both unifies and divides,
making the *patterns* of sameness and
difference involved apparent. We want to arrange the data in such a way, that
its full message is apparent to all, with ease.

The purpose of this exercise is to consolidate all items of data, to show how each sits comfortably in the whole context. It obliges us to review the data methodically, again clarifying and checking our methods. Issues and themes are highlighted. The apparatus holding the data together is made transparent. Some arguments are seen to be ‘reasonable’, others more ‘forced’. New ideas emerge, strengthening the whole, or enabling growth. The crucial factors become more visible.

An example of *integration* would be the definition of the various types of plural
modality, through similar statistical formulas, differing only in their focal
singularity. Thus: ‘in all or some or no or not-all cases, circumstances, times,
instances, contexts’ — the focal determinant varies, but the quantitative
aspect and the understood meaning of ‘in-ness’ are similar.
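The shared statistical pattern can be sketched in a few lines of Python (the function names and the example are my own labels, not the author's):

```python
# Illustrative sketch: the plural-modality definitions share one quantitative
# pattern; only the focal determinant (cases, circumstances, times, instances,
# contexts) varies from type to type.

def in_all(determinants, pred):
    return all(pred(d) for d in determinants)

def in_some(determinants, pred):
    return any(pred(d) for d in determinants)

def in_no(determinants, pred):
    return not in_some(determinants, pred)

def in_not_all(determinants, pred):
    return not in_all(determinants, pred)

circumstances = ["c1", "c2", "c3"]
holds = lambda c: c != "c3"  # the predicate holds in some but not all
assert in_some(circumstances, holds) and in_not_all(circumstances, holds)
```

Substituting times or instances for circumstances changes nothing in the formulas themselves, which is the integration the text describes.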

We can then differentiate the various categories of modality, within each
type, though the horizontal analogies remain apparent. The vertical development
of modal propositions, from the categorical forms to the forms of conditioning,
is also integration: the threads of continuity are made evident. The significant
relation here, is not a ‘logical’ one (in the sense of implication, or the
like), but more accurately the highlighted *similarities
and differences*: these are much more primary.

A badly, or not at all, integrated theory, is one which lists information
without rhyme or reason, incoherently. The value and purpose of integration, is
its ability to clarify things for us and make them more ‘understandable’. The
best way to achieve that goal is open to debate, at least *ab initio*.
The axiomatic model favored by modern logicians is one possible answer, which
must be evaluated; it is not automatically the best. Furthermore, it is not as
‘primary’ as it is claimed to be.

There are two ways data may be *ordered*.
One, is the ‘historical’ order, the exact order in which the author or all
authors collectively actually developed the research; this is commonly avoided
because of all the trial and error involved in making theories: taken to an
extreme it would be a waste of time and confusing, akin to ‘stream of
consciousness’ literature. Two, is some sort of ‘logical’ order.

If we ask what constitutes a logical order, our first impulse may be to suggest that it is ‘deductive’, proceeding from the general to the particular. This is the model used, for instance, by geometrical science; and it is effectively prescribed by modern logicians for all science, including logic. We are to highlight the independent postulates and show what may be derived from them, including prediction of empirical data, ‘if any’.

However, upon reflection, the issue is not essentially one of what is written down before what; one could equally well present things in an ‘inductive’ order, from the particular to the general, provided the same empirical data is taken into consideration and all consistencies are maintained. The choice is only esthetic, or related to didactic value.

But, in the last analysis, neither of these characterizations of logical
order is accurate. The truth is that there are both inductive and deductive
elements in all knowledge, and all sciences. There is *no
such thing* as an ‘a priori’ purely deductive science: it would consist of
meaningless words. And indeed there is no such thing as an ‘a posteriori’ purely
inductive science: it would consist of nothing more than the impressions of the
moment, unrelated to each other and unevaluated.

The so-called *axiomatic* model proposed by modern logicians for logical science, is
claimed to be a purely deductive process, whose ‘axioms’ are therefore at best
exactly identical with the empirical data base, or at worst devoid of any
empirical reference. If the ‘axioms’ are intended as postulates related to
certain other data in the way of adduction (by prediction and confirmation), as
in Physics, then they cannot be properly regarded as ‘axioms’ in a purely ‘a
priori’ system.

Realistic systematization, a strictly-speaking logical order, must
include an awareness of the *conceptual* under-currents of knowledge. Even geometry, whether Euclidean or otherwise,
involves empirical data: if we did not have a perception of physical or mental
space, we would be unable to formulate postulates like those about parallelism
at all. The same applies to logical science: without a mass of experiences, like
(say) that things ‘have properties’ or that many things may have ‘similar’
properties, there would be no forms *to* put in any order.

The first condition for accurate systematization is, not ‘axiomatization’
of mysterious entities, but awareness of the genesis of the entities to be
ordered. Inquiry into the *‘genealogy’* of the concepts involved, their precise concrete and abstract referents,
logically precedes any generalizing propositions concerning their
interrelations, and the selection of some of these propositions as postulates
(or axioms, in the proper sense) ‘standing before’ and ‘encompassing’ the
others.

Ordering presupposes having something to order; ordering meaningless ‘symbols’ does not give them meaning. If they symbolize nothing, we are ordering nothing. If they do indeed represent something, then the ordering is referring to certain perceptions and conceptions, and is not an arbitrary construct. ‘Axiomatization’, taken in its extreme sense, has nothing to do with scientific systematization; it is just convoluted artwork.

Let us now consider *the
actual order of things for logical science*:

a. First, it looks at some *categorical* propositions, discerning their various parts, taking note of their positions and
relative variations, and (to begin with) *intuitively* estimating their functions in the whole. The most
constant part is ‘is’, but others may take its place, like ‘becomes’ for
instance; somewhat less constant are ‘all’, ‘some’, ‘not’, and the like; most
variable are the so-called terms, whose roles seem to be such and such.

Next, the logical relations (oppositions, eductions, syllogisms) between
such propositions are, again, (to begin with) *intuitively* estimated. What seems ‘contradictory’ is rejected, what does not is called
‘consistent’, apparent ‘implications’ are noted, and so forth.

All such intuitions are taken at their face values, unless further intuitions interfere, and impose a selection. For us, intuitions are pure data.

b. Second, the logician confronts conditional propositions. Until now, we have been aware peripherally of relations which involve more than two terms, more than one categorical proposition; in particular, we noticed our intuitive access to ‘logical’ relations. Now, we repeat the same intuitive analysis for conditionals, as we used for categoricals.

We dissect them, and note the similarities and differences and the apparent functions. We distinguish varieties like incompatibility, conjunction, implication, disjunction of various sorts. We discover that there are not only ‘logical’ such conditionings, but also other types, and we try (to begin with) to intuit their analogies and distinctions. Then we (again, to begin with) intuit the logical relations (oppositions, eductions, syllogisms, apodoses, and so forth) among the various forms of logical conditionings, and likewise for other types of conditional propositions.

c. Only now, *at the last,* can we
propose to *identify* (equate) our
logical intuitions of various sorts, with the constructs designed to explain
them. This is where we order the data *on a
grand scale* (some more limited ordering has been performed in the course of
the two preceding stages, now we consider the whole).

This grand ordering *itself*, is
done by means of *intuitions* (of
similarities and differences, and of apparent logical relations). Its role is
only to test and confirm, by sifting through the data with that goal in mind,
that our constructs are indeed able to reproduce the *preceding* logical and other intuitions.

The stage of overall systematization, which includes efforts of
‘axiomatization’ (in a proper, limited sense), is a final effort. There can be
no pretensions that this final ordering *justifies* the intuitions which were used before it and in performing it. Its only
intention is to confirm the theory that our constructions of ‘logical relations’ *correspond to* the intuitions we have
labeled ‘logical’, that they predict the same results.

It is the particular intuitions (including those used in the final ordering) which justify the proposed postulates, by showing that, yes indeed, they (are intuited to) make the correct predictions, and everything fits together without apparent contradictions. Ergo, our constructs for logical relations are ‘valid’.

That, briefly put, is the *logical* order of things in logic, in the strictest sense, an order which acknowledges
the empirical antecedents (both perceptual and conceptual, and including all
intuitions of sameness and difference, as well as those of conflict and forced
inference). And that this genealogical order is displayed in practice, in
personal intellectual development and in the history of logic, is no accident.

‘Axiomatization’ advocates (in the extreme sense) completely reverse this order, and posit the forms as preceding the contents, and logical constructs as more dependable than logical intuitions. Those premises are fundamentally wrong; they are inattentive to actual conceptual developments.

With these preliminaries in mind, I want now to criticize certain broader attitudes which have shaped the direction of modern logic. I do not wish to appear overly critical, but too often, looking at the work of modern logicians, I am reminded of H.C. Andersen’s story of the emperor’s new clothes. In less ironic moods, the words ‘a guide for the perplexed’ float in my mind.

Let us look into some more specific statements, reflecting the attitudes
of modern logic, with reference to the *New
Encyclopaedia Britannica* articles on the kinds of logic (23:234-235,
250-272, 279-289), namely the introduction, the section on ‘Formal
Logic’ (authored by **G.E. Hughes** of
Wellington University, in New Zealand), and the section on ‘Applied Logic’
(authored by **N. Rescher** of Pittsburgh
University). I deal with the section on ‘Metalogic’ in a later chapter.

The introductory article at the outset declares the traditional division between deductive and inductive logic ‘obsolete’, on the grounds that induction is nowadays dealt with under the heading of ‘methodology of the natural sciences’, so that ‘logic’ is henceforth to be taken to refer specifically to deduction.

I do not agree with this position, for a start. As I think has been clearly shown throughout my work, deductive and inductive logic make up a continuum; they are not radically distinct. Deduction is a limiting case of induction, where logical probability hits the ideal 100% mark; and induction is in a sense a form of deduction, in that it also prescribes a specific conclusion from given premises, albeit one of lesser logical probability.

In practice, there is no clear demarcation line between the two; how we categorize a judgment is very relative. If all the conditions are specified, we call it deductive — but that label ultimately depends on how well established the premises are inductively. If the conclusion is the most probable thesis in a disjunction of conclusions of varying probability, we call it inductive — but in the context of the law of generalization it is effectively deductive.

Furthermore, to limit the concept of induction to that of methodology of the ‘natural sciences’ is a very narrow viewpoint. Inductive logic, as we have seen, is not of value only to the institution of science, but to all human thought; and it does not apply only to material phenomena (the usual connotation of ‘nature’), but to any phenomena, without prejudice as to their nature (which may be mental or even spiritual).

In any case, let us note that the descriptions and histories of ‘Logic’ here considered refer specifically to deductive logic in isolation, and move on. Inductive logic is dealt with in a later chapter.

Next, the article on ‘Formal Logic’ effectively distinguishes between the
logic of categorical propositions and that of conditioning (using other
terminologies: ‘logic of noun expressions’ and ‘logic of propositions’), but
regards the latter as *in principle* ‘logically prior’ to the former, although historically
in fact a later development.

Again, I am not in total agreement with this modern viewpoint. Although
it is true that, say, categorical syllogism is *ex-post-facto* visibly *a case of* hypothetical
proposition (the *production of* ‘if
these premises, then that conclusion’), it does not follow that one can think of
logical relations before one has developed some concept of categorical
propositions.

The historical order is not accidental, but reflects the fact that the
concept of a ‘proposition’ must have *some
meaningful content* (a categorical, non-logical relation) before we can think
of ‘propositions about propositions’. The attempt to cast thought processes into
an artificial ‘logical order’, without regard to the constraints of conceptual
development, is not justifiable.

I dealt with this issue in more detail during the discussion of the laws of thought (ch. 20), where I showed that:

**The ‘geometrical’ model of axioms and inferences is in any case not
applicable to the science of logic as a whole (though it can be used
fragmentally), since it is a function of logic to justify it (for all other
sciences).**

The attempt of some logicians to reduce logic to a concept like
implication signals an unawareness of the conceptual order. *We
can only construct such complex concepts out of simpler, more empirical and
intuitive, notions*; they cannot be taken as primaries (see ch. 21).

A ‘logical order’ which is not conceptually feasible does not reflect any practical reality; it is a meaningless exercise.

Further on, the article describes ‘formal’ logic in the following terms.

It abstracts from their content the structures or logical forms that they embody… to enable manipulations and tests of validity. Formal logic is an a priori, not an empirical study.

The logician is concerned only with [the deductive necessity of the conclusion]… the determination of the truth or falsity of the premises, is the task of some special discipline or of common observation, etc., appropriate to the subject matter of the argument. [Inferences which] differ in subject matter… require different procedures to check… their premises.

In my view, I am sorry to say, this is utter nonsense. Even the process of abstracting form from content is an inductive one, presupposing certain perceptual and conceptual experiences. The morphologies which are studied by the logician do not come out of the blue; nor are they arbitrary constructs which later find interpretations.

At every moment in the enterprise of knowledge, including formal logical
science, *a complex interplay of inductive and deductive forces* is involved.
These ingredients continuously check and correct each other, in a holistic way.
Deduction without induction or induction without deduction are impossible.

The intuition of contradiction is itself an inductive act, and the logical necessity of a deductive conclusion very much depends on the logical probability of the premises. Likewise, empirical observation cannot ultimately be separated from its deductive evaluation, in the broadest possible context.

Next, the work of logicians is described as follows. I am quoting at length so as to display the ideas of modern logicians in their own words.

The construction of a system of logic… consists in setting up… a set of symbols, rules for stringing these together into formulas, and rules for manipulating these formulas. Attaching certain meanings to these symbols and formulas [is viewed as a separate process, and it is claimed that] systems of logic turn out to have certain properties quite independently of any interpretations that may be placed on them.

Certain unproved formulas, known as axioms, are taken as starting points, and further formulas [theorems] are proved on the strength of these… the question whether a proof of a given theorem in an axiomatic system is a sound one or not depends solely on which formulas are taken as axioms and on what the rules are for deriving theorems from axioms, and not at all on what the theorems or axioms mean.

Moreover, a given uninterpreted system is in general capable of being interpreted equally well in a number of different ways; hence in studying an uninterpreted system, one is studying the structure that is common to a variety of interpreted systems. Normally a logician who constructs a purely formal system does not have a particular interpretation in mind, and his motive for constructing it is the belief that when this interpretation is given to it the formulas of the system will be able to express sound or true principles in some field of thought.
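What such an ‘uninterpreted system’ amounts to can be made concrete with a toy sketch (my own illustration, using Hofstadter’s well-known MIU rules; nothing here is drawn from the quoted authors): an axiom string and purely mechanical rewriting rules, with theoremhood settled by rule application alone, regardless of what (if anything) the strings mean.

```python
# A toy 'uninterpreted' formal system: strings over {M, I, U}, one
# axiom ("MI") and mechanical rewriting rules (Hofstadter's MIU system).
# Whether a string counts as a theorem depends only on the rules.

def successors(s):
    """All strings derivable from s by one rule application."""
    out = set()
    if s.endswith("I"):              # Rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):            # Rule 2: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):      # Rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):      # Rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def theorems(axiom="MI", depth=4):
    """Every string derivable from the axiom in at most `depth` steps."""
    known = {axiom}
    frontier = {axiom}
    for _ in range(depth):
        frontier = set().union(*(successors(s) for s in frontier)) - known
        known |= frontier
    return known

ths = theorems()   # e.g. "MIU" is a theorem; "MU" is famously not derivable
```

The point the quoted authors make is visible here: the set `ths` is fixed by the rules alone. My objection, developed below, is that choosing *these* rules rather than others is never done in a meaningful vacuum.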

Again, a very negative reaction from this here logician. The traditional division between inductive and deductive logic is an artificial measure, allowing theoreticians to develop and teach the subject gradually. But once all the processes involved in knowledge acquisition have been adequately described and validated, it must be recognized that they are common aspects of all actual thought, with no isolated existence in practice.

In reality, there are no relations without things related, nor any things
without some sort of (positive or negative) relations to others. For these
reasons, a ‘purely formal system’ of logic is a pretension: either it is totally
meaningless, or it naively or dishonestly conceals its intended meaning(s). A
‘symbol’ which is not a symbol *of
something*, has no right to the name, which is only applicable to something
representing something else.

It is no wonder that modern logicians have been able to develop so many ‘alternative systems’, if one considers the fact that more often than not they proceeded too impatiently, without bothering to first analyze sufficiently the concepts they were using. With so many issues left tacit and unresolved, it was inevitable that situations would arise where the outcomes were uncertain. A case in point is the treatment of modality.

On the surface, it is of course quite reasonable that a word or symbol may have generic value, so that its properties are those in common to various specific applications, while each variety may additionally display special properties of its own. We have encountered this often in the course of our research, and that is not the point at issue.

But note in passing, that in any case the generic properties of the
different types of modality and relations can only be obtained by
generalization, *after* separately
analyzing each type; we summarize what is in common to the special logics, we do
not discover them without specific studies.

Quite different, is the claim that any ‘system’ of logic can be developed relativistically, without even a hidden reference to any experienced phenomena.

‘Symbols’ have no magical powers. Any system can of course be symbolized: this is just a change of language, which may be more convenient. But there has to be some underlying significance to what is being said. One can intuitively grasp logical and other relations without words or symbols, but the latter are nonsensical without the former.

Needless to say, *all the new findings in the present work can equally well be expressed in a ‘symbolic system’*, and no doubt someone will choose to do it sooner or later. But had I begun with meaningless symbols, instead of first approaching the subject-matter conceptually and with reference to pregnant ordinary language, there would be no data to systematize, no system to symbolize. There is no magic formula, no way to avoid the need for empirical and intuitive consciousness.

To top it all, modern logicians present their theories in a manner they find esthetically pleasing, and call ‘rigorous’, and then make grandiose claims, like:

Aristotle’s methods of reduction achieved something approaching an informal axiomatic system of syllogistic… [but] a formalized axiomatic system of syllogistic has, in fact, been made possible only by the methods of modern logic.

Excuse my frankness, but these modern contributions are comparatively trivial. Aristotle taught us all about the syllogism; and codified the whole, by declaring the three laws of thought to be fundamental. He provided the tools; it is of little relevance that he did not take the time to use them fully.

The modern pretension at ‘systematization’ could not exist, without Aristotle’s lessons of rigorous formally-validated reasoning (with the example of categorical syllogism), and the model he presented us with, of a science based on three main general ‘axioms’ capable of controlling all subsequent processes. He taught them how it was done; how can they now claim to be one up on him? They are just carrying on his programme!

Re-ordering data in as structured a way as possible, so as to actualize
all their potential, is of course valuable work. But it is surely not
revolutionary. For instance, the ‘axiomatization of syllogistic’ by Jan
Lukasiewicz was perhaps a clarification, an interesting rewording and
classification of the components of such reasoning; but it was a relatively easy
thing to do, *ex-post-facto*, after
Aristotle’s work in the field.

One can hardly claim that ‘the derivations… depend entirely on rules for the manipulation of concatenations of signs, and not any intuitive insights or knowledge of the meaning of these signs’. Why this desire for magical mumbo-jumbo? Why not view it simply as an effort to grasp all facets of traditional logic, using some more or less novel ‘techniques’ or emphases?

There is no great expansion of substance, nor any great review of
fundamentals. To put down Aristotle’s contribution as ‘informal’, and to suggest
that ‘formal systematization’ was *‘made
possible’* by modern ‘methods’, is surely hyperbolic. It suffices to say that
improvements were made.

More will be said on these issues in the chapter on metalogic (ch. 66).

**4. Improvements and Innovations. **

Let us now look at modern logic with a more positive emphasis, and consider certain theoretical details in the context of the theories developed in the present treatise. The reader is again referred to the encyclopedic article on ‘Formal Logic’, and to that on ‘Applied Logic’. Examples of specific improvements are many.

More attention was paid to the invalidation of invalid syllogisms, whereas previously the validation of valid syllogisms had been the center of attention (ch. 10.2f). Various properties of axioms were stipulated: they must together be complete, consistent, interpretable, nonredundant, independent.

An improved notation for categorical propositions was introduced by
Lukasiewicz, in which ‘lower case Latin letters… are employed to stand for
general terms’, as in **A**ba for ‘all b are a’ (I personally would have transposed the
letters a and b). Diagrams were designed, such as those of **John Venn** (in 1880), to illustrate quantitative aspects of
relationships (but some of these, in my opinion, are too artificial to reflect
any easygoing functioning of the mind).
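Both points just mentioned (the Lukasiewicz-style notation, and the modern concern with invalidating invalid moods) can be sketched together (my own illustration; the encoding, term names, and the convenient use of a three-individual universe are assumptions of the sketch, not anyone’s official method): a mood is valid if and only if no assignment of extensions to its terms makes the premises true and the conclusion false.

```python
from itertools import combinations, product

# Terms as subsets of a small universe; forms written in the spirit of
# the Lukasiewicz notation: ('A', 'b', 'a') reads 'all b are a'.
UNIVERSE = (0, 1, 2)
SUBSETS = [frozenset(c) for n in range(len(UNIVERSE) + 1)
           for c in combinations(UNIVERSE, n)]

def holds(form, ext):
    kind, x, y = form
    X, Y = ext[x], ext[y]
    return {"A": X <= Y,             # all x are y
            "E": not (X & Y),        # no x are y
            "I": bool(X & Y),        # some x are y
            "O": bool(X - Y)}[kind]  # some x are not y

def valid(premises, conclusion):
    """A mood is valid iff no assignment of extensions makes the
    premises true and the conclusion false (exhaustive search, which
    is feasible in so small a universe)."""
    for S, M, P in product(SUBSETS, repeat=3):
        ext = {"s": S, "m": M, "p": P}
        if all(holds(f, ext) for f in premises) and not holds(conclusion, ext):
            return False
    return True

# Barbara (AAA-1) is validated; the invalid mood AAA-2 is refuted
# by a countermodel found in the search.
barbara = valid([("A", "m", "p"), ("A", "s", "m")], ("A", "s", "p"))
aaa_2   = valid([("A", "p", "m"), ("A", "s", "m")], ("A", "s", "p"))
```

Note that, in line with the remarks above, the search only *confirms* what Aristotle’s formal validations already established; the machinery adds convenience, not substance.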

The logical properties of categoricals with singular or negative terms were further analyzed (for instance, their oppositions). Forms with complex terms were considered (ch. 42.1). The intricacies of logic were worked out to an extreme degree (ch. 27).

In that context, modern logicians learned ‘to draw hypothetical conclusions from categorical premises’, a process which was called ‘conditionalization’. Though the concept arose in the context of ‘material implication’ (p implies {q implies p}), and is invalid in strict implication (unless p is necessary, see ch. 31), it may be seen as a precursor of productive argument (ch. 29.3).
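The ‘material implication’ basis of conditionalization can be checked by brute truth-table (a minimal sketch of my own): p → (q → p) holds under every assignment, whereas strict implication is not truth-functional, so no such table can establish the analogous strict claim, which matches the reservation just stated.

```python
from itertools import product

# Material implication as a truth function.
imp = lambda a, b: (not a) or b

# 'Conditionalization' rests on the tautology p -> (q -> p):
# true under all four assignments of truth values.
tautology = all(imp(p, imp(q, p)) for p, q in product((True, False), repeat=2))
# Strict implication is not truth-functional, so this table proves
# nothing about it -- hence the restriction noted above.
```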

It is also interesting to note that **John
Neville Keynes**, in 1887, stressed the difference between (*de-re*)
‘conditional’ propositions, which concern ‘implication between terms’, and
(logical) hypotheticals, which concern ‘implication between propositions’ (see
ch. 33). **Georg von Wright** (of
Finland, b. 1916), should be mentioned for his work in the logic of preference
and deontic logic.

There was also a lot of work done in the field of *modal
logic*. Distinction was made between logical necessity (as in ‘2 + 2 = 4’)
and contingent truth (as in ‘France is a Republic’), and similarly on the side
of falsehood. The oppositional relationships between the various categories of
modality of this type were clarified, on ‘intuitive’ grounds. Strict implication
was identified with necessity of ‘material implication’. Symbols were introduced
for modal notions, such as L for necessity or M for possibility.

Propositions like ‘if p is necessary, then p is true’ (subalternation) and ‘if it is necessary that p implies q, then the necessity of p would imply the necessity of q’ (an apodosis, see ch. 30.1), were posited as axioms, which ‘appear to have a high degree of intuitive plausibility’. Others, like ‘if p is true, then p is possible’, and ‘if {p and q} is necessary, then p is necessary and q is necessary’ were viewed as derivative theorems. The superimposition of modal categories, as in ‘a necessity is necessarily-necessary’, was investigated (seemingly, in an attempt to distinguish relative and absolute contextuality).

But many conclusions, like ‘the [material] disjunction of p is necessary and q is necessary, [materially] implies the strict disjunction of p and q’, were of doubtful validity; and some, like ‘if p is true, then p is necessarily-possible’, were trivial or misleading. This ‘modal system’ was labeled T; others were constructed with more or less similar premises, such as S4, S5, and B (for Brouwer), which added superimposition propositions as axioms instead of mere side-issues.

At the root of these difficulties, as already mentioned, lay a failure to prepare for these purely technical discussions of modality, by a preliminary philosophical analysis of the concepts involved. An adverb like possibly ‘is taken as the fundamental undefined modality in terms of which the other modalities are constructed’. The attempt to work ‘intuitively’, without first clearly defining the categories of modality within the framework of each modal type, was bound to result in confusions and disagreements.

So-called systems were worked out first, and only thereafter were their properties (allegedly) found to match this or that type of modality. Given the truth values of the components of a modal formula, it was ‘not obvious how one should set about calculating the truth value of the whole’. It was suggested that ‘necessity is truth in every “possible world” or “conceivable state of affairs”‘ — a valuable statement which can be likened to my definitions of necessity (see ch. 11, 21).
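The ‘truth in every possible world’ suggestion can be given a concrete shape (a minimal Kripke-style sketch of my own; the frames, worlds, and valuations are invented for illustration): necessity at a world is truth at every accessible world, and axiom T then stands or falls with the reflexivity of the accessibility relation.

```python
def necessary(p, world, access):
    """'Necessity as truth in every possible world': p is necessary at
    `world` iff p holds at every world accessible from it."""
    return all(p(w) for w in access[world])

# A reflexive frame (every world sees itself): axiom T, 'if p is
# necessary then p is true', cannot fail here.
reflexive = {0: {0, 1}, 1: {1}}
p = lambda w: w == 0                      # p true only at world 0
t_holds = (not necessary(p, 0, reflexive)) or p(0)

# Drop reflexivity and T can fail: world 0 sees only world 1, where q
# is true, so q is 'necessary' at world 0 although false there.
nonreflexive = {0: {1}, 1: set()}
q = lambda w: w == 1                      # q true only at world 1
t_fails = necessary(q, 0, nonreflexive) and not q(0)
```

This also illustrates my complaint: whether T (or S4, S5, B) ‘holds’ is settled by a technical choice of frame, while the prior philosophical question, which frame reflects which type of modality, is left unasked.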

But that definition seems circular and vague, without the stratification between primitive, intuitive notions, and their more constructed, conceptual derivatives. As far as I can see, there was no major effort to understand modality ontologically and epistemologically; no precise explanations for the similarities and differences in behavior of the various types of modality, no theory as to how modalities are or are-to-be induced day by day by common humans.

More will be said on the developments in modal logic in the next chapter (ch. 65).

There was a strong interest in developing a theory of *class
membership*. This concerns the relationship between ‘collections (finite or
infinite)… of objects of any kind’ and the individual objects themselves.
Membership and its absence, having been identified as relations, were assigned
special symbols. A class was to be defined ‘either by listing all of its members
or by stating some condition of membership’. Classes with all the same members,
yet different specifying conditions, were ‘extensionally identical’.

A class without members (like ‘Chinese popes’) was referred to as a null (or empty) class, and it was realized that ‘there is only one null class’. The ‘complement’ (negative version) of the latter would be ‘the universal class… of which everything is a member’. The technical impact of the concept of an empty class on Aristotelean categoricals was considered, and its ‘existential import’ was discussed (My own treatment of such issues appears under the heading of ‘modalities of subsumption’, ch. 41).
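The extensional point about the null class can be illustrated with sets (an analogy of my own; the domain and conditions are invented): two never-satisfied membership conditions, however differently stated, pick out one and the same empty extension.

```python
# 'There is only one null class': two quite different, never-satisfied
# conditions yield extensionally identical (empty) classes.
domain = range(10)
evens_above_nine = frozenset(n for n in domain if n % 2 == 0 and n > 9)
evens_with_odd_squares = frozenset(n for n in domain
                                   if n % 2 == 0 and (n * n) % 2 == 1)
```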

A distinction was made between classes which are themselves members of classes (called ‘sets’ by some logicians), and those which are not so. It was understood that ‘class membership is… not a transitive relation’, that is, a member of some class is not thereby a member of the classes of classes of which that class is a member.
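The non-transitivity of membership is easily exhibited in the same set-theoretic analogy (names invented for illustration):

```python
# Class membership is not transitive: Socrates is a member of the class
# of men, and that class may be a member of a class of classes, but
# Socrates is not thereby a member of the latter.
socrates = "Socrates"
men = frozenset({socrates})
species = frozenset({men})            # a class whose sole member is a class

in_men = socrates in men              # membership at the first level
men_in_species = men in species       # membership at the second level
carried_over = socrates in species    # does NOT follow
```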

However, the seemingly reasonable assumption (known as ‘the principle of comprehension’) that ‘for every statable condition there is a class (null or otherwise) of objects that satisfy that condition’ was ‘found to lead to inconsistencies’ — specifically, Russell’s paradox concerning ‘the class of all classes that are not members of themselves’. This was indeed an important finding.
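The core of Russell’s finding can be displayed as a bare boolean impossibility (my own reduction, for illustration): if R were ‘the class of all classes that are not members of themselves’, then ‘R is a member of R’ would have to be true exactly when it is false.

```python
# Russell's paradox as a boolean impossibility: no truth value can
# satisfy (R in R) == not (R in R), so no such class R can exist under
# the naive principle of comprehension.
satisfiable = any(r_in_r == (not r_in_r) for r_in_r in (True, False))
```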

Russell tried to resolve the difficulty by suggesting that statements like ‘X is (or is not) a member of X’ may be ‘ill-formed’ and only applicable to classes at different levels in some ‘hierarchy’. But no explanation was apparent as to why such restriction should occur, why all predicates do not behave in the same way.

Other solutions were therefore proposed. For instance, Lesniewski suggested, in the context of his theory of whole and part (called ‘mereology’), that a confusion between ‘the distributive and the collective interpretations of class expressions’ was involved (but I fail to see the relevance of this suggestion: which of the terms in the paradox are collective?). In any case, of all the solutions proposed so far, ‘none has won universal acceptance’.

My own solution is not with reference to issues of self-membership, or to
any hierarchy or stratification of classes (though I have clarified more
traditional and legitimate senses of that kind of concept), but with reference
to the known process of *permutation*.
The principle of comprehension is too loosely formulated, in that the expression
‘statable condition’ seems to refer to any arbitrary verbal utterance, without
regard for the conceptual feasibility of grouping and isolating the words used
(see ch. 43-45).

More will be said about class logic in later chapters (see ch. 66.3 and 67.1).

There has been notable progress, in this century, in *extending
the scope of logic theory*. The logical properties of new forms, of more
limited applicability, were researched. This field was called ‘applied logic’,
because it dealt with more specific relations than those traditionally studied
by logicians. It was viewed as one step closer to ‘material logic’ than current
‘formal logic’.

Parenthetically, let me state that I do not agree with this division, or its underlying beliefs. Ultimately, the whole science of logic is ‘formal’, whether it is expressed to some degree or other symbolically or not at all, whether its principles are expressed in detailed schemata or in broad statements (though the former method is clearer), and whether its subject-matter is of broad applicability and manifold in properties (like ‘X is Y’), or less often useful (like ‘X sings to Y’).

When we notice that in practice a certain inference seems ‘logical’, but we cannot place this event in the list of thought forms already assimilated by the current science of logic, we classify the event as ‘material logic’. But this is a mere convenience; it should not be taken to imply that there is some unformalizable force at work. There are usually still formal aspects to the event, except in the extreme case where the combination of features (copulative or modal) is unique. The latter case is conceivable (this would be ‘material logic’ properly speaking), but I suspect very unlikely: a solitary, unrepeatable universal.

In any case, however we call it, ‘the applications envisaged in applied logic cover a vast range, relating to reasoning in the sciences and in philosophy, as well as in everyday discourse’. This is a constructive programme, with which I whole-heartedly agree: an all-out effort to uncover the formal aspects of as much of human reasoning as possible, till each inference which seems logical in practise is fully understood and justified in theoretical terms.

The exact dividing line between logic and more specific sciences, like psychology, ethics, and natural sciences (including biology), is ultimately artificial and irrelevant. All knowledge is one enterprise, and specialization should not be excessive, insular. Everyone, of course, is agreed on this point.

Subjects arousing interest, apart from those mentioned above (like the logic of logic, the logic of modality, and the logic of classes), included:

(i) ‘epistemic’ logic (knowing) and ‘doxastic’ logic (belief), as well as the logic of questions;

(ii) ‘practical’ logic, comprising the logics of preferences, command, obligation (‘deontic logic’); Rescher elsewhere mentions ‘boulomaic logic’ (concerning hopes and fears, ‘bulimic’ would be more accurate English) and ‘evaluative logic’ (concerning good and bad);

(iii) ‘physical’ logic, concerning time and space; Rescher elsewhere mentions causality (see White, 168).

I have noted some of the work done in some of these fields in earlier chapters. With regard to ‘practical logic’, this classification mixes psychological (desire and aversion) and ethical (teleological) phenomena rather indiscriminately, as far as I am concerned. With regard to ‘physical logic’ (a misnomer, in my view, since these concepts are not limited to the physical domain), I would include the logic of change as of fundamental interest (see ch. 17 for an introduction); the logic of causality is of course very important too (see ch. 42.2 for an introduction).

I would like first to mention Aristotle’s ‘inductive syllogism’, which ‘establishes a general rule by making an assuredly complete examination of all cases (one by one) and showing it to be true for every one’. This is also known as argument by enumeration, and looks as follows:

Each of S1, S2, S3,… is S,

and Nothing other (than these pointed-to things) is S,

but Each of S1, S2, S3,… is P,

therefore, All S are P.

This is of course the ideal case of ‘induction’ (effectively, a ‘deduction’), where all the objects of a certain kind can be pointed to, and the conclusion merely states in abbreviated form what is already given in the premises. The implicit singular propositions are supposedly given by observation. For each of the objects, there is an underlying third figure singular syllogism, which establishes the link between the attributes S and P.
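Mechanically, the argument can be sketched as follows (a minimal illustration of my own, with an invented closed domain): once the enumeration is complete, the conclusion is a mere summary of the premises, which is why the ‘induction’ here is effectively deductive.

```python
def complete_enumeration(all_the_S, is_P):
    """The 'inductive syllogism' with a closed enumeration: every S has
    been listed, and each is checked for P; 'All S are P' then merely
    summarizes the premises."""
    return all(is_P(x) for x in all_the_S)

# Invented closed domain: S1, S2, S3 are all the S there are, and each
# is observed (here, stipulated) to be P.
the_S = ["S1", "S2", "S3"]
conclusion = complete_enumeration(the_S, lambda x: x in {"S1", "S2", "S3"})
```

By contrast, for an open-ended concept no such exhaustive pass is available, which is exactly the problem of induction discussed next.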

In practice, induction is of course more of a problem, because most of our concepts are open-ended, and we cannot enumerate every instance. But with this model, Aristotle may be said to have founded the science of formal inductive logic. We will later consider in more detail more recent developments in this field, but for now let us take note of some related issues which are also mentioned in the article on ‘applied logic’.

Some modern logicians have made a distinction among hypotheticals, with ‘reference to the status of the antecedent’. It could be ‘problematic (unknown), or known-to-be-true, or known-to-be-false’, so that three kinds of conditional emerge: the ‘problematic… “Should it be the case that p — which it may or may not be — then q”; the factual… “Since p, then q”; and the counterfactual… “If it were the case that p — which it is not — then q”‘. Such ‘contrary-to-fact’ theses were seen to have ‘a special importance in the area of thought experiments in history as well as elsewhere’. It was recognized that this had to concern strict, rather than ‘material’, implication.

This idea may be viewed as a precursor of my fuller theory of ‘bases’ for logical (see ch. 25.1, 31.3) and other types (ch. 34.1, 38.1, 39.1) of conditionals, and also for disjunctions. However, note that they are referring to problemacy and knowledge, rather than as I do to logical modalities (for hypotheticals). Such propositions may of course also be viewed as compounds, of a generic conditional and a statement about the antecedent, with the latter left tacit or expressed in parentheses or made explicit.

With reference to counterfactuals, the Polish linguistic theorist **Henry
Hiz** (b. 1917) suggested (effectively) that they were constructed by applying
a general law (unstated but implied as underlying accepted fact), to a specific
case (the known to be false antecedent). Such laws might be ‘all or part of the
corpus of scientific laws’. ‘This approach has been endorsed by **Roderick
Chisholm** of Brown University… [and] many recent writers’.

This was an interesting development. First, however, I would like to point out that this is just enthymeme, argument with a suppressed premise or a hypothetical with part of its antecedent left tacit — known since antiquity. Also, although we do indeed in practice draw in this way on a reservoir of strongly accepted ‘laws’, we must admit that in a more holistic epistemological context such ‘laws’ (unless strictly self-evident) are not distinguishable from other propositions: they are subject to eventual review like any others.

But, apparently, Hiz was formulating a (first figure) syllogism, with a categorical (general) major premise and a categorical (specific) minor premise, whose conclusion was to be a (counterfactual) hypothetical. I am therefore inclined to say that Hiz was struggling with the notion of ‘productive’ argument, which I have developed considerably (see ch. 29.3, 36.3, 40.2).

Also, the above objection to a special status for ‘laws’ seems to have been grasped, because researchers proceeded to analyze the problem without necessarily taking the major premise for granted. Instead, the falsehood of the conclusion was seen to put both premises in doubt, so that a more open evaluation is required: ‘a contradiction obviously ensues. How can this situation be repaired?’ They went on to enumerate the various alternative combinations of premises and conclusions (or their negations) which would overcome the contradiction.

They go on to discuss whether it is better in such cases to ‘sacrifice… a particular fact in favour of a general law… [or] a law to a purely hypothetical fact’, and suggest that ‘in actual cases one makes laws give way to facts, but in hypothetical cases one makes the facts yield to laws’. However, there still ‘remains… a choice between laws’, cases where ‘the distinction between facts and laws does not resolve the issue’, so that ‘some more sophisticated mechanism for a preferential choice among laws is necessary’.

I see these contemporary ideas as akin in concept to my theory of ‘revision’, where conflicts are resolved with reference to alternative outcomes. The harmonization depends on the relative credibilities of the theses, and we may interpret the suggestion that actual facts are superior to laws, and laws in turn to hypothetical facts, as a principle for instant decision-making. However, that formulation is inadequate, not only because of its failure to reconcile disagreements among laws, but also because it presumes that we can always distinguish actual from hypothetical facts.

It is for these reasons that I developed my whole theory of ‘factorial
induction’, which is still much broader in scope than these contemporary
stirrings (see part VI). I believe that this theory qualifies as the ‘more
sophisticated mechanism’ that is being called for. It is interesting that these
issues conclude the encyclopedic articles on Logic, because I view them as being
at the cutting edge of logical science, which is why I have called my work *Future
Logic*.

More will be said on these issues in the chapter on inductive logic (ch. 67).