Phenomenology

VII. The Active Role of Logic

1. Principles of Adduction

2. Generalization is Justifiable

3. Logical Attitudes

4. Syllogism Adds to Knowledge

5. There is a Formal Logic of Change

6. Concept Formation

7. Empty Classes

8. Context

9. Communication

1. Principles of Adduction[1]

The concepts and processes of adduction are fundamental tools of human cognition, which only started becoming clear in recent centuries thanks to philosophers like Francis Bacon and Karl Popper. Even so, many people remain unaware of this important branch of logic to this day. Logic is the art and science of discourse. Like all logical principles, those of adduction are firstly idealized descriptions of ordinary thinking, and thereafter prescriptions for scientific thought.

Anything we believe or wonder about or disbelieve may be considered a theory. Everything thinkable has some initial credibility at first glance, but we are for this very reason required to further evaluate it, otherwise contradictories would be equally true! Adduction is the science of such evaluation: it tells us how we do and should add further credibility to a theory or its negation. To adduce evidence is to add logical weight to an idea.

A theory T is said to predict something P, if T implies P (but does not imply nonP). A theory T may predict the negation of something, i.e. nonP; we might then say that T disclaims P; in such case, T implies nonP (but does not imply P). A theory T may not-predict P, or not-predict nonP, which are the same situation by our definition (i.e. where T does not imply P and does not imply nonP); we might then say that T is neutral to P (and to nonP).[2]
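As a rough illustration only, these three relations can be encoded as follows (a minimal Python sketch; the names Stance and stance are invented for the example, not the author's notation):

```python
from enum import Enum

class Stance(Enum):
    PREDICTS = "predicts"    # T implies P (but does not imply nonP)
    DISCLAIMS = "disclaims"  # T implies nonP (but does not imply P)
    NEUTRAL = "neutral"      # T implies neither P nor nonP

def stance(implies_p: bool, implies_nonp: bool) -> Stance:
    """Classify a theory T's relation to an item P from two implication flags."""
    if implies_p and implies_nonp:
        # per note [2]: a theory implying both P and nonP is inconsistent
        raise ValueError("inconsistent theory: implies both P and nonP")
    if implies_p:
        return Stance.PREDICTS
    if implies_nonp:
        return Stance.DISCLAIMS
    return Stance.NEUTRAL
```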

A theory T has always got at least one alternative nonT, at least to start with[3]. Normally, we do not have only one theory T and its negation nonT to consider, but many theories T1, T2, T3, etc. If any of these alternatives are compatible, they are improperly formulated. Properly formulated alternatives are not merely distinct but incompatible[4]. Let us henceforth suppose we are dealing with such contraries or contradictories, so that the alternatives in the disjunction ‘T1 or T2 or T3 or…’ are mutually exclusive[5].

Theories depend for their truth on internal consistency and consistency with all other knowledge, both the theoretical and the empirical. Here, we are concerned in particular with estimating the truth, or falsehood, of theories with reference to their predictions or lack of them.

· By correct (or true) prediction we mean that T predicts P and P indeed occurs, or that T disclaims P and nonP indeed occurs.

· By incorrect (or false) prediction is meant that T predicts P whereas nonP is found to occur, or that T disclaims P whereas P is found to occur.

Ultimately, occurrences like P or nonP on which we base our judgments have to be mere phenomena – things which appear in our experience, simply as they appear[6].

If a theory seems true at first sight, it is presumably because its alternative(s) was or were quickly eliminated for some reason – for example, due to inconsistency, or because of obviously untenable predictions. If no alternative was even considered, then the first theory – and its alternative(s) – must be subjected to consistency checks and empirical tests. By the latter term we refer to observation (which may be preceded by experiment) of concrete events (and eventually some of their abstract aspects), to settle issues raised by conflicting theories.

It is conceivable that only one theory concerning some issue be at all thinkable; but this situation must not be confused with that of having only succeeded in constructing one theory thus far. For it also happens that we have no theory for the issue at hand (at present and perhaps forever), and we do not conclude from this that there is no explanation (we maintain that there is one, in principle). It must likewise be kept in mind that having two or more theories for something does not ensure that we have all the possible explanations. We may later (or never) find some additional alternative(s), which may indeed turn out to be more or the most credible.

Alternative theories may have some predictions in common; indeed they necessarily do (if only in implying existence, consciousness and similar generalities). More significant are the differences between alternative theories: that one predicts what another disclaims, or that one predicts or disclaims what another is neutral to; because it is with reference to such differences, and empirical tests to resolve issues, that we can confirm, undermine, select, reject or establish theories.[7]

If a theory correctly predicts something, which at least one alternative theory was neutral to, then the first theory is somewhat confirmed, i.e. it effectively gains some probability of being true (lost by some less successful alternative theory). If a theory is neutral to something that an alternative theory correctly predicted, then the first theory is somewhat undermined, i.e. it effectively loses some probability of being true (gained by a more successful alternative theory). If all alternative theories equally predict an event or all are equally neutral to it, then each of the theories may be said to be unaffected by the occurrence.

Thus, confirmation is more than correct prediction and undermining more than neutrality. By our definitions, these terms are only applicable when alternative theories behave differently, i.e. when at least one makes a correct prediction and at least one is neutral to the occurrence concerned. If all alternatives behave uniformly in that respect, they are unaffected by the occurrence, i.e. their probability ratings are unchanged. Thus, confirmation (strengthening) and undermining (weakening) are relative, depending on comparisons and contrasts between theories.[8]

Furthermore, we may refer to degrees of probability, (a) according to which and how many theories are confirmed or undermined with regard to a given occurrence, and (b) according to the number of occurrences that affect our set of theories. If we count one ‘point’ per such occurrence, then (a) in each event the theory or theories confirmed share the point, i.e. participate in the increased probability, while that or those undermined get nothing; and (b) over many instances, we sum the shares obtained by each of the theories and thus determine their comparative weights (thus far in the research process). The theory with the most accumulated such points is the most probable, and therefore the one to be selected.[9]
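By way of illustration, here is a minimal sketch of this point-counting model, continuing the Stance encoding above (the data layout and function name are assumptions made for the example, not the author's notation):

```python
def tally(stances, outcomes):
    """stances: {theory: {event: Stance}}, one stance per theory per event;
    outcomes: {event: True if P occurred, False if nonP occurred}.
    Each event is worth one 'point', shared among the theories that
    correctly predicted it; theories neutral to it gain nothing.
    (Theories making incorrect predictions are dealt with by rejection,
    as the text explains further on.)"""
    scores = {t: 0.0 for t in stances}
    for event, p_occurred in outcomes.items():
        correct = [t for t, s in stances.items()
                   if (s[event] is Stance.PREDICTS and p_occurred)
                   or (s[event] is Stance.DISCLAIMS and not p_occurred)]
        # if all theories behave alike here, all are 'unaffected'
        if correct and len(correct) < len(stances):
            for t in correct:
                scores[t] += 1.0 / len(correct)
    return scores  # the highest accumulated score marks the theory to select
```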

Note that it may happen that two alternative theories T and nonT, or a set of theories T1, T2, T3… are in equilibrium, because each theory is variously confirmed by some events and undermined by others, and at the end their accumulated points happen to be equal. This is a commonplace impasse, especially because in practice we rarely do or even can accurately assign and compute probability ratings in the way of the ideal model suggested above. We often end up relying on judgment calls, which people make with varying success. But of course, such decisions are only required when we have to take immediate action; if we are under no pressure, we do not have to take a stand one way or the other.

If any prediction of a theory is incorrect, then the theory is rejected, i.e. to be abandoned and hopefully replaced, by another theory or a modified version of the same (which is, strictly speaking, another theory), as successful in its predictions as the previous yet without the same fault. The expression ‘trial and error’ refers to this process. Rejection is effective disproof, or as near to it as we can get empirically. It follows that if T incorrectly predicts P, then nonT is effectively proved[10]. So long as a theory seemingly makes no incorrect predictions, it is tolerated by the empirical evidence as a whole. A tolerated theory is simply not-rejected thus far, and would therefore be variously confirmed, undermined, or unaffected.

A theory is finally established only if it was the only theory with a true prediction while all alternative theories made the very opposite prediction. In short, the established theory had an exclusive implication of the events concerned. Clearly, if nonT is rejected, then T is our only remaining choice; similarly, if all alternatives T2, T3… are rejected, then the leftover T1 is established[11]. We may then talk of inductive proof or vindication. Such proof remains convincing only insofar as we presume that our list of alternative theories is complete and their respective relations to their predictions correct, as well as that the test was indeed fully empirical and did not conceal certain untested theoretical assumptions. Proof is deductive only if the theory’s contradictory is self-contradictory, i.e. if the theory is self-evident.
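Continuing the same illustrative sketch, rejection and establishment might be expressed so:

```python
def survivors(stances, outcomes):
    """A theory is rejected upon any incorrect prediction; the rest are
    'tolerated' by the evidence so far. A lone survivor among the
    formulated alternatives counts as inductively 'established'."""
    alive = []
    for t, s in stances.items():
        refuted = any(
            (s[e] is Stance.PREDICTS and not p_occurred) or
            (s[e] is Stance.DISCLAIMS and p_occurred)
            for e, p_occurred in outcomes.items())
        if not refuted:
            alive.append(t)
    return alive  # len(alive) == 1: the survivor is established, for now
```

As the text cautions, such ‘establishment’ is only as good as the completeness of the list of alternatives and the correctness of the stances assigned to them.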

Once a theory is selected on the basis of probabilities or established because it is the last to withstand all tests, it retains this favored status until, if ever, the situation changes, i.e. as new evidence appears or is found, or new predictions are made, or new theories are constructed.

It is important to note that, since new theories may enter the discussion late in the day, events which thus far had no effect on the relative probabilities of alternative theories or on a lone standing theory may, with the arrival on the scene of the additional player(s), become significant data. For that reason, in the case of selection, even though correct predictions or neutralities may previously have not resulted in further confirmations or undermining, they may suddenly be of revived interest[12]. Likewise, in the case of establishment, we have to continue keeping track of the theory’s correct predictions or neutralities, for they may affect our judgments at a later stage.

Certain apparent deviations from the above principles must be mentioned and clarified:

Ø Note that well-established (consistent and comparatively often-confirmed) large theories are sometimes treated as ‘proofs’ for narrower hypotheses. They are thus regarded as equivalent to empirical evidence in their force. This gives the appearance that ‘reason’ is on a par with experience with respect to evidence – but it is a false impression.

More specifically: say that (a) I guessed or ‘intuited’ the measure of so and so to be x, and (b) I calculated same to be x. Both (a) and (b) are ‘theories’, which can in fact be wrong, yet (a) being an isolated theory (or offhand guess) is considered confirmed or rejected by (b), because the latter being broader in scope (e.g. a mathematics theorem) would require much more and more complex work to be put in doubt.

The more complicated the consequences of rejecting an established hypothesis, the more careful we are about doing such a thing, preferring to put the pressure on weaker elements of our knowledge first.

Ø Note also here the following epistemological fallacy: we often project an image, and then use this imagined event as an empirical datum, in support of larger hypotheses. In other words, speculations are layered: some are accepted as primary, and then used to ‘justify’ more removed, secondary speculations. By being so used repeatedly, the primary speculations are gradually given an appearance of solidity they do not deserve.

The term ‘fact’ is often misused or misunderstood. We must distinguish between theory-generated, relative fact and theory-supporting, absolute fact.

a) ‘Facts’ may be implied by one’s theory, in the sense of being predicted with the expectation that they will be found true, in which event the theory concerned would be buttressed. Such ‘facts’ are not yet established, or still have a low probability rating. We may call that supposed fact. It is properly speaking an item within one’s theory, one claimed to be distinguished by being empirically testable, one that at first glance is no less tentative than the theory that implied it.

b) In contrast, established fact refers to propositions that are already a source of credibility for the theory in question, being independently established. The logical relation of implication (theory to fact) is the same, but the role played by the alleged fact is different. Here, a relatively empirical/tested proposition actually adds credibility to a proposed theory.

2. Generalization is Justifiable

The law of generalization is a special case of adductive logic, one much misunderstood and maligned.

In generalization, we pass from a particular proposition (such as: some X are Y) to a general one (all X are Y). The terms involved in such case are already accepted, either because we have observed some instances (i.e. things that are X and things that are Y) or because in some preceding inferences or hypotheses these terms became part of our context. These terms already overlap to at least a partial extent, again either thanks to an observation (that some things are both X and Y) or by other means. The generalization proper only concerns the last lap, viz. on the basis that some X are Y, accepting that all X are Y. There is no deductive certainty in this process; but it is inductively legitimate.

The general proposition is strictly speaking merely a hypothesis, like any other. It is not forever fixed; we can change our minds and, on the basis of new data (observed or inferred), come to the alternate conclusion that ‘some X are not Y’ – this would simply be particularization. Like any hypothesis, a generalization is subject to the checks and balances provided by the principles of adduction. The only thing that distinguishes this special case from others is that it deals with already granted terms in an already granted particular proposition, whereas adduction more broadly can be used to invent new terms, or to invent particular as well as general propositions. To criticize generalization by giving the impression that it is prejudicial and inflexible is to misrepresent it. We may generalize, provided we remain open-minded enough to particularize should our enlarged database require such correction.

Some criticize generalization because it allows us to make statements about unobserved instances. To understand the legitimacy of generalization, one should see that in moving from ‘some X are Y’ to ‘all X are Y’ one remains within the same polarity of relation (i.e. ‘are,’ in this case); whereas if one made the opposite assumption, viz. that some of the remaining, unobserved instances of X are not (or might not be) Y, one would be introducing a much newer, less justified relation. So far we have only encountered Xs that are Y; what justification do we have in supposing that there might be Xs that are not Y? The latter is more presumptive than assuming a continued uniformity of behavior.

Note this argument well. When we generalize from some to all X are Y, we only change the quantity involved. Whereas if, given that some X are Y, we supposed that some other X are also Y and some are not Y, we change both the quantity and the polarity, for we are not only speculating about the existence of Xs that are not Y, but also saying something about all X (those known to be Y, those speculated to also be Y and those speculated to be not Y). Thus, preferring particularization to generalization on principle would be the more speculative posture.

Whence, generalization is to be recommended – until and unless we find reason to particularize. Of course, the degree of certainty of such a process is proportional to how diligently we have searched for exceptions and not found any.

To those who might retort that an agnostic or problematic position about the unobserved cases would be preferable, we may reply as follows. To say that is to suggest that “man is unable to know generalities.” But such a statement would be self-contradictory, since it is itself a claim to generality. How do these critics claim to have acquired knowledge of this very generality? Do they claim special privileges or powers for themselves? It logically follows that they implicitly admit that man (or some humans, themselves at least) can know some generalities, if only this one (that ‘man can know some generalities’). Only this position is self-consistent, note well! If we admit some generality possible (in this case, generality known by the logic of paradoxes), then we can more readily in principle admit more of it (namely, by generalization), provided high standards of logic are maintained.

Moreover, if we admit that quantitative generalization is justifiable, we must admit in principle that modal generalization is so too, because they are exactly the same process used in slightly different contexts. Quantitative generalization is what we have just seen, the move from ‘some X are Y’ to ‘all X are Y,’ i.e. from some instances of the subject X (having the predicate Y) to all instances of it. Modal generalization is the move from ‘(some or all) X are in some circumstances Y’ to ‘(some or all) X are in all circumstances Y,’ i.e. from some circumstances in which the XY conjunction appears (potentiality) to all eventual surrounding circumstances (natural necessity). It is no different a process, save that the focus of attention is the frequency of circumstances instead of instances. We cannot argue against natural necessity, as David Hume tried, without arguing against generality. Such a skeptical position is in either case self-defeating, being itself a claim to general and necessary knowledge!

Note that the arguments proposed above in favor of the law of generalization are consistent with that law, but not to be viewed as an application of it. They are logical insights, proceeding from the forms taken by human thought. That is to say, while we induce the fact that conceptual knowledge consists of propositional forms with various characteristics (subject, copula, predicate; polarity, quantity, modality; categorical, conditional), the analysis of the implications of such forms for reasoning is a more deductive logical act.

Thus, generalization in all its forms, properly conceived and practiced, i.e. including particularization where appropriate, is fully justified as an inductive tool. It is one instrument in the arsenal of human cognition, a very widely used and essential one. Its validity in principle is undeniable, as our above arguments show.

3. Logical Attitudes

Logic is usually presented for study as a static description and prescription of forms of propositions and arguments, so that we forget that it is essentially an activity, a psychic act. Even the three Laws of Thought have to be looked at in this perspective, to be fully understood. To each one of them, there corresponds a certain mental attitude, policy or process…

a) To the Law of Identity, corresponds the attitude of acknowledgement of fact, i.e. of whatever happens to be fact in the given context. Here, the term ‘fact’ is meant broadly to include the fact of appearance, the fact of reality or illusion, or even the fact of ignorance or uncertainty. Also, the attention to eventual conflicts (contradictions, incompatibilities, paradoxes, tensions) and gaps (questions, mysteries); and by extension, other forms of oppositional relations.

b) To the Law of Non-contradiction, corresponds the policy of rejection of contradictions. Contradictions occur in our knowledge through errors of processing of some kind (e.g. over-generalization, uncontrolled adduction, unsuccessful guessing), which is ultimately due to the gradual presentation of information to the human observer and to his limited, inductive cognitive means. The Law is an insight that such occurrence, once clearly realized, is to be regarded not as a confirmation that contradiction can occur in reality, but as a signal that a mere illusion is taking place that must be rejected.

c) To the Law of the Excluded Middle, corresponds the process of searching for gaps or conflicts in knowledge and pursuing their resolution. This is the most dynamic cognitive activity, an important engine in the development of knowledge. And when a contradiction or even an uncertainty arises, it is this impulse of the human thinking apparatus that acts to ask and answer the implicit questions, so as to maintain a healthy harmony in one’s knowledge.

Thus, the exercise of logic depends very much on the human will, to adopt an attitude of factualism and resolve to check for consistency, look for further information and issues, and correct any errors found. The psychological result of such positive practices, coupled with opportunity and creativity, is increasing knowledge and clarity. The contraries of the above are avoidance or evasion of fact, acceptance of contradictions, and stupidity and laziness. The overall result of such illogical practices is ignorance and confusion.

Whereas ‘consciousness’ refers to the essentially static manifestation of a Subject-Object relation, ‘thought’ is an activity with an aim (knowledge and decision-making). The responsibility of the thinker for his thought processes exists not only at the fundamental level of the three Laws, but at every level of detail, in every cognitive act. Reasoning is never mechanical. To see what goes on around us, we must turn our heads and focus our eyes. To form a concept or formulate a proposition or construct an argument or make an experiment or test a hypothesis, we have to make an effort. The more attentive and careful our cognitive efforts, the more successful they are likely to be.

4. Syllogism Adds to Knowledge

People generally associate logic with deduction, due perhaps to the historic weight of Aristotelian logic. But closer scrutiny shows that human discourse is largely inductive, with deduction as but one tool among others in the toolbox, albeit an essential one. This is evident even in the case of Aristotelian syllogism.

A classic criticism of syllogistic logic (by J. S. Mill and others) is that it is essentially circular argument, which adds nothing to knowledge, since (in the first figure) the conclusion is already presumed in the major premise. For example:

All men are mortal (major premise)

Caius is a man (minor premise)

therefore, Caius is mortal (conclusion)

But this criticism paints a misleading picture of the role of the argument, due to the erroneous belief that universal propositions are based on “complete enumeration” of cases[13]. Let us consider each of the three propositions in it.

Now, our major premise, being a universal proposition, may be either:

(a) axiomatic, in the sense of a self-evident proposition (one whose contradictory is self-contradictory, i.e. paradoxical), or

(b) inductive, in the way of a generalization from particular observations or a hypothesis selected by adduction, or

(c) deductive, in the sense of inferred by eduction or syllogism from one of the preceding.

If our major premise is (a), it is obviously not inferred from the minor premise or the conclusion. If (b), it is at best probable, and that probability could only be incrementally improved by the minor premise or conclusion. And if it is (c), its reliability depends on the probability of the premises in the preceding argument, which will reclassify it as (a) or (b).

Our minor premise, being a singular (or particular) proposition, may be either:

(a) purely empirical, in the sense of evident by mere observation (such propositions have to underlie knowledge), or

(b) inductive, i.e. involving not only observations but a more or less conscious complex of judgments that include some generalization and adduction, or

(c) deductive, being inferred by eduction or syllogism from one of the preceding.

If our minor premise is (a), it is obviously not inferred from any other proposition. If (b), it is at best probable, and that probability could only be incrementally improved by the conclusion. And if it is (c), its reliability depends on the probability of the premises in the preceding argument, which will reclassify it as (a) or (b).

It follows from this analysis that the putative conclusion was derived from the premises and was not used in constructing them. In case (a), the conclusion is as certain as the premises. In case (b), the putative conclusion may be viewed as apredictionderived from the inductions involved in the premises. The conclusion is in neither case the basis of either premise, contrary to the said critics. The premises were known temporally before the conclusion was known.

The deductive aspect of the argument is that granting the premises, the conclusion would follow. But the inductive aspect is that the conclusion is no more probable than the premises. Since the premises are inductive, the conclusion is so too, even though their relationship is deductive. The purpose of the argument is not to repeat information in the premises, but to verify that the premises are not too broad. The conclusion will be tested empirically; if it is confirmed, it will strengthen the premises, broaden their empirical basis; if it is rejected, it will cause rejection of one or both premise(s).
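To picture this inductive use of deduction, here is a small sketch (the probability figures are invented for illustration; min() is used only as a simple stand-in for 'no more probable than the premises'):

```python
# premises, each carrying a tentative inductive rating
major = ("All men are mortal", 0.99)  # a generalization, not a complete enumeration
minor = ("Caius is a man", 0.95)      # an observation-based classification

# deduction: the conclusion follows formally, but inherits the premises'
# inductive status - it is a testable prediction, not a restatement
conclusion = ("Caius is mortal", min(major[1], minor[1]))

# empirical test: confirmation broadens the premises' empirical basis;
# disconfirmation forces rejection or revision of one or both premises
print(conclusion)
```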

In our example, conveniently, Caius couldn’t be proved to be mortal, although apparently human, till he was dead. While he was alive, therefore, the generalization in the major premise couldn’t be based on Caius’ mortality. Rather, we could assume Caius mortal (with some probability – a high one in this instance) due to the credibility of the premises. When, finally, Caius died and was seen to die, he joined the ranks of people adductively confirming the major premise. He passed from the status of reasoned case to that of empirical case.

Thus, the said modern criticism of syllogism (and by extension, other forms of “deductive” argument) is not justified. Syllogism is a deductive procedure all right, but it is usually used in the service of inductive activities. Without our ability to establish deductive relations between propositions, our inductive capabilities would be much reduced. All pursuit of knowledge is induction; deduction is one link in the chain of the inductive process.

It should be noted that in addition to the above-mentioned processes involved in syllogism, we have to take into account yet deeper processes that are tacitly assumed in such argumentation. For instance, terms imply classification, which implies comparison, which mostly includes a problematic reliance on memory (insofar as past and present cases are compared), as well as perceptual and conceptual powers, and which ontologically raises the issue of universals. Or again, prediction often refers to future cases, and this raises philosophical questions, like the nature of time.

The approach adopted above may be categorized as more epistemological than purely logical. It was not sufficiently stressed in my Future Logic.

5. There is a Formal Logic of Change

In an article in the December 1997 issue of Network[14], “Goethe’s Organic Vision”, Bortoft[15] exposes the limitation of modern scientific thinking to static relations, and how it could have been avoided had we paid more attention to Goethe’s[16] more dynamic way of looking at things.

Bortoft argues, in effect, that when science adopted its mathematical approach to the description of nature, as of the 18th Century under Neoplatonistic influences, in its enthusiasm it missed out on a valuable epistemological opportunity which Goethe had presented it.

The latter, in his The Metamorphosis of Plants, considers that “it may be possible out of one form to develop all plant forms”. Bortoft explains that this was not meant to be interpreted, as it has been by many, as a search for the commonalities of plant organs (and plants) – but rather, as Rudolf Steiner[17] had done, as an attempt to capture a supposed biological transformation of some original unitary organ (or plant) into a multiplicity of organs (or plants).

That is, Goethe was not referring to Platonic universals concerning a ‘finished product’, but to a living process. He was looking for the multiplicity ‘emerging from an original unity’, rather than for a ‘unity underlying multiplicity’.

I want here to let it be known that the linguistic/logical tools needed to implement Goethe’s programme already exist. Propositional forms through which to verbally express change (including metamorphosis), and the deductive logic (oppositions, syllogism, etc.) concerning such forms, have already been worked out in considerable detail in my work Future Logic[18].

Aristotle had, in his treatises on logic, crystallized and surpassed the work of his predecessors, and in particular that of his teacher Plato, by formalizing the language of classification and the reasoning processes attending it.

The common characters (including behaviors[19]) of things were expressed as predicates of subjects, in categorical propositions of the form “X is Y” (where X, Y… stood for universals). The relation expressed by the copula ‘is’ was clarified in the various deductive processes, and in particular by syllogism such as “if X is Y and Y is Z, then X is Z”. This is all well known, no need for more detail.

While Aristotle limited his formal treatment to such static relations, essentially the relations between particulars, species and genera, he did in his other works investigate change informally in great detail. He was bound to do so, in view of the interest the issues surrounding it had aroused in Greek philosophy since its beginnings. His approach to change was, by the way, distinguished by his special interest in biology.

What concerns us here is the distinction between being and becoming, which Aristotle so ably discussed.

In “X is Y”, a thing which is X is also Y – it has both characters at once, in a static relation expressed by the copula of being (is). In contrast, in “X becomes Y”, the particular in question is at first X and at last Y, but not both at once; it ceases being X and comes to be Y, it undergoes change – the copula of becoming expresses a dynamic relation.

The latter copula can easily be subjected to the same kind of logical analysis as was done for the simpler case. The formal treatment in question may be found, as I said, in my above-mentioned work[20]. What I want to stress here is the significance of the introduction of propositions concerning change into formal logic.

Our philosophical view of classification has been distorted simply because Aristotle stopped his logical investigations where he did. Perhaps given more time he would have pursued his research and extended our vision beyond the statics of classification into its dynamics.

For, finally, it is very obvious that things do not just fall under classes once and forever, but they also pass over from one class to another.

And this is true not just in biology, but in all fields. The baby I was once became an older man. The water used in the electrolytic process became hydrogen and oxygen. Logicians have no need to invent a special language, and there is nothing artificial in considering changes in subsumption. We all, laymen and scientists, speak the language already and reason with it all the time.

No change of paradigm is called for, no metaphysical complexities, note well. The only problem is that philosophers have lagged behind in their awareness of the phenomenon. Nothing said here invalidates the static approach; we merely have to enrich it with awareness of the dynamic side.

Let me add, in conclusion, that Bortoft’s article has made me realize that the subject term (X) of “X becomes Y” may be seen as a sort of ‘genus’ in relation to the predicate term (Y)[21]. For, in addition to reawakening us to the dynamic aspects of our world, Goethe is pointing out[22] that the root form, the common historical source of present forms, has a unifying effect, distinct from that of mere similarities in present characteristics.

Upon reflection we see that here it is not “X” per se which is a genus, but the derivative term “came out of X”, which is obviously different in its logical properties. After an X becomes a Y, we can classify that Y under the heading of things that came out of an X (though not under things X). The closer study of this more complex predicate, involving both tense and course of change, would constitute an enlargement of class logic.

For evidently, a broad consideration of class logic has to grant a distinct existence and identity to terms which are not only present and attributive (is X), but past (was X) or future (will be X) in the mutative (came out of X, will come out of X) or alterative (got to be out of X, will get to be out of X) senses. For each of these terms is legitimate (and oft-used in practice) and sure to have its own behavior patterns[23].
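A crude sketch of how such tensed and mutative class-terms behave differently (the history-list encoding, and the caterpillar example, are assumptions made for illustration):

```python
def is_now(history, x):
    """Present attributive term: the thing's current form is x."""
    return history[-1] == x

def came_out_of(history, x):
    """Mutative term: the thing was x at some earlier stage and changed."""
    return x in history[:-1]

forms = ["caterpillar", "chrysalis", "butterfly"]  # a thing's course of change
assert is_now(forms, "butterfly")
assert not is_now(forms, "caterpillar")       # not classed under 'caterpillar'...
assert came_out_of(forms, "caterpillar")      # ...but under 'came out of a caterpillar'
```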

The scope of class logic studies has so far been limited so as to simplify the problem; but once the simpler cases are dealt with, we are obliged to dig deeper and try and give an account of all forms of human reasoning.

6. Concept Formation

Many philosophers give the impression that a concept is formed simply by pronouncing a clear definition and then considering what referents it applies to. This belief gives rise to misleading doctrines, like Kant’s idea that definitions are arbitrary and tautologous. For this reason, it is important to understand more fully how concepts arise in practice[24]. There are in fact two ways concepts are formed:

a) Deductive concepts. Some concepts indeed start with reference to a selected attribute found to occur in some things (or invented, by mental conjunction of separately experienced attributes). The attribute defines the concept once and for all, after which we look around and verify what things it applies to (if any, in the case of inventions) and what things lack it. Such concepts might be labeled ‘deductive’, in that their definition is fixed. Of course, insofar as such concepts depend on experiential input (observation of an attribute, or of the attributes imagined conjoined), they are not purely deductive.

Note in passing the distinction between deductive concepts based on some observed attribute(s), and those based on an imagined conjunction of observed attributes. The former necessarily have some real referents, whereas the latter may or may not have referents. The imagined definition may turn out by observation or experiment to have been a good prediction; or nothing may ever be found that matches what it projects. Such fictions may of course have from the start been intended for fun, without expectation of concretization; but sometimes we do seriously look for corresponding entities (e.g. an elementary particle).

b) Inductive concepts. But there are other sorts of concepts, which develop more gradually and by insight. We observe a group of things that seem to have something in common, we know not immediately quite what. We first label the group of things with a distinct name, thus conventionally binding them together for further consideration. This name has certain referents, more or less recognizable by insight, but not yet a definition! Secondly, we look for the common attribute(s) that may be used as definition, so as to bind the referents together in our minds in a factual (not conventional, but natural) way. The latter is a trial and error, inductive process.

We begin it by more closely observing the specimens under consideration, in a bid to discern some of their attributes. One of these attributes, or a set of them, may then stand out as common to all the specimens, and be proposed as the group’s definition. Later, this assumption may be found false, when a previously unnoticed specimen is taken into consideration, which intuitively fits into the group, but does not have the attribute(s) required to fit into the postulated definition. This may go on and on for quite a while, until we manage to pinpoint the precise attribute or cluster of attributes that can fulfill the role of definition.
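This trial-and-error search for a definition can be caricatured as set intersection (a toy sketch with invented specimens; real concept formation is of course far less mechanical):

```python
def candidate_definition(specimens):
    """Propose as definition the attributes common to all specimens so far."""
    return set.intersection(*specimens) if specimens else set()

group = [{"feathers", "wings", "flies"},
         {"feathers", "wings", "swims"}]
print(candidate_definition(group))  # {'feathers', 'wings'}

# a newly noticed specimen that intuitively belongs but lacks 'wings'
group.append({"feathers", "runs"})
print(candidate_definition(group))  # {'feathers'} - the definition is revised
```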

I would say that the majority of concepts are inductive, rather than deductive. That is, they do not begin with a clear and fixed definition, but start with a vague notion and gradually tend towards a clearer concept. It is important for philosophers and logicians to remember this fact.

7. Empty Classes

The concept of empty or null classes is very much a logical positivist construct. According to that school, you need only ‘define’ a class, and you can leave to later determination the issue as to whether it has referents or is ‘null’. The conceptual vector is divorced from the empirical vector.

What happens in practice is that an imaginary entity (or a complex of experience, logical insight and imagination) is classified without due notice of its imaginary aspect(s). A budding concept is prematurely packaged, one could say, or inadequately labeled. Had we paid a little more attention or made a few extra efforts of verification, we would have quickly noted the inadequacies or difficulties in the concept. We would not have ‘defined’ the concept so easily and clumsily in the first place, and thus not found it to be a ‘null class’.

One ought not, or as little as possible, build up one’s knowledge by the postulation of fanciful classes, to be later found ‘empty’ of referents. One should rather seek to examine one’s concepts carefully from the start. Though of course in practice the task is rather to reexamine seemingly cut-and-dried concepts.

I am not saying that we do not have null classes in our cognitive processes. Quite the contrary, we have throughout history produced classes of imaginary entities later recognized as non-existent. Take ‘Pegasus’ – I presume some of the people who imagined this entity believed it existed, or perhaps children do for a while. They had an image of a horse with wings, but eventually found it to be a myth.

However, as a myth, it survives, as a receptacle for thousands of symbolizations or playful associations, which perhaps have a function in the life of the mind. It is thus very difficult to call ‘Pegasus’ a null-class. Strictly speaking, it is one, since there were never ‘flying horses’. But in another sense, as the recipient of every use of the word Pegasus, or every mental reference to the image of a flying horse, it is not an empty class. It is full of incidental ‘entities’, which are not flying horses but have to do with the names or images of the flying horse – events of consciousness which are rather grouped by a common symbol.

Mythical concepts in this sense are discussed by Michel Foucault in his Order of Things.

We can further buttress the non-emptiness of imaginary concepts by reminding ourselves that today’s imaginations may tomorrow turn out to have been realistic. Or, getting more philosophical, we can still today imagine a scenario for ourselves, consistent with all experience and logical checks, in which ‘Pegasus’ has a place as a ‘real’ entity, or a concept with real referents. Perhaps one day, as a result of genetic manipulations.

Another example interesting to note is that of a born-blind person, who supposedly lacks even imaginary experience of sights, talking of shape or color. Such words are, for that person, purely null-classes, since not based on any idea, inner any more than outer, as to what they are intended to refer to, but on mere hearsay and mimicry. Here again, some surgical operation might conceivably give that person sight, at which time the words would acquire meaning.

But of course, there are many concepts in our minds, at all times, which are bound to be out of phase with the world around since we are cognitively limited anyway. It follows that the distinction here suggested, between direct reference and indirect (symbolic – verbal or pictorial) reference, must be viewed as having gradations, with seemingly direct or seemingly indirect in-betweens.

Furthermore, we can give the cognitive advice that one should avoid conceptualization practices that unnecessarily multiply null-classes (a sort of corollary of Ockham’s Razor). Before ‘defining’ some new class, do a little research and reflection; it is a more efficient approach in the long run.

One should also endeavor to distinguish between ‘realistic’ concepts and ‘imaginary’ concepts, whenever possible, so that though the latter be null classes strictly speaking, their mentally subsisting elements, the indirect references, may be registered in a fitting manner. Of course, realistic concepts may later be found imaginary and vice-versa; we must remain supple in such categorizations.

Imaginary concepts are distinguished as complexes involving not only perception and conception, but alsocreativity. The precise role of the latter faculty must be kept in mind. We must estimate the varying part played by projection in each concept over time. This, of course, is nothing new to logic, but a restatement for this particular context of something well known in general.

8. Context[25]

We may here refer to as a ‘text’ any word, phrase, sentence or collection of sentences, or indeed any meaningful symbol (such as a traffic sign or a Chinese character[26]). A text may be explicit in thought, speech or writing; or it may be implicit, yet to be made explicit. When two or more texts come together in a body of knowledge, or in a selected framework under consideration, they form a combined text, and each text is said to be taken ‘in the context of’ the other text(s) present or under consideration. Note also: If a text logically implies some other text or parts of a text, the latter text or parts is/are called a ‘subtext’ of the former.

Each text taken alone carries with it a certain range of meaning or semantic charge, which is all the possible intentions or interpretations inherent in it, with reference to all possible contexts. This is of course a theoretical notion, since we are never omniscient: it is an open-ended concept; as our knowledge develops, more and more of these possible meanings come to light. Nonetheless, we can represent this eventual totality as a circle for the sake of argument. Thus, contextuality can to some extent be illustrated as the intersection between two (or more) such circles of meaning, as in the diagram below.
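
[Diagram: two overlapping circles, one for each text’s range of meaning, with their area of intersection representing the meaning of the combined text.]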

Obviously, the texts must be compatible, to give rise to a combined text[27]. As this diagram makes clear, the intersection of texts may not give rise to just one joint meaning (a point); it may well give rise to a range of meanings (an area, though one smaller than the original areas). The meaning(s) that they share is/are their compatibility, and the areas outside their intersection are their distinctions and incompatibilities. Note that some, perhaps most, of the “meanings” under consideration are bound to be experiential (actual or at least potential experiences): they are far from entirely conceptual.
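The set-theoretic flavor of this picture can be made explicit in a toy sketch (the ‘meanings’ are placeholders, of course, not an analysis of any actual text):

```python
# each text's semantic charge modeled as a set of possible meanings
text_A = {"meaning1", "meaning2", "meaning3"}
text_B = {"meaning2", "meaning3", "meaning4"}

combined = text_A & text_B  # {'meaning2', 'meaning3'}: an area, not a point
assert combined, "incompatible texts would have an empty intersection"
```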

But, the essence of contextuality is the mutual impact that combined texts have on each other. When two texts intertwine, if the meaning of neither of them is apparently affected by the presence of the other text, they cannot be regarded as constituting a context for each other. Contextuality is joint causation, by the combination of texts, of some new, or more specific, meaning. The combined text has a semantic charge somewhat different from the separate texts that constitute it. Either some “new” meaning is caused to appear for us by such fusion (i.e. though it was in the theoretical semantic charge, we were not yet made aware of it in actuality); or though the meaning was foreseen as potential, the fusion of texts has narrowed down the scope of possibilities and so brought that meaning to the fore or into sharper focus.

A one-word text has a broad range of potential meanings (all its eventual denotations and connotations, now known or not yet known). When you combine it with other words, in a phrase or sentence, you inevitably fine-tune its range of meanings, since only its occurrences in such conjunction are henceforth under consideration. But if you had not till now been aware that this word was combinable with those others, the moment of discovery was an enrichment of meaning for that word, as far as you are concerned. The fine-tuning aspect may be viewed as “deductive”; the enriching aspect may be viewed as “inductive”.

In this way, bringing texts together in thought or common discourse serves to naturally enlighten us as to their meanings, to increase our understanding or the precision of our insights. This is no mystical event, but a natural consequence of logic, an operation of the reasoning faculty. And by logic, here, understand inductive as well as deductive logic. After all, what is the whole thrust of this science – its analysis of the forms (categorical, conditional, etc.) and processes (oppositions, eductions, syllogisms, adductions) – but to evaluate once and for all the effect of terms and propositions on each other?

A formal example is syllogism. The premises are two texts, say “X is Y” and “Y is Z”, and the conclusion “X is Z” is the context, i.e. the common ground (or part of it) of meaning in them. Each text in isolation includes this proposition (X is Z) and possibly its opposite. But when the two are brought together, this meaning (X is Z) in them is selected.

Of course, some mystery remains. We may well wonder at the ultimate universality of logical insight. Contrary to the beliefs of certain naïve logicians, it is not by means of conventions that reason keeps us in sane contact with experience. It is rather a sort of orderliness, by careful attention to the laws of thought. It is an ethical choice and habit, not a compulsion. Many people fail in this duty of sanity much of the time, and most people do so some of the time (hurting themselves and others).

9. Communication

Logic and language are used primarily for individual thought, and only thereafter for communication between individuals and in groups. Some logicians and linguists seem to forget that, and stress their social aspect, citing the facts of biological evolution. There is no denying that the physiological organs that make human speech possible had to evolve before language could occur. It is also doubtless that the existence of social groups with common experiences and survival goals greatly stimulated the development of verbal discourse. Nevertheless, it is logically unthinkable that any social communication occur without there being first an equivalent movement of thought within the individual mind.

Moreover (as I explain earlier, in chapter 3.2), before verbal thought or dialogue there has to be intention. Words are phenomenal, first occurring in the way of sounds and images in the mind, whether they are taught by society or personally invented. Preverbal thought is intuitive: it is the self-knowledge of what experiences or abstractions we personally intend to refer to or understand by the words used or encountered. Before a logical insight is put into words, it occurs silently and invisibly, as something introspectively evident. To grasp the meanings we attach to words, we range far and wide in our present and past experiences and reasoning. All the factors thus scanned, which effectively contribute to the meaning of a text, are its ‘context’ for the individual concerned.

With regard to communication between people (or even with animals), additional factors must be taken into consideration. First, we have to note the empirical facts that, to all appearance, communication is sometimes successful and sometimes not. Both these facts are significant.

Secondly, successful communication may seemingly be nonverbal as well as verbal. Some nonverbal discourse occurs in the way of facial expressions, bodily gestures, tonalities of voice, etc. – this is still phenomenal, indeed material, communication, which largely relies on the common behavior patterns of individuals, and in particular the similarity of their emotional reactions. If I shout angrily or wail despairingly, you recognize the sounds as similar to those you emit when you have these emotions, and you assume I am having the same emotions (or occasionally, pretending to have them).

There may also exist nonverbal communication based on telepathy, i.e. apparently on a non-material vehicle, though possibly through some material field (e.g. electromagnetic waves). Thoughts might alternatively be transported in some shared mental domain; or telepathy might even be non-phenomenal, based on the possibility of intuition into other people’s souls as well as our own. I tend to believe in telepathy (whatever its means), but readily admit that such a conjecture is not at present scientifically detected and justified. It is mentioned here in passing.

With regard to verbal communication between two (or more) players, the following is worth mentioning. It may be oral (speech) or visual (writing, alphabetical or using other symbols). In the case of speech, the emitter is a speaker and the receiver is an auditor. In the case of writing, we have a writer and a reader. There are different (variously related) languages, and even the same language is not necessarily fully shared. Obviously, both the players must have (part of) a language in common for verbal communication to occur at all.

Inevitably, two people who share the same text do not have exactly the same context for it. They may have both had a certain experience, but their perspectives and memories of it are likely to differ. They may both know and use a word or concept, but it means somewhat different things to them. They may agree on certain beliefs or principles, but understand them variously. For example, the word “logic” means different things to two logicians, and all the more so to a logician and a layperson. Or again, a scientist’s idea of “intellectual honesty” and that of a journalist are very different.

This brings us, thirdly, to the complexities of communication: the difficulty of transmitting what one intends to mean and that of interpreting what was meant. The one making a statement (call him or her A) may wish to reveal something and/or to conceal something; the intent may be sincere and transparent, or manipulative and distortive. The one interpreting the statement (call him or her B) must, as well as understanding its content at face value, critically evaluate its honesty or dishonesty. For both parties, both deductive and inductive aspects are involved.

A may call upon B to remember certain common experiences or to believe some reported experiences, to form certain concepts and propositions from them, and to draw certain deductive and inductive inferences from them. To achieve this end, A must guess what B knows or does not know, and how intelligent he or she is, and tailor the statement accordingly.

For example, a teacher may want to ensure the transmission of knowledge by adding more information or explanation, giving students sufficient indices so that there will be no misunderstanding. Or for example, a biased TV news team may slant a “report” by filming or showing only certain aspects of an event, and they may air with it comments that are either explicitly tendentious or that serve their aims through a cunning choice of words and tone of voice, or they may simply add background music that produces the desired emotional reaction of sympathy or rejection.

On the other side, B has to guess, or more or less systematically estimate, what A intended by the statement made, and how reliable a witness A is. This may involve looking into one’s memory banks for matching or conflicting personal experiences, researching in other sources (looking in a dictionary, the public library or the Internet, or interviewing people around one), thinking for oneself, spotting contradictions, using syllogisms, trying and testing different hypotheses, and so forth. This sort of inner discourse goes on usually unconsciously all day long when we are dealing with people, trying to understand their words and deeds.



[1] This essay was written back in 1990, soon after I completed Future Logic, so that I could not include its clarifications in that book. All the other topics in this chapter were developed later, in 1997.

[2] A theory that implies both P and nonP is inconsistent and therefore false. If that result seems inappropriate, then the claim that T implies P or that T implies nonP or both must be reviewed.

[3] This alternative is incompatible with it, i.e. they cannot both be true.

[4] For example, ‘it is white’ and ‘it is black’ are too vague to be incompatible. We might not realize this immediately, till we remember that some things are both black and white, i.e. partly the one and partly the other. Then we would say more precisely ‘it is white and not black’ or ‘it is wholly black’, to facilitate subsequent testing. Of course, our knowledge that some things are both black and white is the product of previous experience; in formulating our theses accordingly, we merely short cut settled issues.

[5] The disjunction ‘T or nonT’ may be viewed as a special case of this. But also, ‘T1 or T2 or T3 or…’ may always be recast as ‘T1 or nonT1’, where nonT1 is equivalent to ‘T2 or T3 or…’.

[6] Such bare events impinge on our mind all the time. A skilful knower is one who has trained himself or herself to distinguish primary phenomena from later constructs involving them. Sometimes such distinction is only possible ex post facto, after discovery of erroneous consequences of past failures in this art.

[7] A prediction is only significant, useful to deciding between theories, if it is, as well as consistent, testable empirically; otherwise, it is just hot air, mere assertion, a cover or embellishment for speculations. The process of testing cannot rest content at some convenient stage, but must perpetually put ideas in question, to ensure ever greater credibility.

[8] Note that correct prediction by a theory does not imply proof of the theory (since ‘T predicts P’ does not imply ‘nonT predicts nonP’), nor even exclude correct prediction by the contradictory theory (since ‘nonT predicts P’ is compatible). It ‘confirms’ the theory only if the contradictory theory may be ‘undermined’ (i.e. if ‘nonT is neutral to P’), otherwise both the theory and its contradictory are unaffected.

[9] The domain of probability rating may be further complicated by reference to different degrees of implication, instead of just to strict implication. T may ‘probably imply’ P, for instance, and this formal possibility gives rise to further nuances in the computation of probabilities of theories.

[10] Note that if both T and nonT predict P, then P is bound to occur; i.e. if the implications are logically incontrovertible, then P is necessary. If we nonetheless find nonP to occur and thus our predictions false, we are faced with a paradox. To resolve it, we must verify our observation of nonP and our implications of P by both T and nonT. Inevitably, either the observation or one or both implications (or the assumptions that led us to them) will be found erroneous, by the law of non-contradiction.

[11] At least temporarily; we may later find reason to eliminate T1, which would mean that our list of theories was not complete and a further alternative Tn must be formulated.

[12] Thus correct prediction, though not identical with confirmation, is ‘potential’ confirmation, etc.

[13] In a way Aristotle brought this criticism upon himself, since he first apparently suggested that universal propositions are based on complete enumeration. But of course, in practice we almost never (except in very artificial situations where we ourselves conventionally define a group as complete) encounter completely enumerable groups. Our concepts are normally open-ended, with a potentially “infinite” population that we can never even in theory hope to come across (since some of it may be in the past or future, or in some other solar system or galaxy)!

[14] My present comments were written in 1998.

[15] Author of The Wholeness of Nature; Goethe’s Way of Science (Floris Press, 1996).

[16] Johann Wolfgang von Goethe (Germany, 1749-1832).

[17] In Goethe’s World View (1897).

[18] See especially chapter 17.

[19] That is, an action or activity can be counted as a quality in this context; e.g. footballers.

[20] There I also deal with other forms of change. ‘Becoming’ refers to mutation (or metamorphosis or radical change), but we must also consider alteration (or superficial change), for which I use the expression ‘getting to be’ as copula, note. (I saw the elucidation of this language and area of logic as essential to discourse in evolutionary theory, for instance.)

[21] Note well this reverses the roles in “X is Y”, where Y is usually seen as a genus of X (if all X are Y, to be more precise).

[22] It is irrelevant how far today’s biologists agree with Goethe’s specific thesis; we are merely concerned with the philosophical aspects here.

[23] Certainly, a member of “now X” is not necessarily a member of “previously X” or of “subsequently X”, all the more so if we consider the different kinds of change which may underlie the qualifications ‘previously’ or ‘subsequently’. Such study ought, perhaps, start by considering the converse issue — the logical properties of the tenses of mutation (became, will become Y) and alteration (got to be, will get to be Y).

[24] See also my Future Logic, chapter 4.4, and other comments on this topic scattered in my works. The present comments were written in 2002, so as to clarify the next section, about empty classes. The ultimate null class is, of course, ‘non-existence’!

[25] See also Future Logic, chapter 22.

[26] In contrast to the letters of an alphabet, which are intended as semantically empty.

[27] When two texts are incompatible, and it is not clear which of the two is to be abandoned, they remain in knowledge “temporarily” as an unsolved problem (i.e. both become problematic to a greater degree than previously). When one text is preferred to the other, for whatever reasons, clearly the negation of the latter becomes a context for the former, as do the reasons for the preference.
