
Key ideas: Published in 2014. Author: Harry Binswanger.
Epistemology, the theory of knowledge, is the branch of philosophy that defines the nature, means, and standards of knowledge. Epistemology deals with the crucial questions: What is knowledge? How is it acquired? How is it validated? Since knowledge is man’s means of dealing with reality, a man attempting to function on an irrational epistemology is unequipped to deal with reality, dooming himself to doubt, confusion, and failure. ….
Our technological success has come from a dedication to reason and logic, but reason and logic have been distorted or openly attacked by mainstream epistemologists for the last 200 years, ever since Kant’s theory of knowledge gained dominance in the intellectual world. Establishment epistemology has carried to its logical conclusion Kant’s claim that reason cannot know reality. The result has been two schools of thought, one that accepts reason while ignoring reality, and one that accepts reality while denying reason.
Rationalism is the school that scorns sensory perception and constructs intellectual castles in the air. Empiricism is the school that scorns abstractions and demands that men hold their minds down to the animal level of unconceptualized, unintegrated sensing. Rationalism ultimately degenerates into mysticism, as in its ancient father: Plato. Empiricism ultimately degenerates into skepticism, as in its modern father: Hume. …
Understanding how knowledge is acquired and validated enables one to bring the cognitive quest under one’s conscious control and direction, equipping him to succeed in acquiring knowledge, to avoid whole categories of error, and to reach objective certainty in his conclusions. …
Knowledge is a product of the wider faculty: consciousness. If one adopts the causal-biological perspective on consciousness, and applies it to each of the different functions and levels of awareness, one can gain a crucial, even life-altering, understanding of the mind and its cognitive needs.
The misunderstandings of consciousness that have wreaked havoc on the history of philosophy, making philosophy appear irrelevant to daily life, all stem from taking consciousness to be non-causal and non-biological — or even, in the latest aberration, non-existent. But consciousness exists, and it functions according to its nature. Refusing to recognize its existence and its identity makes men mysterious to themselves. It turns men, in Rand’s graphic phrase, into “prisoners inside their own skulls.”
To gain self-understanding, one must understand the essence of the self: one’s mind.
Ayn Rand’s characterization of knowledge summarizes this, and states the basic means by which knowledge is acquired: Knowledge is “a mental grasp of a fact(s) of reality, reached either by perceptual observation or by a process of reason based on perceptual observation.”
Knowledge is of facts of reality, i.e., aspects of existence. The basis and starting point of all knowledge is the fact that there is a world to be known. Or, in Rand’s indelible statement, “Existence exists.” …
“Existence exists” is a formulation of what is self-evident. “Self-evident” means: available to direct awareness. …
“Existence exists” is not a derivative or restricted truth but an axiom: a fundamental, primary, self-evident truth implicitly contained in all knowledge.
Some people demand that axioms be proved. But such a demand fails to grasp what proof is. ….
As Aristotle observed, it is illogical to hold that absolutely everything has to be proved. Proof is indispensable when direct observation is not available. But proof is neither necessary nor possible in regard to the basic information on which all knowledge is based: perceptual data. As important as proof is, it is the secondary, not the primary, means of validating ideas. The primary means is direct awareness.
Self-evidencies, directly perceived facts, are what make proof possible. To state the point in an extreme form: proof is what we resort to when something is not self-evident.
And let us ask: why does proof prove? What makes it “work”? Proof establishes an idea by connecting it to the directly perceived, the self-evident. To demand, therefore, a proof of the self-evident is an absurd reversal.
Consciousness is the faculty of awareness….
An axiom is a truth that is cognitively primary, self-evident, and stands at the basis of knowledge. It is easy to show that “I am conscious” possesses each of those four characteristics of axiomaticity.
The fact that one is conscious is a truth; indeed consciousness, like existence, is a foundation of truth: truth pertains to a certain relationship between consciousness and reality. External existents, apart from any relationship they have to consciousness, are not “true” or “false” — they just are; truth and falsehood pertain to something mental — an idea, statement, proposition — in its relation to the external world.
The fact that one is conscious is a cognitive primary: for newborns, or even in the womb, cognition begins with being conscious of something — of a pressure, a temperature, etc. — and there is no cognition prior to being conscious.
The fact that one is conscious is self-evident: it is directly experienced, not inferred. To be sure, one’s direct awareness of one’s consciousness is not sensory, as one’s direct awareness of existence is. One does not see or touch or taste one’s awareness. But, from a very young age, one is directly, non-inferentially aware that in sense-perception, and later in more complex mental activities, one is aware. Self-awareness is a given for man.
The fact that one is conscious is the base of all knowledge: knowledge is a phenomenon of consciousness. To know by means of unconsciousness is a contradiction in terms. Roses and rocks do not have knowledge.
A proposition is a statement, such as “Cats are animals.” A proposition applies some predicate (e.g., “animal”) to some subject (e.g., cats). Thus, the general formula for a proposition is “S is P.” To form or grasp a proposition, one has to know what the S and the P refer to — i.e., one has to have the concept of “S” and the concept of “P.” If a child does not have the concept “cat” and the concept “animal,” he cannot form or understand the proposition “Cats are animals.” Propositions presuppose concepts. Axiomatic propositions presuppose axiomatic concepts, with the implication Rand draws: “The base of man’s knowledge — of all other concepts, all axioms, propositions and thought — consists of axiomatic concepts.”
In particular, the base of the axiomatic proposition “Existence exists” is the axiomatic concept of “existence.” The base of the axiomatic proposition, “I am conscious” is the axiomatic concept of “consciousness.” …
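The dependence of propositions on previously formed concepts can be pictured in a small illustrative sketch (Python, with invented names such as ConceptualVocabulary; nothing here comes from Binswanger's text): a proposition of the form "S is P" can be formed only if both the S-concept and the P-concept are already held.

```python
# Toy model (not from the book): "propositions presuppose concepts."
# A proposition "S is P" can be formed only from concepts already held.

class ConceptualVocabulary:
    def __init__(self):
        self.concepts = set()

    def learn(self, concept: str) -> None:
        self.concepts.add(concept)

    def form_proposition(self, subject: str, predicate: str) -> str:
        for c in (subject, predicate):
            if c not in self.concepts:
                raise ValueError(f"cannot form the proposition: the concept {c!r} has not been formed")
        return f"{subject} is {predicate}"

child = ConceptualVocabulary()
child.learn("cat")
child.learn("animal")
print(child.form_proposition("cat", "animal"))    # "cat is animal"
# child.form_proposition("cat", "mammal")         # would fail: "mammal" not yet formed
```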
Axiomatic concepts, held implicitly, are thus the base of cognitive development. Rand writes:
The building-block of man’s knowledge is the concept of an “existent” — of something that exists, be it a thing, an attribute or an action. Since it is a concept, man cannot grasp it explicitly until he has reached the conceptual stage. But it is implicit in every percept (to perceive a thing is to perceive that it exists) and man grasps it implicitly on the perceptual level — i.e., he grasps the constituents of the concept “existent,” the data which are later to be integrated by that concept. It is this implicit knowledge that permits his consciousness to develop further.
Of the properties of consciousness, four are fundamental and undeniable.
The stolen concept fallacy consists of a certain kind of violation of the hierarchy of concepts. Concepts have to be formed in a certain order, and their meaningful use depends on not violating that order. A child’s first concepts, such as “dog,” are formed from sense-perception; then more advanced concepts, such as “animal” and “pet,” are formed on the basis of the prior concepts, creating a hierarchy, in which some concepts depend on others. For instance, a child cannot grasp “pet” before he grasps “animal.” (I am referring to the grasp of a concept, not merely the uttering of a word.)
But suppose a demented philosopher announces: “Pets exist, but animals do not.” In violating the necessary hierarchy of concepts, his statement wipes itself out. Obviously, if there are no animals, there are no pets, since a pet is “any domesticated or tamed animal that is kept as a favorite and cared for affectionately.” [Random House College Dictionary, 1980]
On the other hand, if one says only: “There are no such things as pets,” one has made a false statement but has not “stolen” any concept. The concept-stealing here occurs when one attempts to retain the concept “pet” while denying the hierarchically prior concept “animal.” Doing that “steals” the concept “pet” — i.e., uses “pet” without any logical right to do so.
The uniquely perverse nature of the fallacy of the stolen concept, in the form it usually occurs, is its attempt to use a concept in the very act of negating that concept’s own base — thereby sawing off the cognitive branch one is sitting on. Consider the statement: “It has been proved mathematically that there are no such things as numbers.” If there are no numbers, there is no science of mathematics and no such thing as a mathematical proof: one can grasp “mathematics” only on the basis of first grasping “one,” “two,” and other numerical concepts.
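The logical structure of the fallacy can be sketched in code, on the assumption that each concept is recorded together with the earlier concepts it depends on; the dependency table below is an invented toy, not a reconstruction of anyone's actual hierarchy. A statement "steals" a concept when it uses that concept while denying one of the concept's own bases.

```python
# Sketch: the "stolen concept" as a hierarchy violation. A concept is
# recorded with the earlier concepts it depends on; using a concept while
# denying one of its (direct or indirect) bases undercuts itself.

DEPENDS_ON = {                                  # toy table, not exhaustive
    "pet": {"animal"},
    "animal": {"dog"},                          # higher concepts rest on first-level ones
    "mathematical proof": {"mathematics"},
    "mathematics": {"number"},
}

def bases(concept: str) -> set:
    """All concepts that a given concept rests on, directly or indirectly."""
    result = set()
    for base in DEPENDS_ON.get(concept, ()):
        result.add(base)
        result |= bases(base)
    return result

def steals(used: str, denied: str) -> bool:
    """Does a statement use `used` while denying one of its own bases?"""
    return denied in bases(used)

print(steals("pet", "animal"))                 # True: "Pets exist, but animals do not."
print(steals("mathematical proof", "number"))  # True: "proved mathematically that there are no numbers"
print(steals("animal", "pet"))                 # False: denying "pet" leaves "animal" intact
```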
All the versions of the primacy of consciousness that litter the history of philosophy “steal” the concept of “consciousness,” or some particular concept pertaining to consciousness. For, as Rand observes, “It is only in relation to the external world that the various actions of a consciousness can be experienced, grasped, defined or communicated.”
To grasp “consciousness,” one must distinguish actions of consciousness from their objects — i.e., from things that exist independently. When a Cartesian says, “Maybe nothing exists outside my mind,” he has “stolen” the concept “mind,” depriving it of any meaning, just as if he had said, “Maybe the entire universe is indoors,” an utterance that renders “indoors” meaningless. …
The nonbiological perspective stands markedly revealed in the common question: is it possible to develop a computer that can think? My answer is: before a computer could think, it would have to be able to understand ideas (concepts); before it could understand ideas, it would have to be able to perceive the world and to feel emotions, such as pleasure and pain, desire and fear; before it could perceive and feel emotions, it would have to be alive — i.e., be engaged in action to sustain itself. We can dismiss notions about a thinking computer until one is built that is alive — and then it wouldn’t be a computer but a living organism, a man-made one. …
Biologically, consciousness is not a passive spectator; an organism’s consciousness controls the actions of its body. It is the efficacy of consciousness in guiding such actions that explains the selection-pressure that favored its evolutionary development. …
Consciousness is not an entity, not in the sense that a stone or an organism is. Consciousness is a faculty of an entity, a man or animal; the operation of consciousness is a process of that entity. …
The biological function of consciousness is to guide action, and the basic source of guidance is cognition. A cognitive process is one devoted to gaining information about reality. Cognitive activities range from an animal’s perception of the entities in its immediate environment to man’s complex processes of scientific investigation. However primitive or advanced, the cognitive functions of consciousness are directed toward providing awareness of what things are, of their identities. …
Sensory perception is an animal’s or man’s primary form of cognitive contact with the world. Knowledge begins with, develops out of, and is tested against sensory observation. This point is not self-evident, nor is it the view of cognition with which mankind began. Perception’s fundamentality was first identified by Aristotle, but that identification did not become widely accepted until almost 1500 years later, after the long night of the anti-senses Dark and Medieval ages. Even at the dawn of the scientific era, perceptual observation was attacked and derided. How could men like Copernicus and Galileo cast aside the revealed word of God? How could they trust “observations” that were the product of debased bodily senses, or imagine that their limited, finite intellects, without aid from God, could produce anything other than confused, conflicting opinions?
Over a span of centuries, through the writings of Thomas Aquinas (c. 1250), Francis Bacon (1620), and John Locke (1690), the Aristotelian view won out, and mankind entered the Enlightenment era, the Age of Reason. But a counter-attack was soon launched by — of all people — philosophers. Starting with Descartes and bottoming out with Kant, a prominent line of philosophers peddled a secularized version of the old religious notions. “I have therefore found it necessary to deny knowledge in order to make room for faith,” Kant wrote. …
… two fundamental points:
Sensory perception is the primary and basic form of cognitive contact with the world. An organism born entirely without sense organs would be unconscious. …
The axiomatic nature of sensory awareness is confirmed by the argument of re-affirmation through denial, the test of axiomaticity. To make any statement denying the senses, one has to understand the terms the statement uses — “senses,” “invalid,” etc. But the meaning of these terms is learned, directly or indirectly, on the basis of perception. Without the senses’ basic cognitive contact with reality, we could not have any concepts, including those used to claim that the senses are invalid. Thus, the attack on the senses constitutes concept-stealing on an unparalleled scale. Without perception, we would be unconscious, like vegetables; vegetables cannot ponder the validity of perception.
A “sensation,” as I use that term, is the most primitive form of conscious response, the response to energy impinging on receptors, not to objects in a perceived world. …
A sensation is a conscious response to stimulation at the receptors, and that response lasts only as long as the stimulus is applied. A sensation is thus stimulus-bound: it is a sense or feeling, in response to what is currently stimulating the receptors.
The higher animals have evolved a much more potent form of awareness: perception. There are a number of features that distinguish perception from mere sensations.
Perception is awareness of entities — of things (including their characteristics). Whereas the crayfish’s tail-spot only discriminates brightness from darkness, human vision provides man with awareness not of stimuli but of the objects in the world, the objects that are responsible for the patterns in the light received by the eye. We see trees, dogs, books, clouds — rather than just discriminating a general level of illumination. Human eyes, like the crayfish’s tail-spot, respond to light, but the human visual system is able to detect and exploit patterns in the light. …
A point essential to understanding perception is that perception is spatial; it presents a world of entities arrayed in space — i.e., in their relative positions. We do not perceive one isolated entity at a time, but a spread-out world of entities, each entity being discriminated from the others that are next to it. ….
The three-dimensional spatial array given in perception is what fundamentally distinguishes perception from sensation. It is not merely that perception (especially vision) gives entities, but also that perception provides the co-presence of all the entities that the animal can act on or be affected by. We see in one spread the entire scene of entities.
Perception gives us awareness of a world of entities extending out in all directions from “here,” i.e., from oneself. As one moves around in the perceived world, one’s vantage point, and hence “here,” moves, giving one a sense of one’s current place in the world. Thus, perception includes at least some sense of self. And, of course, one perceives one’s own limbs and trunk, and their spatial relation to the other things in the surroundings…
Perception is not a momentary, static impression but a continuous process over time. In the process of perceiving the world, the animal or man is an active, exploring observer. He scans his environment, moves through it, acts on it, and perceives the changes in the world that result.
To summarize in a preliminary definition: “Perception” is the ongoing awareness of entities in their relative positions, gained from actively acquired sensory inputs.
The preceding understanding of perception is radically at odds with most traditional views. According to most philosophers and psychologists, perceptions are constructed out of “sensations.” This approach, known as “sensationalism,” holds that when looking at an apple, we have now, or had in infancy, separate sensations of color, brightness, roundness, etc.; the mind or brain supposedly puts together those separate sensations into the sight of the apple.
This notion is completely mistaken. Perception is a unitary phenomenon; it does not have sensations or anything else as components. It is not the case that sensations are cognitive “atoms” out of which perception is built up, whether by the brain or the intellect. …
The error in sensationalism is reification: the fallacy of taking an aspect of a thing, grasped by mental analysis, as if it were an entity capable of separate existence. The simplest example would be thinking that a coin is a combination of heads and tails, rather than realizing that heads and tails are not entities put together to form a coin, but are aspects of the actual entity, the coin — aspects that we mentally isolate. Likewise, sensory qualities, like brightness and softness, are aspects of the perceptual whole, which we mentally isolate but which never existed as separate phenomena. …
What perception isolates are not qualities but entities. The fire truck is automatically discriminated from the other cars and trucks, the road, the buildings, etc. The spatial discrimination of entities from each other is given in the perception; it is not the outcome of some higher act of cognition. Perception presents us with an array of entities, each set off against the others in a three-dimensional world. …
“Perception” is the direct awareness of reality, in the form of spatially arrayed entities, that results from the automatic neural processing of actively acquired sensory inputs.
(It is taken as understood that the awareness is ongoing, not momentary or episodic, and that perception is “metaphysically given,” and hence inerrant.)
Like perception, reason has a biological function. Man needs knowledge, knowledge that extends beyond the perceptual. He needs to know how to obtain food and shelter, forge tools, and satisfy all the needs of his survival, material and mental. Reason enables man to gain the conceptual knowledge his survival requires.
For man, the basic means of survival is reason. Man cannot survive, as animals do, by the guidance of mere percepts. A sensation of hunger will tell him that he needs food (if he has learned to identify it as “hunger”), but it will not tell him how to obtain his food and it will not tell him what food is good for him or poisonous. He cannot provide for his simplest physical needs without a process of thought. He needs a process of thought to discover how to plant and grow his food or how to make weapons for hunting. His percepts might lead him to a cave, if one is available — but to build the simplest shelter, he needs a process of thought. No percepts and no “instincts” will tell him how to light a fire, how to weave cloth, how to forge tools, how to make a wheel, how to make an airplane, how to perform an appendectomy, how to produce an electric light bulb or an electronic tube or a cyclotron or a box of matches. Yet his life depends on such knowledge . . . [Ayn Rand]
…
We know that man evolved from pre-conceptual primates, and that our present intellectual capacity developed gradually, as the brain evolved. Man’s conceptual faculty arises from the nature of his brain, and the human brain is an elaboration of the primate brain. The conceptual faculty, reason, is an enhancement of perceptual consciousness, not an alien element wrenching man’s soul away from perceptual concretes.
After the work of Darwin, Mendel, Fisher, Watson and Crick, we know with full certainty that man’s conceptual faculty evolved due to natural selection — which means: man’s conceptual faculty has survival value. …
The operation of our perceptual equipment is automatic and infallible; the exercise of the faculty of reason is not automatic but volitional, and therefore can be misused, leading to error. We consciously control how we think, making constant choices regarding what to think about, how to proceed, what counts as evidence, and what constitutes sufficient evidence to be certain of a conclusion.
Man is neither infallible nor omniscient; if he were, a discipline such as epistemology — the theory of knowledge — would not be necessary nor possible: his knowledge would be automatic, unquestionable and total. But such is not man’s nature. Man is a being of volitional consciousness: beyond the level of percepts — a level inadequate to the cognitive requirements of his survival — man has to acquire knowledge by his own effort, which he may exercise or not, and by a process of reason, which he may apply correctly or not. Nature gives him no automatic guarantee of his mental efficacy; he is capable of error, of evasion, of psychological distortion. He needs a method of cognition, which he himself has to discover: he must discover how to use his rational faculty, how to validate his conclusions, how to distinguish truth from falsehood, how to set the criteria of what he may accept as knowledge. [ITOE, 78–79]
A man’s survival, well-being, and happiness depend on his knowing what to do (and how to do it). He needs to know that fire can be tamed and that striking together two rocks of a certain kind can make sparks that will start a fire. He needs to know that seeds can be planted, that a keystone will hold an arch together, that 2 + 2 = 4, that a certain stock is (or is not) a good investment, that the ratio of HDL to LDL cholesterol is important for his vascular health, that a certain person is “Mr. (or Miss) Right,” that political candidate X is most likely to preserve his freedom.
The truth of a proposition depends on the validity and precision of the concepts of which it is composed. Will candidate X support freedom? That depends on what the concept “freedom” means. Is abortion murder? That, in turn, depends on what “murder,” “human,” and “life” mean. Is it immoral to be selfish? That depends on what “moral” and “selfish” mean. …
Concepts are the tools of thought; if the tools are useless, malformed, or otherwise defective, the thought cannot achieve its goal: knowledge of reality. The validity of one’s thinking depends upon the validity of the concepts one uses.
A concept is held by means of a word — e.g., “man,” “furniture,” “justice.” The simplest words are proper names. The word “Tom” names Tom. Likewise, the earliest theorists assumed, when we say “Tom is a man,” we must be using “man” as a name for some one perceivable existent: man, or manness. Manness is the “one in the many,” as the Ancient Greeks put it; manness is the one “universal” found in Tom, then in Dick, then in Harry. We can call each of them a “man,” they held, because “man” refers to this one “universal”: manness. The unstated assumption was: for a concept to be valid, it must be like a percept: an awareness of a perceivable existent, out there in reality, independent of consciousness.
This assumption is the defining characteristic of the theory of concepts known (somewhat misleadingly) as “Conceptual Realism,” which is usually shortened to “Realism.” According to Realism, a concept is a term that designates a metaphysical universal: a special kind of non-specific element present in all the members of a class, an element that is grasped directly by some sort of non-sensory “intuition” or “insight.”
Realism began with Plato. Plato went far beyond the mere acceptance of metaphysical universals; he concocted another, “higher,” reality for universals to inhabit — his “World of Forms.” In that unperceivable, transcendent realm, there is one perfect, unchanging Form corresponding to each of our concepts. There is the “Form of Man,” the “Form of Triangle,” the “Form of Justice,” etc. … (Plato regards Forms as more real than the concrete particulars of the physical world, calling the World of Forms “the really real reality.”) …
The direct object of conceptual awareness, Plato held, is a Form in another dimension.
Aristotle, though he was Plato’s student, rejected this whole metaphysics. Aristotle recognized that there is only one reality, the world of concrete entities, the world that we perceive. But in his theory of concepts, Aristotle did not fully break free from Platonic assumptions. …
Centuries later, John Locke took the further step of jettisoning the form-matter apparatus, treating universals as attributes of things. For Locke, there is no such thing as either “manness” or “essence.” Rather, men all have the rational faculty and other attributes in common. In these attributes, men “agree” — i.e., are the same — however much their other attributes may differ. …
The “problem of universals” is actually the question of the basis of concepts. The issue can be stated as follows. The concretes to which a given concept refers are neither identical to each other nor possessed of any nonspecific properties. What then warrants our treating them as the same, as being interchangeable units, when viewed abstractly?
As Rand puts the question:
To exemplify the issue as it is usually presented: When we refer to three persons as “men,” what do we designate by that term? The three persons are three individuals who differ in every particular respect and may not possess a single identical characteristic (not even their fingerprints). If you list all their particular characteristics, you will not find one representing “manness.” Where is the “manness” in men? What, in reality, corresponds to the concept “man” in our mind? [ITOE, 2]
…
Realists hold that reality contains “universals,” and that these can be grasped passively by the intellect, as if they were perceptually given. This, as we shall see, totally misconstrues what concepts are and do.
Realism’s problems spawned the Nominalist reaction. Accepting the Realist claim that concepts would require pre-packaged universals, Nominalists bite the bullet and conclude that concepts have no objective basis, that concepts are nothing but words. …
This is Extreme Nominalism, a truly lunatic theory. It flatly contradicts every aspect of our actual usage of concepts. For instance, Extreme Nominalism implies that it is an inexplicable miracle when two people independently apply the word “blue” to the next blueberry they see. Since the blueberry and the other things previously called “blue” have nothing in common, according to Wittgenstein, the two people should be equally prepared, on the Nominalist theory, to say “yellow” or “pink” — or, for that matter, “railroad.” …
As Ayn Rand remarks, “Wittgenstein’s theory that a concept refers to a conglomeration of things vaguely tied together by a ‘family resemblance’ is a perfect description of the state of a mind out of focus.” [ITOE, 78]
The result is that, for the Nominalist, how concepts are to be formed and used becomes a matter of feeling. This subjectivism has a devastating impact upon higher-level abstractions. One man feels that aborting a fetus and killing an innocent adult “resemble” each other, so he calls both “murder”; another man feels differently and maintains that abortion is a right. One man feels that a free lunch and free speech are similar, another man disagrees. One man feels that Al Capone and John D. Rockefeller were similar enough to be classified together, as being “rapacious” and “predatory.” But are they? How similar and in what way? Nominalism not only has no answer, it regards all such questions as in principle unanswerable.
Thus, the Nominalist theory deprives man of objective guidance in the crucial aspect of his life: how to form and use the concepts on which his control over the course of his life depends.
Stepping back to get an overview, we can state the problem of concepts as follows. The concretes to which a given concept refers are similar but not identical. The blue of the sky is similar but not identical to the blue of a blueberry. A beagle and a collie are similar in some way, though not identical, and this similarity is what enables us to classify them as dogs. The question then is: What is similarity? How does the fact that observed concretes are similar warrant our treating them as the same, as interchangeable units, qua referents of a given concept?
Ayn Rand’s theory of similarity grounds her Objectivist theory of concepts. She defines similarity as: “the relationship between two or more existents which possess the same characteristic(s), but in different measure or degree.” [ITOE, 13]
Things that are similar differ quantitatively. The blue of a blueberry is not identical to the blue of the sky, but the two differ quantitatively, in measurable ways. …
… a very young child beginning to form concepts would not perceive a pig and a collie to be similar. Why not, if similarity is an issue of quantitative differences? After all, the pig’s differences from the collie are also measurable — the pig is fatter, pinker, with a measurably different shape, etc.
The answer to this question lies in a cognitive process neglected by traditional theorists: differentiation. Similarity is inherently perceived against a background of difference. As I have stressed, consciousness is a difference-detector. When a naïve, pre-conceptual child attends to two items, it is their differences, not their similarities, that will be prominent. Although a beagle and a collie are similar, putting them side by side serves to focus attention on their differences (for a pre-conceptual child). But sensitivity to difference can be turned to advantage here. When the child observes a beagle, a collie, and a pig, the huge differences between the pig and the dogs leap to the foreground of awareness, making the two dogs appear similar. …
The grasp of similarity requires a minimum of three concretes having a commensurable characteristic(s): two whose measurements differ slightly and one that differs greatly in measurement from both.
The arch example is location. Is the Empire State Building near to or far from the Chrysler Building? That depends — it depends on what we are comparing them to. If the comparison is to a location across the street from the Empire State Building, the Chrysler Building is far. If the comparison is to the Sears Tower in Chicago, the two Manhattan buildings are near to each other. And if the comparison is to the location of a mountain on Mars, all three buildings are near to each other.
…..A…..B……………………………..C…..
Considered by themselves, A and B are in different places. Considered in contrast to C, A and B, though not in the identical place, are seen as falling in the same general region — i.e., at the left end of the line. …
“Near” and “far” are the stand-ins for “similar” and “different.” Similarity is measurement-proximity. …
For this mechanism of contrast to work, all three items, the two similars and the foil, must be compared along the same axis. They must share a commensurable characteristic, as in the diagram above, where horizontal position is the commensurable characteristic. If the position of A and B were to be considered in the same frame of awareness with, say, the smell of a peach, no similarity would be established, as there is no commensurable characteristic uniting those three items: no single unit of measurement can be applied to them. They are not different but disparate. Differentiation requires a commensurable characteristic to serve as the axis along which the things can be compared.
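A minimal sketch of this mechanism, assuming one-dimensional measurements along a shared commensurable characteristic and an arbitrary toy threshold: two items are grasped as similar when the foil's measurement lies much farther away than the two items lie from each other.

```python
# Sketch: similarity as measurement-proximity, grasped against a foil.
# All three items must be measured along the same commensurable characteristic.

def similar_against_foil(a: float, b: float, foil: float, ratio: float = 5.0) -> bool:
    """a, b, foil: measurements on one shared axis (hue, position, length...).
    a and b register as similar when the foil's distance from them dwarfs
    their distance from each other. `ratio` is an arbitrary toy threshold."""
    gap_ab = abs(a - b)
    gap_to_foil = min(abs(a - foil), abs(b - foil))
    if gap_ab == 0:
        return True
    return gap_to_foil / gap_ab >= ratio

# Positions on a line, echoing the A ... B .............. C diagram above:
A, B, C = 5.0, 8.0, 40.0
print(similar_against_foil(A, B, C))   # True: against C, A and B fall in the same region
print(similar_against_foil(B, C, A))   # False: B and C are not "near" relative to A
```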
Rand refers to this commensurable characteristic as the “Conceptual Common Denominator” (CCD):
A commensurable characteristic (such as shape in the case of tables, or hue in the case of colors) is an essential element in the process of concept-formation. I shall designate it as the “Conceptual Common Denominator” and define it as “The characteristic(s) reducible to a unit of measurement, by means of which man differentiates two or more existents from other existents possessing it.” [ITOE, 15]
It may be helpful to connect Rand’s terminology with the traditional terms “genus” and “differentia.” A genus is the wider category to which a given concept’s referents belong. E.g., the genus of “triangle” is “polygon.” What is the relation of “genus” to Rand’s “CCD”? The CCD is not a category but a measurable characteristic, one possessed by all the things in the genus, as “having a certain number of sides” is a characteristic of all polygons.
The differentia is the referents’ distinguishing characteristic, the one that isolates them from all the other things within the genus. For triangle, the differentia is “three-sided.” So, we form “triangle” by differentiating shapes according to the number of their sides (CCD), triangles having three.
Though the CCD plays a crucial role in conceptualization, neither the CCD nor its function is explicit in the mind of the beginning conceptualizer. The CCD’s role is precisely to serve as the unnoticed background — not to call attention to itself but to something else: the difference between the similars and the foil, whose measurements lie noticeably farther away on the same CCD. When a child differentiates two blues from green, the CCD is hue; but the child’s attention is drawn not to hue as such (the sameness among all three items) but to blue vs. green (the difference in hue). …
Now we are in a position to find the “one in the many.” Or, rather, to see that the one in the many, the “universal,” is man-made, not given in nature, yet not created subjectively. A concept classifies together concretes whose measurements fall within the same category of measurements within the CCD.
The mental process of grasping the distinguishing characteristic is called by Rand “measurement-omission.” Measurement-omission is the core of the Objectivist theory of concept-formation. Having identified that similar concretes possess the same characteristic, but vary in their measurements, Rand was able to identify the nature of abstraction. The process of abstraction consists in interrelating the concretes in a certain way: one grasps the range or category of measurements that embraces all their varying measurements.
Concepts, then, are formed by omitting measurements. But omitting measurements is not a process of deletion or excision, as if we could mentally strip away the specific measurement from the characteristic. Rand is very clear about the difference between “measurement-omission” and the narrowed, eliminative focus of the abstraction-as-subtraction view:
Bear firmly in mind that the term “measurements omitted” does not mean, in this context, that measurements are regarded as nonexistent; it means that measurements exist, but are not specified. That measurements must exist is an essential part of the process. The principle is: the relevant measurements must exist in some quantity, but may exist in any quantity. [ITOE, 12]
Measurement-omission does not consist in ignoring the specific and varying measurements of concretes (à la Locke). When we omit the measurements, we are not ignoring anything; we are grasping something more: the relationship among the measurements, the fact that the similar concretes are “near” to each other, as contrasted with the “far away” measurements of the foil.
When Rand says “That measurements must exist is an essential part of the process,” the process she is referring to is the process of measurement-omission. One uses the measurements to interrelate the similar concretes. One needs to focus one’s attention on the specific measurements, in order to see “where they are” — i.e., to grasp the category of measurements within which the similar concretes fall. E.g., the child has to focus on the specific shapes of particular tables (and of the foil, e.g., a chair) in order to perceive the tables as being similar, and to establish the range of shape-variation.
The process of measurement-omission is one of grasping a segment of the CCD — the segment representing the range embracing the similar concretes. Thus, measurement-omission is measurement-integration — the establishment of a range or category of measurements.
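The point can be rendered as a toy sketch (the function names and the "squatness" numbers are invented for illustration): forming a concept keeps the commensurable characteristic and the range its measurements fall within, while no particular measurement is retained. Here the range is taken crudely from the observed instances, whereas the concept proper is open to any measurement within the category.

```python
# Sketch: measurement-omission as measurement-integration. The concept keeps
# the characteristic and a range of measurements ("some quantity, but any
# quantity"), not the particular measurements of the concretes.

def form_concept(name: str, characteristic: str, observed: list[float]) -> dict:
    low, high = min(observed), max(observed)     # crude: range taken from the instances
    return {"name": name, "characteristic": characteristic, "range": (low, high)}

def subsumes(concept: dict, measurement: float) -> bool:
    low, high = concept["range"]
    return low <= measurement <= high

# Invented "squatness" measurements (height-to-length ratio) for some tables,
# with a chair as the contrasting foil:
table = form_concept("table", "height-to-length ratio", [0.45, 0.50, 0.60])
print(table["range"])          # (0.45, 0.6): the integrated category of measurements
print(subsumes(table, 0.55))   # True: a newly seen table falls inside the range
print(subsumes(table, 1.80))   # False: the chair-like foil falls far outside it
```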
Concepts are formed by treating existents as units. A “unit,” in Rand’s definition, is “an existent regarded as a separate member of a group of two or more similar members.” [ITOE, 6]
Since similarity is a quantitative relationship, the term “unit” from mathematics is meant literally here. “Unit” means “one,” and to regard an existent as one is to view it in relation to a group — e.g., as a (one) book. The group is formed by mentally isolating things that are similar in some way, even if only in location (“objects on my desk”). Each of the similar existents is a “one” that can be used as a standard for counting or for measuring degrees.
Mentally linking similar existents to a word (“I’ll call these things tables”) makes the grouping permanently accessible as a store of knowledge about its units. Words’ information-handling power makes possible an exponential growth in knowledge, the growth observable in a child’s cognitive development from infancy to adulthood, as well as in the historical progression of mankind’s knowledge.
It is a commonplace to observe that words — whose full implementation is language — permit the communication of knowledge and thus the transmission of knowledge across generations. But Rand makes a deeper point: beyond a certain level, language is essential to the acquisition of knowledge in a single, private mind.
Concepts and, therefore, language are primarily a tool of cognition — not of communication, as is usually assumed. Communication is merely the consequence, not the cause nor the primary purpose of concept-formation — a crucial consequence, of invaluable importance to men, but still only a consequence. Cognition precedes communication; the necessary precondition of communication is that one have something to communicate.
. . . The primary purpose of concepts and of language is to provide man with a system of cognitive classification and organization, which enables him to acquire knowledge on an unlimited scale; this means: to keep order in man’s mind and enable him to think. [ITOE, 69]
…
Summarizing and condensing all the preceding is Ayn Rand’s definition of “concept”: “A concept is a mental integration of two or more units possessing the same distinguishing characteristic(s), with their particular measurements omitted.” [ITOE, 13]
By analyzing how concepts are formed, Rand has solved the age-old “problem of universals.” She has shown what concepts refer to in reality. Concepts do not refer to some Platonic “Form” in another dimension, nor to an Aristotelian “essence” in things, nor to a Lockean non-specific attribute, nor to Nominalist vague “resemblances.” Concepts refer to existents that have, within a range or category, “some but any” degree of the same characteristic(s). As Rand puts it:
Now we can answer the question: To what precisely do we refer when we designate three persons as “men”? We refer to the fact that they are living beings who possess the same characteristic distinguishing them from all other living species: a rational faculty — though the specific measurements of their distinguishing characteristic qua men, as well as of all their other characteristics qua living beings, are different. (As living beings of a certain kind, they possess innumerable characteristics in common: the same shape, the same range of size, the same facial features, the same vital organs, the same fingerprints, etc., and all these characteristics differ only in their measurements.) [ITOE, 17]
Everything that exists is finite. Whether it is an entity or an aspect of an entity, each existent is exactly what it is, and no more than that. This applies to the faculty of consciousness as well. The faculty has a specific identity, and that identity permits it to do certain specific things, but not more than that. …
Man’s conceptual faculty has its own specific, delimited identity. Accordingly, every aspect of conceptual processing has operational limits, or “specs” in computer-language. For instance, one cannot learn with infinite rapidity; it takes time to take in, absorb, digest, and fully integrate a body of new material. …
A man cannot concentrate on everything presented to him at once. Giving more attention to one thing is achieved by giving less attention to other things. This is apparent even in vision: one’s visual field is much wider than what is in clear, focal vision at any given moment. That limit applies not just to vision, but to every aspect of consciousness, including the conceptual faculty. In particular, there is a limit on the number of distinct units one can hold in focal awareness at any given moment.
This simple fact is the basis of the conceptual level’s power and biological value. Everything that concepts do for man, everything that raises man above the animal levels, is traceable to the fact that concepts permit economizing on units. “Conceptualization is a method of expanding man’s consciousness by reducing the number of its content’s units.” [ITOE, 64]
Rand introduces the principle of unit-economy by comparing man’s cognitive capacity to that of animals. Since the following passage goes to the essence of the Objectivist theory of concepts, I quote it at length.
The story of the following experiment was told in a university classroom by a professor of psychology. I cannot vouch for the validity of the specific numerical conclusions drawn from it, since I could not check it first-hand. But I shall cite it here, because it is the most illuminating way to illustrate a certain fundamental …
The experiment was conducted to ascertain the extent of the ability of birds to deal with numbers. A hidden observer watched the behavior of a flock of crows gathered in a clearing of the woods. When a man came into the clearing and went on into the woods, the crows hid in the tree tops and would not come out until he returned and left the way he had come. When three men went into the woods and only two returned, the crows would not come out: they waited until the third one had left. But when five men went into the woods and only four returned, the crows came out of hiding. Apparently, their power of discrimination did not extend beyond three units — and their perceptual-mathematical ability consisted of a sequence such as: one-two-three-many.
Whether this particular experiment is accurate or not, the truth of the principle it illustrates can be ascertained introspectively: if we omit all conceptual knowledge, including the ability to count in terms of numbers, and attempt to see how many units (or existents of a given kind) we can discriminate, remember and deal with by purely perceptual means (e.g., visually or auditorially, but without counting), we will discover that the range of man’s perceptual ability may be greater, but not much greater, than that of the crow: we may grasp and hold five or six units at most.
This fact is the best demonstration of the cognitive role of concepts.
Since consciousness is a specific faculty, it has a specific nature or identity and, therefore, its range is limited: it cannot perceive everything at once; since awareness, on all its levels, requires an active process, it cannot do everything at once. Whether the units with which one deals are percepts or concepts, the range of what man can hold in the focus of his conscious awareness at any given moment, is limited. The essence, therefore, of man’s incomparable cognitive power is the ability to reduce a vast amount of information to a minimal number of units — which is the task performed by his conceptual faculty. And the principle of unit-economy is one of that faculty’s essential guiding principles. [ITOE, 62–63]
There are two distinguishable points here:
… In informal conversation, Rand often referred to this capacity-limit as “the crow epistemology,” or just “the crow.” …
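A small illustration of unit-economy, taking the passage's rough figure of five or six units as a toy parameter: a dozen concretes cannot all be held in focus as separate units, but condensed under a few concepts they can.

```python
# Sketch: unit-economy. Only a handful of units can be held in focal
# awareness at once; condensing concretes under concepts reduces the count.

CROW_LIMIT = 6   # the rough figure suggested in the quoted passage (a toy parameter)

concretes = ["beagle", "collie", "poodle", "oak", "pine", "fern",
             "robin", "crow", "sparrow", "granite", "quartz", "slate"]

condensed = {
    "dog":   ["beagle", "collie", "poodle"],
    "plant": ["oak", "pine", "fern"],
    "bird":  ["robin", "crow", "sparrow"],
    "rock":  ["granite", "quartz", "slate"],
}

print(len(concretes) <= CROW_LIMIT)   # False: 12 separate units overload the "crow"
print(len(condensed) <= CROW_LIMIT)   # True: 4 conceptual units are easily held
```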
Rand provides a vivid and memorable summary of the Objectivist theory of concepts by means of a mathematical analogy:
The basic principle of concept-formation (which states that the omitted measurements must exist in some quantity, but may exist in any quantity) is the equivalent of the basic principle of algebra, which states that algebraic symbols must be given some numerical value, but may be given any value. In this sense and respect, perceptual awareness is the arithmetic, but conceptual awareness is the algebra of cognition.
The relationship of concepts to their constituent particulars is the same as the relationship of algebraic symbols to numbers. In the equation 2a = a + a, any number may be substituted for the symbol “a” without affecting the truth of the equation. For instance: 2 x 5 = 5 + 5, or: 2 x 5,000,000 = 5,000,000 + 5,000,000. In the same manner, by the same psycho-epistemological method, a concept is used as an algebraic symbol that stands for any of the arithmetical sequence of units it subsumes.
Let those who attempt to invalidate concepts by declaring that they cannot find “manness” in men, try to invalidate algebra by declaring that they cannot find “a-ness” in 5 or in 5,000,000. [ITOE, 18]
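To spell out the parallel with a one-line check (purely illustrative): the identity 2a = a + a survives whatever value is substituted for the symbol, just as a concept survives whatever particular measurements its referents happen to have.

```python
# "Some value, but any value": the identity 2a = a + a holds for every
# substitution, as a concept holds for any measurements of its referents.
for a in (5, 5_000_000, 0.25, -3):
    assert 2 * a == a + a
print("2a = a + a held for every value tried")
```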
The Objectivist theory holds that concepts are formed by grasping similarity against difference; the measurement-relationships among existents result from their having a commensurable characteristic. …
Abstraction is not subtraction but integration.
… How does a child form higher-level concepts, since they are too abstract to be formed directly from perception as first-level concepts are?
Higher-level concepts are formed by the process Rand calls “abstraction from abstractions.” It consists of turning the concept-forming process back on its own products: the input to the process is not concretes but earlier-formed concepts. The input used to form the concept “animal” is the prior concepts of, say, “dog,” “flea,” and “elephant.” The process is iterative: “animal” and “plant” will become the input for forming, years later, the still more abstract concept “organism.” …
It is abstraction from abstractions that allows one to acquire the full human vocabulary, giving one the power of thought, the power that has enabled man to become, in Dobzhansky’s phrase, “the lord of creation.”
A child who has only first-level concepts is, metaphorically speaking, living hand-to-mouth, without any tools of production. The child’s cognitive progress depends upon his acquiring the equivalent of baskets, nets, weapons, huts — all of which expand his efficacy, better his life, and save him time. Just as the development of basic tools, and then tools for making tools, made possible man’s material progress, so the step-by-step formation of concepts at higher and higher levels of abstraction makes possible his intellectual progress (which is required for his material progress).
Wider integrations are the simpler case. To form wider concepts, one applies the same processes of differentiation and integration that one used to form first-level concepts, but with earlier-formed concepts taken as the units to be integrated. A simple example is forming the concept “furniture” from the prior concepts “table,” “bed,” “couch,” “dresser,” etc. These first-level concepts are taken as units, which are then differentiated from architectural features, such as walls and doors, and/or from other objects in a room, such as plates, appliances, and rugs. Then one integrates the items of furniture according to their possession of a distinguishing characteristic, with their differing measurements omitted. …
There are two ways of subdividing an earlier concept: 1) by narrowing the earlier concept’s measurement-range, or 2) by adding a new characteristic, a characteristic not used in forming the earlier concept. …
Consider the process of subdividing the concept “table” to form “coffee table” and “dining table.” These two subdivisions are formed by narrowing the height measurements (coffee tables being lower than dining tables) and narrowing the function-measurements (to keep objects within easy reach while seated on a couch vs. while seated higher, in dining chairs).
The second type of narrowing is more complex and more interesting: cross-classification. An existing concept is narrowed according to the presence or absence of a new characteristic. Rand gives the example of forming “desk” as a narrowing of “table.” The essential difference between desks and other tables consists in the presence of drawers for stationery supplies — a characteristic not specified in, and orthogonal to, the original concept “table.”
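Both kinds of subdivision can be shown in a brief sketch with invented cutoffs and attribute names: "coffee table" and "dining table" narrow the height range already implicit in "table," while "desk" cross-classifies by a characteristic (drawers) that played no role in forming "table."

```python
# Sketch: two ways of subdividing an earlier concept.
# 1) Narrow an existing measurement range (coffee vs. dining table, by height).
# 2) Cross-classify by a new characteristic (desk: a table with drawers).

def classify_table(height_cm: float, has_drawers: bool) -> list[str]:
    labels = ["table"]                      # the earlier, wider concept
    if height_cm < 55:                      # invented cutoff, for illustration only
        labels.append("coffee table")       # subdivision by narrowed height range
    else:
        labels.append("dining table")
    if has_drawers:                         # new characteristic, orthogonal to height
        labels.append("desk")               # subdivision by cross-classification
    return labels

print(classify_table(45, False))   # ['table', 'coffee table']
print(classify_table(75, True))    # ['table', 'dining table', 'desk']
```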
The process of abstraction from abstractions gives rise to a phenomenon of immense importance for epistemology: hierarchy.
In its most general usage, a “hierarchy” is an ordered relationship among items: each item is located in a series according to its dependency upon the item below it. … The hierarchy terminates in (or begins with) a primary, or set of primaries. This primary is the fundamental item of the series, the one on which all the others depend. For instance, the military hierarchy terminates in the Commander-in-Chief; similarly, each floor of a building depends on the one below it, terminating in the ground floor.
The hierarchy of concepts results from the iterative nature of abstraction from abstractions. Higher-level concepts depend upon the earlier ones that they integrate or subdivide. “Organism” (a widening) depends on “plant” and “animal,” and these concepts, in turn, depend on earlier concepts formed from perception, such as “tree” and “bush,” and “dog” and “pig.” “Celebrity” (a narrowing) depends on “fame” and “man.”
This dependency is absolute: without lower-level concepts to link them back to perceptual reality, the higher-level concepts lose their meaning, becoming empty sounds.
The hierarchy of concepts concerns the necessary order of their formation. By virtue of the identity of man’s consciousness, concepts have to be formed in a certain order. The concept of “stockholder” cannot be formed before the concept of “stock,” which cannot be formed before the concept “corporation,” which cannot be formed before the concept “business.” …
Likewise, consider the concept “organism” as denoting any living being, whether plant or animal. A child can see the similarity among dogs, or among horses, or among trees. But now picture the child, before he has any concepts, looking at a grassy field on which there are dogs, horses, trees, and rocks. Given the limits of the “crow,” no child can grasp by just looking at the scene that the living organisms (dogs, grass, horses, trees) are similar as opposed to the rocks. In order to reach the required scale of awareness, the child must first condense the dogs into “dog,” the horses into “horse,” then “dog” and “horse” into “animal,” and he must separately condense “tree” and “grass” into “plant.” Only after these intermediate concepts have been formed and automatized can he then consider the two units “animal” and “plant” in opposition to “rock”. …
The hierarchy of concepts is the order required to grasp the concept, not merely to parrot the word. And “grasping” means “taking firm, secure hold of,” not merely “having a faint, brushing encounter with.” To grasp “soldier,” one must first grasp “army” or “military,” which in turn requires grasping “war,” “nation,” and so on, until one gets back to first-level concepts — i.e., those that can be formed from perception, without needing to use any previously formed concepts in the process. …
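The necessary order of formation is, in effect, a dependency graph whose valid learning sequences are its topological orderings. The sketch below uses the "stockholder" chain from above, truncated for brevity so that "business" stands in for the still earlier concepts beneath it.

```python
# Sketch: the hierarchy of concepts as a dependency graph. A valid order of
# learning is a topological ordering: every concept comes after its bases.

from graphlib import TopologicalSorter

DEPENDS_ON = {                      # illustrative chain from the text, truncated
    "stockholder": {"stock"},
    "stock": {"corporation"},
    "corporation": {"business"},
    "business": set(),              # standing in for the earlier concepts beneath it
}

learning_order = list(TopologicalSorter(DEPENDS_ON).static_order())
print(learning_order)               # ['business', 'corporation', 'stock', 'stockholder']
```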
Neither the concept “monocotyledon” nor even the concept “dog” represents a gift granted by society. Grasping any concept is an achievement, one that requires effort, attention, and doing mental work. The product of that work, a concept, is a grasp of facts of reality — i.e., knowledge.
We need to bear in mind Rand’s observation that “no matter how many men mouth a concept as a meaningless sound, some man had to originate it at some time.” [ITOE, 21] To originate a new concept requires making fresh observations, and, for higher-level concepts, that origination also requires a sustained effort to relate things, in a quest to find common denominators.
A fairly simple example of the difference between origination and learning from others is provided by the concept “velocity.” High school physics students today have little difficulty in integrating the concepts of “speed” and “direction” to form the concept “velocity,” which denotes speed in a given direction. But consider the fact that the origination of this concept, which is the key to understanding the laws of motion, required the genius of a man like Galileo. The simple concept “speed,” even precisely measurable speed, does not suffice to explain motion. For example, the basis of understanding planetary orbits lies in realizing that, though a planet’s speed remains roughly constant, its velocity is continuously changing as it follows its curved path.
By using the Galilean concept of “velocity,” Newton was able to explain how gravity is the force responsible for the changing velocity of the planets, and that the planets are constantly accelerating towards the sun, even though their distance from the sun remains almost constant. “Acceleration” is a still more abstract concept, defined not as a change in speed but as a change in velocity. Only this conceptual hierarchy makes it possible to understand such seemingly paradoxical facts as that a ball thrown upwards, though losing speed as it rises, is accelerating downward from the moment it leaves one’s hand, in accordance with Newton’s Second Law of motion (F = ma).
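A numeric illustration using ordinary kinematics (the numbers are arbitrary): for a ball thrown straight up, the speed first shrinks and then grows, the velocity changes sign, and the acceleration stays constant and downward the whole time, which is the seemingly paradoxical fact the conceptual hierarchy makes intelligible.

```python
# Illustration: speed vs. velocity vs. acceleration for a ball thrown straight up.
# Taking "up" as positive: v(t) = v0 - g*t, and a(t) = -g throughout.

g, v0 = 9.8, 19.6                   # m/s^2 and m/s; arbitrary illustrative values
for t in (0.0, 1.0, 2.0, 3.0):
    v = v0 - g * t                  # velocity: signed, direction matters
    speed = abs(v)                  # speed: the magnitude only
    print(f"t = {t:.0f} s   velocity = {v:+5.1f} m/s   speed = {speed:4.1f} m/s   acceleration = -9.8 m/s^2")
# The speed falls to 0 at t = 2 s and rises again, yet the acceleration never
# changes: the ball is "accelerating downward" from the moment it is released.
```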
Newton’s three laws of motion, as simple as they are, cannot be grasped until one has grasped, then related, the concepts of direction, speed, velocity, and acceleration. These concepts do not lie flat, in effect, on a plane; instead, they have a hierarchical structure: a necessary order of learning. “Velocity” is a cross-classification and cannot be grasped before “speed” and “direction.” Likewise for “acceleration,” which is a change in velocity. If one forgets the actual meaning of “velocity,” and attempts to use it as a synonym for “speed,” one’s concept of “acceleration” will be sabotaged. A higher-level concept depends for its meaning on the prior concepts used in its formation. Deprived of that base, a concept becomes a meaningless sound. That is exactly the fate of “stolen concepts” — e.g., of “illegal” in “The U.S. Constitution is illegal.”
Hierarchy has sometimes been analogized to structures — to skyscrapers or pyramids. But the most accurate analogy for the hierarchy of concepts is the suspension bridge. Some of the higher parts of a suspension bridge serve to hold up the structures below them; other parts do the opposite, supporting what lies on top of them. But every part of the suspension bridge is subject to and works in relation to the force of gravity, just as every concept in the hierarchy is subject to the necessary order of learning. Just as a given part of a suspension bridge has support but also supports other parts, so a concept in the hierarchy may have prior concepts that make it possible, while also making possible grasping other concepts that rest on it.
And just as any bridge part will fall unless it is supported, ultimately, by the ground, so any concept which does not reduce through intermediate concepts back to the perceptual level will fail to function cognitively. Ungrounded “concepts” are mere sounds, without a cognitive link to the facts of reality. Rand calls them “floating abstractions.”
Concepts are tools. The value of any tool lies in its use. What concepts are used to do is to identify: to state in words what something is.
The form in which we make conceptual identifications is the proposition. A proposition is the combination of two or more concepts into a single thought, as in “The table is brown,” “Tops spin,” or “Plants need water.”
A proposition must be distinguished from the sentence used to express it. A sentence is a series of words; the proposition is the thought behind those words. A sentence is the concrete, sensuous form of a proposition in just the way that a word is the concrete, sensuous form of a concept. Just as the word “table” denotes the same concept as “Tisch” does in German, so “The table is brown” is the same proposition as that expressed by “Der Tisch ist braun.” The linguistic symbols differ, the thought is the same. The proposition is the cognitive content of the sentence, as distinguished from its linguistic form…
A given sentence may combine two or more propositions — i.e., make two or more identifications. “Plants need sunlight, and animals need food” is a single sentence, but two propositions. More complex sentences may express several propositions. But we are concerned here only with understanding the basic form of the proposition: the assertion that a predicate, P, applies in a specified way to a subject, S: “S is P.” …
In a tantalizingly brief lead to the cognitive function of propositions, Rand states:
Since concepts, in the field of cognition, perform a function similar to that of numbers in the field of mathematics, the function of a proposition is similar to that of an equation: it applies conceptual abstractions to a specific problem. [ITOE, 75]
An analysis of propositions in terms of subject and characteristics suggests how to flesh out Rand’s statement. First, her statement has to be understood correctly. Her point, as I read it, is not that a proposition equates subject and predicate — not, absurdly, that “Lassie is a dog” says that “Lassie = dog” — but, as she says, that there is a similarity in function between a proposition and an equation. The overall function of propositions and of equations is to advance our knowledge. “Lassie is a dog” advances our knowledge in a manner similar to the way “2 + 3 = 5” advances it.
But just how does the arithmetic equation advance our knowledge? What is the “specific problem” that “2 + 3 = 5” solves? The problem is to identify the overall quantity of a group, a group known to consist of a pair and a trio. But there are an unlimited number of other equations that would also identify the quantity of 2 + 3:
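A few of the unlimited possibilities, given here only as illustrations:

$$2 + 3 = 1 + 4, \qquad 2 + 3 = 25 \div 5, \qquad 2 + 3 = 18 - (4 + 9)$$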
And so on. Every valid equation makes a connection between terms on the left side of the equal sign and terms on the right. Propositions, likewise, make a connection between subject and predicate. Establishing a connection between arithmetic terms or between subject and predicate means that knowledge can be applied: knowledge of what is on the right side of an equation, or knowledge stored by the predicate of a proposition.
Why, though, is “5” the answer to the question, “What is the quantity of a group composed of two units and three units?” What is wrong with giving as the sum “18 − (4 + 9)”? The answer is that “5” is the most unit-economical way of stating that group’s quantity. Giving the sum as “5” makes available the greatest amount of other knowledge. “5” gives direct access to all the facts in the “5” file folder, such as that the quantity is 1 more than 4, that it is the same as the quantity of fingers on one hand, is odd, prime, has no integer square root, is the cube root of 125, etc. Identifying the sum of “2 + 3” as “5” implicitly relates that quantity to the whole integrated set of mathematical facts that we have stored, carry forward, and continue to learn more about. The one term “5” on the right-hand side of the equation relates the two terms on the left to a kind of “central repository” for information on this quantity.
Propositions perform a similar function. The proposition “Lassie is a dog” enables us to apply to Lassie all the knowledge of dogs stored in the “dog” file, which is the “central repository” for information on this kind of animal.
How does information get added to the conceptual file? By means of a higher-order proposition: a proposition whose subject is the predicate of the original proposition.
In the higher-order proposition, one identifies something about that kind of thing — the kind denoted by the predicate of the lower-order proposition. In the “sophomore” example, the higher-order propositions would include, first a definition: “A college sophomore is a college student in his second year of study,” and then such other information as: “Sophomores are typically 19 to 20 years of age,” “Sophomores are qualified to take certain courses not offered to freshmen,” “Sophomores are said to have an exaggerated sense of what they know,” “Sophomores who have not yet picked a major are expected to do so in that year of study,” and so on.
These higher-level identifications, which take “sophomores” as their subject, make it possible for that concept to store knowledge about all its units. Then we call upon this knowledge when we reason deductively: “Joe is a sophomore, therefore he is qualified to take certain courses that he was not qualified to take as a freshman.”
The proposition “S is P” has its cognitive value in the fact that “P” is not merely a symbol for all the things similar to S in a given respect, but that “P” has itself been identified by means of higher-order propositions of the form “P is Q.” The higher-order identifications permit us to apply that knowledge (with whatever qualifications are necessary) to S, by the simple deductive process:
S is P
P is Q
______
S is Q
Because concepts store knowledge, once one learns that S is P, the fact that S is Q becomes implicit in the structure of one’s conceptual filing system: since the P folder is stored “inside” the Q folder, placing S in P also means placing S in Q.
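A minimal sketch of this filing-system analogy in code may be useful; the particular concepts, stored facts, and function names below are illustrative inventions, not drawn from the text.

```python
# A toy "conceptual filing system": each concept is a file that stores
# higher-order identifications about its units, plus a link to the wider
# concept ("folder") it is filed inside.
concepts = {
    "dog":      {"filed_in": "animal",   "facts": ["is domesticated", "barks"]},
    "animal":   {"filed_in": "organism", "facts": ["is conscious", "moves itself"]},
    "organism": {"filed_in": None,       "facts": ["is alive", "needs energy"]},
}

def apply_knowledge(subject, predicate):
    """Given the proposition 'subject is predicate', bring to bear everything
    stored in the predicate's file and in every file it is nested inside
    (S is P, P is Q, therefore S is Q)."""
    conclusions = []
    concept = predicate
    while concept is not None:
        for fact in concepts[concept]["facts"]:
            conclusions.append(f"{subject} {fact}")
        concept = concepts[concept]["filed_in"]
    return conclusions

# "Lassie is a dog" makes all the stored dog-, animal-, and organism-knowledge
# applicable to Lassie by simple deduction.
print(apply_knowledge("Lassie", "dog"))
```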
Thus, concepts and propositions interact. Concepts make propositions possible, but then higher-order propositions enable us to add content to those concepts. Propositions make thought itself possible; indeed, propositions are the form that thinking takes. And some of that thinking leads to the formation of new concepts, concepts that could not have been formed without propositions using earlier knowledge. …
The proposition is a means of bringing to mind and applying to the subject the knowledge stored in concepts. The primary direction of information flow is from the predicate to the subject; the predicate illuminates the subject by bringing stored knowledge to bear upon it.
Propositions are knowledge-appliers.
The ability to form propositions gives rise to a crucially important phenomenon: the ability to think about one’s own thinking and thus to judge one’s own judgments. Concepts of consciousness enable one to identify what one’s mind is doing — whether one is perceiving, thinking, emoting, imagining, etc. Axiomatic concepts enable one to distinguish between what is only in one’s mind and what is a fact of the external world, to distinguish between what merely seems to be and what really is, between one’s wishes or fears and the facts of an independently existing reality….
In consequence, a man’s ability to evaluate his mental processes gives him a power of self-control that liberates him from the reactive nature of the perceptual level. Perceptual cognition is automatic; perception is world-generated, hard-wired, and deterministic. Conceptual cognition, in contrast, is volitional: self-initiated, self-directed, and controllable. …
By using concepts of consciousness and axiomatic concepts to make introspective judgments, man becomes cognitively self-determining. He can decide by conscious, explicit choice what questions to ask, what issues to consider, what aspects to focus on, how to proceed.
But conceptual functioning is fallible. Although perception is inerrant, conceptual processes, by virtue of being volitionally controlled, can be misperformed, resulting in conclusions that are false — i.e., that contradict perceived fact. To correctly identify reality on the conceptual level, man needs a method, with standards, to guide him. We give this method the name “logic.”
Since the purpose of logic is to align one’s thinking with the facts, logic requires that one start from and work with the only source of information about the facts: perceptual observation. In the conceptual use of that data — in concept-formation, propositional judgment, and inference — logic defines the kind of procedures one must follow in order to keep one’s thinking connected to perceptual reality. (Logic is not only the method of proof but also the method required to gain knowledge.)
In dealing with perception-derived material, the guidance logic provides flows from a single imperative: Be consistent. Because contradictions do not exist in reality, a mental process that involves or implies a contradiction has departed from reality and is invalid; a conceptual product that contradicts any fact is false.
Aristotle, “the father of logic,” identified the Law of Non-Contradiction, stating that it is the basic principle of all knowledge. He gives this careful formulation of the Law:
The same characteristic cannot both belong and not belong to the same thing at the same time and in the same respect.
A thing cannot be hot and not be hot at the same time. If it changes, over time, from being hot to being cool, that does not violate the Law of Non-Contradiction …
The corollary of the Law of Non-Contradiction is the Law of Excluded Middle: a given thing must either have or not have a given characteristic at a given time and in a given respect. It must either be A or not be A.
(Violations of Excluded Middle reduce to contradictions. A violation of Excluded Middle would be something that isn’t A and isn’t non-A. But, as explained in the preceding chapter, what isn’t A is still something, just something different from A. Thus a violation of Excluded Middle would be something that is different from A and isn’t different from A — a contradiction.)
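Put schematically, as an illustrative rendering of that argument (reading “non-A” simply as “different from A”):

$$\big(x \text{ is not } A\big) \;\wedge\; \big(x \text{ is not non-}A\big) \;\Longrightarrow\; \big(x \text{ is non-}A\big) \;\wedge\; \big(x \text{ is not non-}A\big)$$

since whatever is not A is thereby something other than A, i.e., is non-A; and the right-hand side is an outright contradiction.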
Later Aristotelians recognized that both these laws stem from the axiom of identity: “A is A.” A thing is what it is.
There are many ways of formulating these three laws. I suggest the following formulations are those that get to the fundamental: the metaphysical issue of what it is to be:
The following formulations are more economical and more memorable, but they are consequences of what is stated in the more careful formulations:
The Laws of Non-Contradiction and Excluded Middle are reformulations of the Law of Identity made for the purpose of guiding cognition. To think that a thing is A and non-A in the same respect is implicitly to hold that the thing is everything in that respect. But to be everything in a given respect is to be nothing in particular in that respect — i.e., to lack identity.
For instance, if a ball simultaneously is and isn’t all red, then it has no specific identity with respect to color. If the ball is simultaneously here and not here (or neither here nor not here), then it has no identity in regard to location. …
Knowledge is an awareness of the identity of things; logic enjoins us to use our conceptual faculty in ways that recognize that things are what they are, rather than being contradictory or identity-less. Thus, Ayn Rand’s definition: “Logic is the art of non-contradictory identification.” [AS, 1016] …
Aristotle identified not only the rules of syllogistic deduction and the Law of Non-Contradiction as the standard of logic but also the fact that sensory perception is the self-evident base and court of final appeal for all conceptual conclusions. His achievements in logic lay dormant for many centuries, but their recovery almost a millennium later led to the Renaissance and the Scientific Revolution. [Randall, 1940]
As fundamental and pathbreaking as Aristotle’s work was, it is incomplete. The principles of logic he formulated pertain to the objects of cognition. But logic must also take into account the nature of the subject of cognition: the nature of the thinker’s mind. Identifying this fact and working out its implications for logic is the achievement of Ayn Rand. [ITOE, ch. 8] She stresses that man’s cognitive equipment has a specific identity, with specific terms of operation, which the principles of logic must reflect: “. . . the rules of cognition must be derived from the nature of existence and the nature, the identity, of his cognitive faculty.” [ITOE, 82]
Knowledge is a mental product. Any product is made by working up the proper materials, following a proper method. Automobiles cannot be made out of bricks; nor can they be made by combining the right materials — steel, glass, plastic — in the wrong way. Likewise, one cannot make knowledge out of errors, vague approximations, wishes, or daydreams. Nor can one make knowledge out of truths improperly combined, as in combining “Men are human beings,” and “Women are human beings,” to conclude, “Men are women.”
Both the right input and the right method of production are required. Identifying the right method of production depends upon identifying the nature of the equipment; a given drill cannot penetrate faster than a certain rate, a given crane cannot lift more than a certain maximum weight. There are equivalent limits on the nature of man’s mental equipment — notably the limit imposed by “the crow epistemology.” …
Man’s cognitive mechanism is what it is and cannot function in contradiction to its nature. The nature of man’s consciousness includes two facts that are central to logic:
Knowledge is based on the data given in perception. From that base one builds new knowledge upon old, in an incremental, step-by-step process. Knowledge is not gained by revelation from on high and it is not gained in huge gulps. Knowledge is built up, in “crow-friendly” steps, from specific perceptual observations.
We start somewhere: 1) where we are in the universe, and 2) with the kind of information our perceptual system is scaled to detect. E.g., an infant lives in a specific location, with sense organs sensitive to a specific range of energy differentials and not others. He can perceive the people and furniture in the room, but not the atoms that compose them and not the ultrasonic frequencies that a dog can hear.
From his first perceptions, the child’s knowledge grows step by step…. At a certain point in his development, he is able to form concepts. From then on, observation and conceptualization reinforce each other, in a spiraling process …
In accordance with the identity of reality and the identity of conceptual consciousness (especially, the two facts stated above), there are two overarching facts about the nature of knowledge that we must adhere to: knowledge is contextual and hierarchical. The two basic injunctions of the Objectivist conception of logic are: hold context and obey hierarchy. [OPAR, ch. 4]
The word “context” suggests by its structure (“con” + “text”) the simplest meaning of “context”: the surrounding text. In reading a given word or phrase, one needs to hold in mind the sentence of which it is a part. Likewise, the sentence is part of a paragraph, section, chapter, etc. Meaning is contextual: without a context, one does not know the right way to interpret an isolated term — e.g., the word “one,” as used earlier in this sentence. …
On the perceptual level, integration occurs automatically … Perceptual awareness is automatically contextual ….
What logic demands is not random association but integration, and to achieve integration requires working to achieve both clarity and precision. Attaining clarity and precision is a precondition of checking for consistency. One cannot check the unclear or the vague for its coherence with the rest of one’s knowledge….
The immediate context of an item is the knowledge directly connecting to it — as the knowledge that Phoenix is in Arizona connects directly to the knowledge that Arizona is a U.S. state. The wider context consists of the things directly related to that, and then the things directly related to them, and so on — such that the full context is the totality of one’s knowledge at a given time.
Since every fact bears some relationship to every other fact, however remote, one must work to integrate one’s knowledge into a non-contradictory whole. Rand’s important summary statement is:
No concept man forms is valid unless he integrates it without contradiction into the total sum of his knowledge. [AS, 1016.]
Doing the work of integration is a multifold process, requiring more than good intentions. One has to understand and follow the rules of the “non-contradictory identification” that constitutes logic. Take: “Tax cuts would stimulate the economy.” Is there any contradiction in that idea? That question cannot be answered by simply posing it. There are myriad questions and subquestions one has to consider in order to deal logically with that issue. The whole science of economics and of its philosophic base, as well as a knowledge of history and of the nature of man, is involved in reaching a logical conclusion.
Rand’s term “context-dropping” names the wider error: ignoring available facts that would alter or contradict one’s conclusion.
A simple form of context-dropping is that involved in irrational behavior. An action is irrational if it stems from evading the long-range consequences. It is context-dropping to function on the basis of a kind of tunnel-vision that restricts one’s range of view to the here and now. … Any action that sacrifices the long-range to the short-range represents context-dropping.
Peikoff recounts a striking example of what context-dropping means. Here is his analysis of Neville Chamberlain’s appeasement of Hitler’s demands regarding Czechoslovakia, at the Munich conference in 1938:
Mr. Chamberlain treated Hitler’s demand as an isolated fact to be dealt with by an isolated response; to do this, he had to drop an immense amount of knowledge. He did not relate Hitler’s demand to the knowledge already gained about the nature of Nazism; he did not ask for causes. He did not relate the demand to his knowledge of similar demands voiced by aggressor nations and even local bullies throughout history; he did not ask for principles. He did not relate his own policy to mankind’s knowledge of the results of appeasement; despite ample indications, he did not ask whether his capitulation, besides satisfying Hitler, would also embolden him, increase his resources, hearten his allies, undermine his opponents, and thus achieve the opposite of its stated purpose. Chamberlain was not concerned with any aspect of a complex situation beyond the single point he chose to consider in isolation: that he would be removing Hitler’s immediate frustration.
Deeper issues are involved in this example. Chamberlain was proposing a course of action while ignoring the field that defines the principles of proper action, ethics. He did not ask whether his course comported with the virtues of honor, courage, integrity — and, if not, what consequences this portended.
He dropped the fact that foreign-policy decisions, like all human actions, fall within a wider context defined by moral philosophy (and by several other subjects as well). The prime minister wanted “peace at any price.” The price included the evasion of political philosophy, history, psychology, ethics, and more. The result was war. [OPAR, 124–125]
The more one works to integrate, the better one’s mental filing system and the easier further integration becomes. A proper filing system deals in essentials….
One asks: “What is taxation?”, “What are the consequences of taxation on the individual? On government financing?”, “What is the source of economic progress?”, etc.
The effects of tax cuts are part of the science of economics, which is part of the social sciences. Knowing the “tree of knowledge” vastly accelerates the needed integration, allowing for things to be integrated wholesale. …
The fact that knowledge is contextual has crucial implications for the field of epistemology: the standards of judging an idea’s validity must take into account the contextual nature of knowledge. No cognitive standard can require one to have more knowledge than is possible at a given stage of cognitive development. One cannot require that, for instance, in order to attain certainty, one must know everything that could bear upon one’s conclusion — i.e., be omniscient. …
If an idea is supported by observation and integrates with all the knowledge available, that idea is valid. Its validity is not retroactively undone if, as occasionally happens, the idea has to be subsequently qualified, or even rejected, on the basis of new data. Given the context of knowledge that one possessed, the idea was either reached logically or it wasn’t — and that fact about the past situation never changes. The issue is not: “What would one conclude if one were omniscient?” but: “What is the proper conclusion to draw given all the facts available now?” Epistemic standards are prospective, not retrospective.
Contextuality thus has a dual application: 1) one must be consistent with all the knowledge currently available, and 2) a standard cannot require consistency with the as-yet-unknown. “Man cannot know more than he has discovered — and he may not know less than the evidence indicates, if his concepts and definitions are to be objectively valid.” [ITOE, 46] The purpose of standards is to guide one’s present choices.
… Hierarchy is essential to logic, and the anti-hierarchical approach is the one factor most responsible for the confused and chaotic state of today’s intellectual world.
“Hierarchy” as a general term pertains to a number of ways in which things exist in an order of dependency, but the specific meaning of hierarchy I will focus on is: the hierarchy of learning — i.e., the necessary order of acquiring knowledge. This is the kind of dependency that occurs when knowing A is a prerequisite of learning B, as knowing arithmetic is a prerequisite of learning algebra. In this hierarchical relationship, A grounds B. …
## The well-known fallacy of “begging the question”
The need to obey hierarchy has long been recognized in regard to inference. The well-known fallacy of “begging the question” (petitio principii) consists of a hierarchy violation: illicitly using what was to be proved, in the attempt to prove it — i.e., circular reasoning. The textbook example of question-begging is: “There must be a God because the Bible says there is, and I know the Bible is true because it is the word of God.” That is a hierarchy violation: A cannot be established by B, if B has to be established by A.
… obeying the hierarchy is necessary — to form actual concepts. On the higher levels of abstractions, many people form only approximate concepts, words used by imitating the way others use them. Such half-baked, semi-formed concepts cannot be applied accurately to their units. Accordingly, Rand calls them “floating abstractions.”
Floating abstractions are cognitive cripplers. They represent, in Rand’s words, “condensing fog into fog into thicker fog — until the hierarchical structure of concepts breaks down . . . losing all ties to reality.” [ITOE, 76]
Consider the vague, woozy manner in which many people hold and use such higher-level concepts as “love,” “freedom,” and “justice.” Such people are clear on the application of these concepts to a few simple concretes; they know, for example, that a slave held in chains is not free. But, not having gone through the hierarchical steps necessary to attain a full, clear grasp of these concepts, people can use them in bizarre and contradictory ways, resulting in romantic, political, and moral chaos, respectively.
There is a second reason why the hierarchical progression, though necessary, can yet be violated. Even if one did go through all the required steps when first forming a given concept, one may not remember that hierarchy years later. Thus one may not notice the fact that a given proposition uses that concept in a way that violates the conceptual hierarchy, resulting in committing the fallacy of the stolen concept.
The main remedy for both floating abstractions and stolen concepts is the process of definition. “A definition is a statement that identifies the nature of the units subsumed under a concept.” [ITOE, 40]
A definition serves the function of isolating a concept’s units, thus providing the concept with a specific identity….
The definition is analogous to the label on a (physical) file folder. The label indicates, as concisely as possible, what the folder contains — i.e., the nature of the information it stores — just as a concept’s definition allows one to quickly recognize the meaning of the concept — i.e., the nature of the units it integrates.
A definition’s vital function can be performed only if it meets certain requirements. We owe the rules of proper definition to Aristotle. The Aristotelian rules have importance far beyond their role in definition; they illustrate the pattern of conceptual cognition as such: differentiation and integration….
One of Aristotle’s great achievements is the idea of defining concepts by means of genus and differentia. The genus is the wider class containing both the concept’s units and those things from which they are differentiated. E.g., for “triangle,” the genus is “polygon,” which is the wider class that contains triangles, quadrilaterals, pentagons, etc. The differentia is the characteristic(s) that distinguishes the units from the other existents in the genus — here “three-sided.” A “triangle” is a three-sided polygon. (In Objectivist terms, the genus is the class of things having the Conceptual Common Denominator, CCD, along which the units are differentiated from foils; the differentia is a range or category of measurements within that CCD.)
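The genus-plus-differentia structure can be sketched in code; this is a purely illustrative toy under assumed dictionary records for figures, and none of the names come from the text.

```python
# Illustrative toy: a definition modeled as genus + differentia, used to
# test whether a candidate is a unit of the concept "triangle".
def is_triangle(figure: dict) -> bool:
    in_genus = figure.get("kind") == "polygon"   # genus: polygon
    has_differentia = figure.get("sides") == 3   # differentia: three-sided
    return in_genus and has_differentia

print(is_triangle({"kind": "polygon", "sides": 3}))  # True
print(is_triangle({"kind": "polygon", "sides": 4}))  # False: a quadrilateral
print(is_triangle({"kind": "circle", "sides": 0}))   # False: outside the genus
```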
The definition must consist of a genus and a differentia. …
The genus-differentia structure of a definition helps one to, in effect, quickly re-form the concept, and thus recapture the concept’s meaning. Rand writes:
The rules of correct definition are derived from the process of concept-formation. The units of a concept were differentiated — by means of a distinguishing characteristic(s) — from other existents possessing a commensurable characteristic, a “Conceptual Common Denominator.” A definition follows the same principle: it specifies the distinguishing characteristic(s) of the units, and indicates the category of existents from which they were differentiated.
The distinguishing characteristic(s) of the units becomes the differentia of the concept’s definition; the existents possessing a “Conceptual Common Denominator” become the genus.
Thus a definition complies with the two essential functions of consciousness: differentiation and integration. The differentia isolates the units of a concept from all other existents; the genus indicates their connection to a wider group of existents. [ITOE, 41]
A proper definition must have both a genus and a differentia. When the genus is omitted, a mistake often made by uncritical thinkers, the result is an unorganizable approximation, as in “An ‘automobile’ is what you drive.” A proper definition of “automobile” must contain its genus: “motor vehicle.”
One popular form of omitting the genus is the barbarous use of “is when”: “A ‘crime’ is when someone violates another’s rights.” A proper definition would be: “A ‘crime’ is an action violating another’s rights.” …
A proper definition must not only contain both a genus and a differentia; it also may not include any extraneous element. A definition consists only of the genus and differentia.
“Man is a rational animal.” Not: “Man is a rational animal, like my father.”
The requirement of fundamentality (that a definition state the fundamental distinguishing characteristic) is a causal issue: the fundamental characteristic is the one that causes and explains the greatest number of other characteristics. Rand writes:
When a given group of existents has more than one characteristic distinguishing it from other existents, man must observe the relationships among these various characteristics and discover the one on which all the others (or the greatest number of others) depend, i.e., the fundamental characteristic without which the others would not be possible. This fundamental characteristic is the essential distinguishing characteristic of the existents involved, and the proper defining characteristic of the concept.
Metaphysically, a fundamental characteristic is that distinctive characteristic which makes the greatest number of others possible; epistemologically, it is the one that explains the greatest number of others.
For instance, one could observe that man is the only animal who speaks English, wears wristwatches, flies airplanes, manufactures lipstick, studies geometry, reads newspapers, writes poems, darns socks, etc. None of these is an essential characteristic: none of them explains the others; none of them applies to all men; omit any or all of them, assume a man who has never done any of these things, and he will still be a man. But observe that all these activities (and innumerable others) require a conceptual grasp of reality, that an animal would not be able to understand them, that they are the expressions and consequences of man’s rational faculty, that an organism without that faculty would not be a man — and you will know why man’s rational faculty is his essential distinguishing and defining characteristic. [ITOE, 45–46]
The definition must consist of a genus and a differentia. A “triangle” is a three-sided polygon.
The genus is the wider class containing both the concept’s units and those things from which they are differentiated. [polygon] The differentia is the characteristic(s) that distinguishes the units from the other existents in the genus — here “three-sided.”
The definition must specify a group of referents in reality.
Ways in which a definition can fail and thus violate this rule: Synonymy, Circularity, Vagueness, Metaphor
The definition must have the same scope as the concept that it defines.
The issue here is truth. A definition that is too broad is false qua definition. Ex: “A ‘table’ is an item of furniture with a flat top surface” implies that beds are tables. A definition that is too narrow implies that some things that are units are not units. For instance, “A ‘table’ is an item of furniture with four legs and a flat, level surface for supporting smaller objects” implies that six-legged tables are not tables. A proper differentia must characterize all and only the units within the genus.
The definition must state the fundamental distinguishing characteristic(s): “Man is the rational animal.”
“This fundamental characteristic is the essential distinguishing characteristic of the existents involved, and the proper defining characteristic of the concept.” (Ayn Rand)
The definition must be a single, economical sentence.
A definition is a condensation of knowledge, knowledge of a concept’s units and of their place in the entire structure of one’s knowledge. Knowledge is not frozen in a static sum; it grows in the individual’s development, as he advances from childhood to educated adult, and it grows with the progress of science. Definitions, to function as optimal condensers of knowledge, must expand to keep abreast of an expanding context of knowledge.
Definitions, in other words, are contextual. They are established in a given context of knowledge, and they are to be judged by reference to the context of knowledge in which they are used. A broadened or deepened knowledge of the units requires a corresponding change in the concept’s definition.
Though we live in a “non-judgmental” age, the fact is that the concepts one forms are either right or wrong: a given conceptualization is either pro-cognition or anti-cognition. Rand writes:
There are such things as invalid concepts, i.e., words that represent attempts to integrate errors, contradictions or false propositions, such as concepts originating in mysticism — or words without specific definitions, without referents, which can mean anything to anyone. . . . Invalid concepts appear occasionally in men’s languages, but are usually — though not necessarily — shortlived, since they lead to cognitive dead-ends. An invalid concept invalidates every proposition or process of thought in which it is used as a cognitive assertion. [ITOE, 49]
The two basic mistakes that produce an invalid concept are: 1) making a concept for non-existent units, or 2) making a concept that uses an invalid standard, resulting in misclassifying units.
Concepts lacking units are those that attempt to refer to the contradictory or to the arbitrary. The test for the validity of a concept here is whether or not it can be reduced to perceptual reality. A made-up example of a concept that does not reflect facts would be “biangle”: a two-sided polygon — which is impossible. …
… invalid concepts “originating in mysticism,” in Rand’s phrase above. These would include “god,” “fairy,” “angel,” “devil,” “afterlife,” etc. … Some examples, of many, are: Plato’s “Forms,” Spinoza’s “intuition,” Leibniz’s “monads,” Kant’s “noumena,” Hegel’s “dialectic,” Marx’s “forces of production.” All of these terms, and many more, are introduced without any logical derivation from perceptual reality. …
Concepts in this category deal with actually existing phenomena, but mis-organize them, classifying things in a way that is confusing, misleading, or otherwise anti-cognitive. …
To validate a concept and establish its proper definition, one should ask oneself: What facts of reality give rise to the need for such a concept?
If no such facts are ascertainable, the concept cannot be considered valid, and one should not attempt to use it cognitively. …
Rand provides an intriguing example of this process in regard to the concept “justice”:
For instance: what fact of reality gave rise to the concept “justice”?
The fact that man must draw conclusions about the things, people and events around him, i.e., must judge and evaluate them.
Is his judgment automatically right? No. What causes his judgment to be wrong? The lack of sufficient evidence, or his evasion of the evidence, or his inclusion of considerations other than the facts of the case. How, then, is he to arrive at the right judgment? By basing it exclusively on the factual evidence and by considering all the relevant evidence available.
But isn’t this a description of “objectivity”? Yes, “objective judgment” is one of the wider categories to which the concept “justice” belongs.
What distinguishes “justice” from other instances of objective judgment? When one evaluates the nature or actions of inanimate objects, the criterion of judgment is determined by the particular purpose for which one evaluates them. But how does one determine a criterion for evaluating the character and actions of men, in view of the fact that men possess the faculty of volition? What science can provide an objective criterion of evaluation in regard to volitional matters? Ethics.
Now, do I need a concept to designate the act of judging a man’s character and/or actions exclusively on the basis of all the factual evidence available, and of evaluating it by means of an objective moral criterion? Yes. That concept is “justice.” [ITOE, 51]
Forming a concept is not a free lunch; there are “overhead” costs involved in storing it, carrying it forward, updating it, etc. One should form a new concept only when the cognitive gains of doing so exceed the cost. If the concept would not be thus cognitively profitable, forming and using it constitutes a waste of mental resources, complicating one’s mental filing system. Needless multiplication of concepts results in decreased cognitive efficacy.
In recognition of this fact, Ayn Rand formulated an epistemological version of Occam’s Razor, which has come to be called “Rand’s Razor”:
. . . concepts are not to be multiplied beyond necessity — the corollary of which is: nor are they to be integrated in disregard of necessity. [ITOE, 72] …
“The descriptive complexity of a given group of existents, the frequency of their use, and the requirements of cognition (of further study) are the main reasons for the formation of new concepts. Of these reasons, the requirements of cognition are the paramount one.”
There is a great deal of latitude, on the periphery of man’s conceptual vocabulary, a broad area where the choice is optional, but in regard to certain central categories of existents the formation of concepts is mandatory. This includes such categories as: (a) the perceptual concretes with which men deal daily . . . (b) new discoveries of science; (c) new man-made objects which differ in their essential characteristics from the previously known objects (e.g., “television”); (d) complex human relationships involving combinations of physical and psychological behavior (e.g., “marriage,” “law,” “justice”). [ITOE, 70]
Concepts that violate Rand’s Razor are categorizable as either “false division” or “false integration.” …
… consider a real-life, political example of false division. Since the 1930s, the political spectrum has been conventionally divided between two poles: fascism as the extreme “right” and communism as the extreme “left.” This is a lethal false division.
There are no fundamental differences between fascism and communism. Both are forms of dictatorship. Both are based on the theory and practice of collectivism. They differ in their outward forms — fascism demands the submission of the individual to a national and/or racial collective, communism demands the submission of the individual to an economic collective — but both deny completely the rights of the individual, requiring him to live for the state. …
For instance, Bertrand Russell defines “freedom” as: “the absence of obstacles to the realization of one’s desires.” Consider some of the known facts — the wider context — that this definition ignores. To a 17th-century man in Northern Italy seeking to travel into Switzerland, the Alps are an obstacle to the realization of his desires. Do the Alps deprive him of his freedom? To a man who wishes to have sex with a given woman, her unwillingness is an obstacle to the realization of his desires. Does her refusal deprive him of his freedom? What about her freedom, which would be violated by his act? And what about historical conditions? In contemporary America, with its unprecedented standard of living, there are — in one sense — far fewer unrealized desires than there were in 1800. Does that mean that individuals in contemporary America have more freedom than Americans living in 1800?
Russell’s definition is centered on a man’s desires, using their frustration or fulfillment as the CCD. But a desire results from evaluations that may be rational or irrational, correct or incorrect. Desires may even come from evaluations that flout metaphysically given facts: recall Dostoyevsky’s “underground man” who did not like the fact that two plus two equals four (“The Formula ‘two and two make five’ is not without its attractions.”) But Russell’s definition treats desires as irreducible givens, when in fact they are effects, not primaries. His definition converts “freedom,” a crucial term of political philosophy, into a “package-deal.”
A definition of freedom grounded in all the facts I have mentioned would result in a concept quite different from Russell’s; it would be a concept defined in terms of the absence of physical coercion by others. But Russell’s concept treats human acts of coercion as if they were essentially the same as inanimate obstacles, the laws of nature, and even the laws of logic (which present the “obstacle” that you can’t have your cake and eat it, too). Russell’s definition converts “freedom” into a massive package-deal, obscuring the life-and-death need to be free from coercion by human beings, who act by deliberation and choice, not necessity.
Several neologisms that have been injected into contemporary political discussion exhibit the same kind of anti-cognitive focus on superficials at the expense of fundamentals. The arch-example, as Rand explains, is “extremism.”
. . . “extremism” is a term which, standing by itself, has no meaning. The concept of “extreme” denotes a relation, a measurement, a degree. The dictionary gives the following definitions: “Extreme, adj. — 1. of a character or kind farthest removed from the ordinary or average. 2. utmost or exceedingly great in degree.”
It is obvious that the first question one has to ask, before using that term, is: a degree — of what?
To answer: “Of anything!” and to proclaim that any extreme is evil because it is an extreme — to hold the degree of a characteristic, regardless of its nature, as evil — is an absurdity (any garbled Aristotelianism to the contrary notwithstanding). Measurements, as such, have no value-significance — and acquire it only from the nature of that which is being measured. [CUI, 177–178]
In politics, the issue that is fundamental is: freedom vs. state coercion. But the term “extremism” puts together, as if it were one basic phenomenon, positions that are “extremely” pro-freedom and “extremely” anti-freedom. It packages into one “file” and treats as equivalent George Washington and Adolf Hitler. Washington was “extremely,” i.e., radically and consistently, dedicated to establishing a nation based on the free exercise of the inalienable rights of the individual. Hitler was “extremely” dedicated to eradicating individual freedom and establishing a totalitarian state based on racial collectivism. Packaging them together under one concept is a crime against logic (and morality). “Extremist” whitewashes Hitler and Nazism by associating them with Washington, and it blackens Washington and liberty by associating them with Hitler.
The stolen concept fallacy is a form of hierarchy inversion: it consists of the attempt to use a derivative concept in a way that contradicts its own presuppositions — i.e., that negates or ignores a prior concept that is required in order to grasp and use the concept in question.
Suppose someone were to announce, “I know there are desserts, but there is no such thing as a meal.” Here the stolen concept is “dessert”: a dessert is a dish that follows a meal. The concept that is said to be “stolen” is the one that is used without a logical right to that use, in analogy with using someone else’s property without a legal right to do so.
When a concept is “stolen,” it is being used in a way that severs its connection to perceptual reality and thus deprives the concept, as used, of meaning. “Dessert” could not have its present meaning if meals did not exist. The same type of contradiction is contained in each of the following examples.
I reject the existence of consciousness. “Rejection” is an action of consciousness.
Logic is a Western prejudice. A “prejudice” is that which is pre-judged, in advance of logical evidence.
Life is all a dream. A “dream” is meaningful only in distinction to wakeful perception.
Property is theft. “Theft” is forcibly taking property from its rightful owner.
The laws of logic are arbitrary. The “arbitrary” is distinguished from the logical.
You can’t prove reason is valid. “Proof” can be grasped only as a certain process of reason.
Physics is defined as: what physicists do. “Physicist” can be grasped only in relation to “physics.”
The universe is moving. “Motion” is change of place; “place” is the surrounding entities; the universe is everything; nothing surrounds it, and it has no place.
In the above, the stolen concepts — the concepts rendered meaningless — are, in order, “reject,” “prejudice,” “dream,” “theft,” “arbitrary,” “prove,” “physicist,” and “moving.” …
Concept-stealing is arguably the most frequent and most destructive fallacy in the history of philosophy. The mother of all stolen concepts, one rampant in post-Cartesian philosophy, is the primacy of consciousness: the attempt to use concepts of consciousness while denying or ignoring that consciousness is consciousness of something, something that exists. ….
Extrospection must precede introspection. Before extrospection, there is no consciousness to introspect. First there is awareness of something, say of a tomato; only then can one be aware of being aware of the tomato. …
… consciousness precedes self-consciousness.
Truth, as Rand characterizes it, is “the recognition of reality.” [AS, 1017] Although a true proposition is often described as one that “corresponds to” the facts, truth actually pertains not to some match-up but to an awareness, a mental grasp, of the facts. …
Part of that context is the hierarchically prior items grounding the proposition. Awareness is not a series of isolated responses to isolated stimuli; awareness is a global activity of differentiating and integrating. Propositions grow out of a context, and their meaning depends upon that context. No proposition exists or has meaning out of context. If these hierarchically prior items were different, the proposition, though expressed in the same words, would have a different meaning — i.e., it would be a different proposition.
As an illustration, take the proposition, “Lying is wrong.” Its meaning depends on the content of the ethics and metaphysics that informs that statement. For a religionist, “wrong” means “against God’s commandments”; for Rand, “wrong” means “destructive of man’s life on this earth.” And for the religionist, the basis of the moral judgment is the unaccountable will of a supernatural being; for Rand, the basis is the natural causal order. [PWNI, ch. 10] A Humean, Kantian, or Existentialist philosophy would fuse still other ideas into “Lying is wrong,” so that the same words would actually express very different propositions. …
A false proposition is one that contradicts something. The contradiction may be internal, as in “This circle is square,” or the contradiction may be to other knowledge, as in “Pears grow on vines.” A proposition is false if it contradicts any fact.
An infrequently noticed form of contradiction occurs in philosophical statements that commit what Rand calls “the fallacy of self-exclusion.” This fallacy is committed when the act of asserting a proposition contradicts its own content (thus the speaker is illicitly excluding his own utterance from what he is claiming). For example, “There are no absolutes” — asserted as an absolute. (Even if one says, “Probably, there are no absolutes,” that is being asserted as an absolute.) Or, “Man can know nothing for certain” is asserted as something known for certain. …
“I do not exist” exhibits the fallacy of self-exclusion no matter who says it, but “Harry Binswanger does not exist” is a self-exclusion only if I say it. “There is no such thing as the English language,” commits the fallacy because it is itself stated in English; but “There is no such thing as the French language,” though obviously false, does not commit this particular fallacy.
The manner of making an assertion can contradict the assertion’s content. If someone says in an angry, denunciatory tone: “That’s a value-judgment!” he is making a value-judgment himself. (In contrast, the mere observation “That’s a value-judgment,” without an implied moral condemnation, commits no fallacy; it might even be said as a compliment.) A statement is false when it contradicts any fact, including facts about its own utterance.
If someone is asserting an idea, and its basis in reality is not obvious, the first thing one should do is to ask him directly: “Why do you say that; what’s your evidence?” — a question asked too rarely. In validating a belief of one’s own, a logical first step is to ask oneself: “How did I arrive at this belief, by what steps?” — not as autobiography, but as a lead to identifying the steps by which one could validly reach this conclusion. Thus, one attempts to “reverse engineer” the idea, seeking to determine if it can be reached logically from prior knowledge.
To prove an idea, one needs to link it back to perceived fact. The Objectivist term for this process of going back “down” the hierarchy to prove an idea is: reduction.
The responsibility imposed by the fact that knowledge is hierarchical is: the need of reduction. . . . Reduction is the means of connecting an advanced knowledge to reality by traveling backward through the hierarchical structure involved, i.e., in the reverse order of that required to reach the knowledge. “Reduction” is the process of identifying in logical sequence the intermediate steps that relate a cognitive item to perceptual data. Since there are options in the details of a learning process, one need not always retrace the steps one initially happened to take. What one must retrace is the essential logical structure. [OPAR, 132–133]
Cognitive errors result from a defect in the thought process or in its input material. More concretely, cognitive errors result from one of three causes: 1) illogic, 2) false premises, or 3) incomplete information.
Illogic. When one departs from logic, the conclusion one reaches does not follow from the evidence and premises used…. In very simple, one-step reasoning, error is not possible. No one can make a mistake in adding 2 + 1, or in combining “That’s ice” and “Ice is cold.” But logical missteps are not uncommon in complex calculations and in the complex, multi-step reasoning used in everyday decision-making.
False premises. Falsehood used as “input” to the process of inference cannot result in a grasp of fact. Truth cannot be built upon error. … So, errors resulting from false premises usually reduce to the first cause of error: illogical processing (remembering that the standards for determining what is logical must be based on what is possible, not on an impossible omniscience). These two causes of error reduce to: current illogic and past illogic resulting in false ideas now being used as premises.
Incomplete information. Although this occurs very infrequently, there are cases in which, despite being flawlessly logical, one reaches a false conclusion because the data available were insufficient and, owing to their similarity to other things known, apt to mislead. The simplest kind of case is erring in identifying perceptual concretes — e.g., thinking the distant hills are blue, that the straight stick semi-submerged in water is bent, or taking a man’s twin to be him.
One doesn’t need a reason not to consider something a fact. One doesn’t need a reason not to entertain something as a hypothesis. The reverse is true: The burden of proof is on him who claims knowledge.
Knowledge is an effect of the operation of certain causes. For the effect to be present, the causes must have been present. Ignorance, not knowledge, is the default condition. Thus, a claim to have achieved knowledge (even knowledge of possibility) must be supported by showing that the cause was present and operative. The cause is awareness — direct perception of the thing or awareness of evidence logically supporting it. In the absence of such awareness, the claim to know is arbitrary and thus is to be dismissed. …
To know something, one must have used the means of gaining knowledge: evidence. That is all that the Burden of Proof Principle states. When there is no evidence for “S is P,” there is no awareness of S being P, nor of anything indicating that S is P. That means there are no grounds for hypothesizing that S is P. …
The first step in judging the validity of an idea is to identify its source: is it based on fact or fantasy? If the idea is evidence-based, one can check the interpretation placed on that evidence; but that which is asserted arbitrarily — proceeding from “what if?” or “why not?” or “it may well be” — offers no evidence to be interpreted. Such ideas are not in the realm of logic but of make-believe.
When a mental product results from the deliberate application of logic to evidence, it has a unique status: it is objective. Ayn Rand provides a new understanding of what objectivity is and requires.
In its metaphysical usage, “objective” simply means: existing independently of consciousness. But now we are concerned with “objective” in its epistemological meaning — i.e., the objectivity of a mental process or product. What is it for a mental process or product to be “objective”? To properly understand the concept of “objectivity,” we must pose Rand’s key question: what facts of reality give rise to the need for such a concept?
Among many such facts, three stand out:
Do we need a concept that distinguishes between those cognitive activities that are deliberately guided by logic and those that are not? That is, do we need the concept of objectivity? Yes, because processes guided by logic are fundamentally different from those that are not. Only if one knows and consciously applies logic can one warrantedly claim to have knowledge as opposed to mere belief. Only by reference to logic can one have a standard of certainty.
A theme running throughout this book is that knowledge is not an end in itself, but a means of acting successfully in the world.
At the same time, however, I have warned against Pragmatism, a philosophy holding that practical success requires rejecting absolutes, certainty, and, above all, principles.
Today, to describe someone as a “pragmatist” is considered to be paying him a compliment. Anyone who adheres to principles is attacked as being an “ideologue.” But, in fact, principles offer the only guide to practical success….
Fundamentality pertains to a certain kind of hierarchical order. …
Fundamentality refers to causal sequences. For example, the military chain of command refers to who gives orders to whom, and the Commander in Chief is the fundamental of the military hierarchy. …
The existence of this kind of ramified set of relationships, stemming from one root cause, is the fact that gives rise to the need for the concept of a “fundamental.” As a preliminary definition, a “fundamental” is a causal factor on which a multi-level, branching series of effects depends. The dependency here is causal: the fundamental is a necessary condition — a sine qua non — of the derivatives’ occurrence. ….
Knowing fundamentals is a source of immense cognitive power. That power results from the unit-economy provided by a knowledge of fundamentals. Since a fundamental causes and is expressed in everything in the domain, it is the one factor to be held in mind when dealing with anything in that domain. …
Automatization is essential to building new knowledge on old, because of, once again, “the crow epistemology.” Rand explains the process:
. . . all learning involves a process of automatizing, i.e., of first acquiring knowledge by fully conscious, focused attention and observation, then of establishing mental connections which make that knowledge automatic (instantly available as a context), thus freeing man’s mind to pursue further, more complex knowledge. [ITOE, 65]
A “principle” is a fundamental generalization that serves as a standard of judgment in a given domain. …
We need standards to guide our thinking, including the thinking devoted to deciding what to do existentially. …
Principles are differentiated from other action-guiding generalizations, such as rules of thumb or statements of “good policy.” A principle, by identifying a fundamental cause, informs us of requirements that are absolute: one cannot have effects without their causes. …
One must keep in mind that the issue of fundamentality pertains to a specified domain (and a given context of knowledge). …
The need for principles is psycho-epistemological: principles provide a unit-economical, long-range view of consequences. …
By spotlighting root causes, principles make one aware of a long train of consequences — not just the immediate effects, but the sum across a life-time. Principles are thus indispensable for acting long-range….
To reach a principle, as we have seen, is to grasp a general type of root cause, a factor explaining a whole “tree” of derivatives — as the principle of identity (A is A) underlies and explains the rules of valid deduction, valid induction, proper definition. Likewise, the principle of individual rights identifies a fundamental requirement of moral dealings with others, whether on a personal, social, or political level.
Principles enable the clear, simple cases to shed light on the obscure, difficult ones.
This is the answer to those philosophers who scorn principles as being “tautologies” or “truisms.” These philosophers say, for instance, that the Law of Non-Contradiction is “empty,” citing the fact that we gain no new information from being told “It cannot be raining and not raining at the same place.” But, in fact, holding in mind the principle of non-contradiction reminds one to check for non-obvious inconsistencies; it directs one to work to integrate every conclusion into the full context.
It is from simple cases like raining vs. non-raining that we draw the lesson: check for non-obvious contradictions. E.g., there is a non-obvious contradiction in an economy having both full employment and a minimum-wage law. (Minimum wage laws mean that a man whose services are worth less to an employer than the mandated minimum wage cannot be employed.)
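The inconsistency can be put in simple arithmetic terms. The following is a minimal sketch using illustrative symbols of my own (they are not in the text): let w_min be the mandated hourly minimum wage and p the hourly value a given worker’s services produce for an employer.

```latex
% Illustrative symbols, not from the text:
%   w_min : legally mandated minimum hourly wage
%   p     : hourly value of a given worker's services to an employer
% If p < w_min, every hour of employment costs the employer (w_min - p) > 0
% more than it returns, so the worker is not hired.
\begin{align*}
  p < w_{\min}
  \;\Rightarrow\;
  w_{\min} - p > 0
  \;\Rightarrow\;
  \text{the worker goes unhired.}
\end{align*}
% Hence literal full employment and a binding minimum-wage law
% (one under which some workers' p falls below w_min) cannot both hold.
```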
Or, take the idea that man, as a “sinful” being, cannot avoid immorality; what is unavoidable is not subject to moral judgment: morality exists to judge choices. Thus, the idea of unavoidable sin implies a contradiction. …
## Penalty for violating a principle
Consider a second, and deeper, penalty for violating a principle. There is a logic to principles, and a logic to what happens when one acts against them. In acting against a principle, one faces consequences not only in regard to the case at hand: one is also implicitly endorsing an opposite principle and beginning to establish it in one’s soul. For instance, if one tells a “white lie” to spare a friend’s feelings, one is endorsing the (false) principle, “Avoiding negative feelings is more important than facing reality.” One is also endorsing certain principles about the nature of friendship, such as that it is based not on mutual esteem but on pity and shared weaknesses.
A principle identifies an action that is required by the facts at hand. To violate a principle is to act as if what is required were not required — a contradiction. The oft-heard excuse “Just this once” means: “It’s safe to accept just this one contradiction.” But accepting a contradiction undercuts the whole structure of one’s knowledge. It forces a puzzle-piece into a space it does not fit, spoiling the overall picture (and leaving no place to put the right piece). …
To see the cognitive consequences of accepting a contradiction, consider the simplest possible case: accepting a contradiction in arithmetic. Let’s take a hard case: a “small” contradiction not at the base of arithmetic, but further down the line. Suppose one accepts the contradictory idea that 14 = 15. Can’t one still have arithmetic?
No, because arithmetic is an integrated whole. Consider: if 14 = 15, then what is the result of 15 − 14? Is it 1? Is it 0? There’s no way to know. What is 14 + 14? It could be 28, 29, or 30. …
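To make the spread of the contradiction concrete, here is a minimal worked derivation (my own illustration, not the author’s) showing how accepting 14 = 15 collapses the rest of arithmetic:

```latex
% Start from the accepted "small" contradiction:
%   14 = 15
% Subtract 14 from both sides to get 0 = 1.
% Then, for any two numbers a and b:
%   a = a + (b - a) * 0 = a + (b - a) * 1 = b
% Every number now "equals" every other, so no question
% (15 - 14 = ?, 14 + 14 = ?) has a determinate answer.
\begin{align*}
  14 &= 15 \\
  14 - 14 &= 15 - 14 \\
  0 &= 1 \\
  a &= a + (b - a)\cdot 0 = a + (b - a)\cdot 1 = b
      \quad \text{for any numbers } a, b
\end{align*}
```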
A contradiction, if maintained, paralyzes thought. One can proceed only by abandoning logic and just making up an answer as an arbitrary dictum. Ultimately, the alternative is: adherence to logic or cognitive paralysis.
A vivid concretization of the contextual absolutism of principles is provided by the principle of individual rights. On an objective theory of rights, rights are moral principles. Rights are the application of the principles of morality to man’s dealings with others; they demarcate the individual’s proper sphere of independent action. To say that a man has the right to do X is to say that he should be the one to choose whether or not he does X; no other person or group may force their choices upon him. Rights prescribe freedom by proscribing coercion.
(“A ‘right’ is a moral principle defining and sanctioning a man’s freedom of action in a social context.” [Rand, “Man’s Rights,” VOS, 110])
Principles are contextual. Rights are contextual in that they arise only in civil society; they provide guidance in organizing a proper social-political system under government. …
Within their proper context, principles are absolute. The principle of individual rights starkly illustrates the absolutism of principles: rights exist to define what takes precedence over what in cases of conflict, social or individual. In such cases, rights define the supreme moral consideration — that over which nothing can take precedence. The right to free speech, for example, cannot be superseded by any other ethical or political consideration. If one holds that free speech may be abridged for the sake of “promoting virtue,” preventing “blasphemy,” or for achieving any other “social good,” one has thereby implied that free speech is not a right, but a permission.
In any decision-making process, two competing considerations cannot both be supreme. It’s either/or — either rights are inalienable, unconditional, absolute — or they are not and can be overridden by something else. If the latter, then they are not rights. Just as reason cannot “leave room” for faith, so rights are precisely that which cannot be compromised.
The popular idea of “conflicting rights,” which must be “balanced” against each other, is a contradiction in terms (e.g., the conflict between a homeowner’s property rights and the public’s alleged right to eminent domain). There are certainly cases in which it has not yet been determined which party has the right and which party must yield. But the idea of a conflict among the very principles used to resolve conflicts is incoherent. …
Rights are the principles defining who may act independently, without interference, and who must refrain from interfering. Just as two competing principles cannot each be supreme, so two disputants in a conflict cannot each have the right to have his own way.
Rights are thus absolute — within their proper context.
Philosophic principles have real-world application, as do principles in any other field. People understand that the principles of, say, physics have immense practical value, but they fail to see that philosophy has even more. Indeed, philosophy, in identifying the nature of existence and the rules of cognition, is the base of physics. The formation of any lesser generalization and the recognition of its value depend upon the implicit acceptance of its philosophic base.
For instance, the generalization “All matter attracts other matter” could not have been discovered by consulting sacred texts or waiting for revelation; nor could it be reached on the premise that the world is governed not by natural law but by the decree of an omnipotent super-spirit; nor could it have any value to those who hold that this life is God’s punishment for sin — all of which is why this generalization was not reached in the one thousand years ruled by religious mysticism. …
Philosophic principles, like those of physics, have real-world application — deductive application. One’s actions are shaped by one’s philosophic conclusions — implicit or explicit, rational or irrational, correct or mistaken. The influence of one’s worldview on concrete issues and decisions is unavoidable, because the worldview is automatized. (Indeed, it forms an integral part of who one is.) Just as the investor who has automatized the importance of diversification looks at his investments from that perspective, so one who has automatized the idea that what’s true for you is not necessarily true for me will view disputes between men from that perspective. …
This cultural contempt for principles is relatively new. America’s Founding Fathers were men of principle. (Note that they rose in rebellion against very light taxes on stamps and tea.) Opposition to principles originated with philosophers, especially the Pragmatists William James and John Dewey. They influenced the educators (Dewey launched “progressive education”) and other opinion-leaders, who then spread the anti-principle attitude across the culture. The average man is not a philosophic innovator; he gets his philosophic framework from the intellectual leadership, as it filters down to the educators, editorial writers, artists, journalists, etc. …
Using the map analogy, Rand summarizes the anti-principle attitude:
The present state of our culture may be gauged by the extent to which principles have vanished from public discussion, reducing our cultural atmosphere to [one] . . . that haggles over trivial concretes, while betraying all its major values, selling out its future for some spurious advantage of the moment . . . and by panicky appeals to “practicality.”
But there is nothing as impractical as a so-called “practical” man. His view of practicality can best be illustrated as follows: if you want to drive from New York to Los Angeles, it is “impractical” and “idealistic” to consult a map and to select the best way to get there; you will get there much faster if you just start out driving at random, turning (or cutting) any corner, taking any road in any direction, following nothing but the mood and the weather of the moment. [CUI, 144–145]
Principles are a biological imperative for Homo sapiens. To say to a human being, “Don’t be theoretical” is like saying to a bird, “Don’t fly.”
Principles are the fullest realization of how we know. That is why they are how we survive and prosper.
The actions of one’s body are controlled by the actions of one’s mind: one decides what to do.
A decision, however, is also not a primary: it is the outcome of a decision-making process, and the input to that process is constituted by one’s beliefs and values, specific and general.
One’s beliefs and values are, in turn, the products of earlier processes. The process fashions the product. All conceptual products — all ideas, values, theories, and convictions — are caused and shaped by the processing that one employs in reaching them.
That processing can be performed rationally or irrationally. One can reach conclusions by a conscientious, fact-centered process of thought, or by any irrational substitute, such as emotion-driven leaps in the dark or unthinking absorption of the beliefs and values of others.
Here we have reached the actual primary: the rationality or irrationality of one’s mental processes. It is this that is under one’s direct, volitional control.
This is the understanding of free will originated by Ayn Rand, and hers is the first philosophy to recognize that free will is fundamentally an epistemological issue, that it pertains to conceptual cognition as such.
. . . man is a being of volitional consciousness. Reason does not work automatically; thinking is not a mechanical process; the connections of logic are not made by instinct. The function of your stomach, lungs or heart is automatic; the function of your mind is not. In any hour and issue of your life, you are free to think or to evade that effort. But you are not free to escape from your nature, from the fact that reason is your means of survival — so that for you, who are a human being, the question “to be or not to be” is the question “to think or not to think.” [AS, 1012]
The popular belief that there is a conflict between free will and causality stems from a mistaken conception of the law of causality. The proper view of causality, originated by Plato and Aristotle, recognizes that causality is a relation between the nature of an entity and its actions. An entity of a given kind has the properties it has, which give it certain potentialities and no others. The actions possible to an entity are determined by its identity, by what it is. …
However, as far back as the 17th century, the proper, Platonic-Aristotelian understanding of causality lost favor, and came to be supplanted by an arbitrary construct: the notion that causality concerns “events,” not entities and their actions, and that every event is a necessitated reaction to previous events. Unfortunately, it was Galileo who popularized the new event-to-event view, as historian Wilhelm Windelband notes:
. . . the idea of cause had acquired a completely new significance through Galileo. According to the [preceding] scholastic conception . . . causes were substances or things, while effects, on the other hand, were either their activities or were other substances and things which were held to come about only by such activities: this was the Platonic-Aristotelian conception of the aitia [causes]. Galileo, on the contrary, went back to the idea of the older Greek thinkers who applied the causal relation only to the states — that meant now to the motions of substances — not to the Being [identity] of the substances themselves. Causes are motions, and effects are motions. [Windelband, II, 410]
This notion of causality does lead to determinism. But the event-to-event view is wrong. We do not encounter any such thing as free-floating “events”; actions are actions of entities. … Causality relates entity, identity, and action — not event to event….
Accordingly, the proper understanding of the law of causality is that the actions of an entity are an expression of its identity; the interaction of entities is an expression of the identity of each. What an entity can do is determined by what it is. “The law of causality is the law of identity applied to action.”
Across a lifetime, from early childhood on, the proper use of free will consists of the active approach eloquently concretized by Rand in this passage:
The process of concept-formation does not consist merely of grasping a few simple abstractions, such as “chair,” “table,” “hot,” “cold,” and of learning to speak. It consists of a method of using one’s consciousness, best designated by the term “conceptualizing.” It is not a passive state of registering random impressions. It is an actively sustained process of identifying one’s impressions in conceptual terms, of integrating every event and every observation into a conceptual context, of grasping relationships, differences, similarities in one’s perceptual material and of abstracting them into new concepts, of drawing inferences, of making deductions, of reaching conclusions, of asking new questions and discovering new answers and expanding one’s knowledge into an ever-growing sum. [VOS, 21–22]
…
Socrates said: “The unexamined life is not worth living.” I am adding: The unfocused life is not truly lived. …
I close by revisiting the inspiring passage in which Rand brings to life the meaning of a commitment to reason.
It consists of a method of using one’s consciousness, best designated by the term “conceptualizing.” It is not a passive state of registering random impressions. It is an actively sustained process of identifying one’s impressions in conceptual terms, of integrating every event and every observation into a conceptual context, of grasping relationships, differences, similarities in one’s perceptual material and of abstracting them into new concepts, of drawing inferences, of making deductions, of reaching conclusions, of asking new questions and discovering new answers and expanding one’s knowledge into an ever-growing sum. The faculty that directs this process, the faculty that works by means of concepts, is: reason. [VOS, 21–22]
And that’s . . . how we know.