January 2, 2010


Pure metaphysical speculation which is not based on fact is, to the empiricists, neither relevant nor useful. The only truth, in this philosophy, is that which is mathematically provable or experimentally observable. This truth can be divided into two categories: analytic truths are based on inherent meanings and can be observed through the application of reason, if not experiment; synthetic truths are those facts which are obtained from the experience of reality. Any system of communication must, in order to be meaningful, include some way to represent the truth accurately; any empiricist will tell you that this truth is only valuable and meaningful if it can be considered absolute and provable. In order to be perfectly accurate in representing the truth, language must conform to a certain set of specifications designed to prevent it from wandering into speculation, and under which it becomes possible to ascertain absolutely the truth of any statement. These rules are called formalist semantics. Syntax differs from semantics in that syntax guides the proper formation of elements of a language into statements, whereas semantics consists of the correct association of elements of language with elements of the real world.


In one view, philosophy itself cannot be anything other than logically empirical, because the purpose of philosophy is to elucidate and clarify truth, and this clarification consists of the examination of language to see that it conforms to the concrete facts of reality. The philosopher’s task is to analyse language and untangle the convolutions of common language into the simplicity of logical language. Sengupta quotes Ayer, one of the mainstays of the logical positivism movement, as saying that the philosopher “is not concerned with the physical properties of things. He is concerned only with the way in which we speak about them.” Thus philosophy must be empiricist and formalistic in order to discuss aspects of reality with accuracy and truth.


Nevertheless, one might object that peripheral self-awareness is nowhere to be found in one’s phenomenology. To be sure, the phenomenologists themselves did claim to find it. From Brentano (1874), through Husserl (1928) and Sartre (1937, 1943), to recent work by the so-called Heidelberg School, Smith (1989), and Zahavi (1999), the distinction between reflective and non-reflective self-awareness has been consistently drawn on the European continent. It may be suggested that the distinction thus belaboured in the Phenomenological tradition can be captured in the difference between transitive and intransitive modes of self-consciousness, that is, between being self-conscious of a thought or a percept and self-consciously thinking or perceiving. But a persistent objector could readily profess not to find anything like such peripheral self-awareness in her phenomenology and insist that the phenomenologists themselves have been, in this regard as in others, overly inflationist in their proclamations concerning the actual phenomenology of mental life.

This is a fair objection. But it may unwittingly impose an inordinate burden of proof on the proponent of intransitive self-consciousness. For how would one argue for the very existence of a certain mental phenomenon? We have yet to encounter an effective argument against eliminativism about the propositional attitudes, or about consciousness and qualia, of the sort espoused by Churchland (1984). Even so, we did encounter such an argument above, namely, that there appears to be peripheral awareness of every other sort, and it would be quite odd if the only exception were awareness of oneself. At this point, it is worth trying to explain away the relative intuitive appeal of eliminativism about intransitive self-consciousness, in comparison to, say, eliminativism about the qualitative character of colour experiences.

One factor may simply be that the qualitative character of colour experiences is much more phenomenologically impressive. In this respect, the proponent of intransitive self-consciousness is in a similar position to those philosophers who claim that conscious propositional attitudes have a phenomenal character (Strawson 1994, Horgan and Tienson 2002, Kriegel 2003, 2004). The problem they face is that the phenomenal character of propositional attitudes, if there is any, is clearly less striking than that of colour experiences. But the common tendency to take colour experiences as the gold standard of phenomenology may be theoretically limiting inasmuch as it may set the bar too high. For any other sort of phenomenology is bound to be milder.

Furthermore, special difficulties attach to noticing not just an awareness of another perspective with a previously unrecognized body of knowledge but a radically different way of being-in-the-world. In addition, this different way of being leads naturally to a different mode or practice of inquiry (i.e., the methods of Phenomenological research). This chapter will compare Phenomenological psychology to the more mainstream behavioural and psychoanalytic approaches (Valle, 1989), present the essence of the existential-phenomenological perspective (Valle, King, and Halling, 1989), describe the nature of an emerging transpersonal-phenomenological psychology (Valle, 1995), and present an overview of the transpersonal dimensions or themes emerging from seven recently completed empirical Phenomenological research projects.

Existentialism as the philosophy of being became intimately paired with phenomenology as the philosophy of experience because it is our experience alone that serves as a means or way to inquire about the nature of existence (i.e., what it means to be). Existential-phenomenology as a specific branch or system of philosophy was, therefore, the natural result, with what we have come to know as Phenomenological methods being the manifest, practical form of this inquiry. Existential-phenomenology when applied to experiences of psychological interest became existential-phenomenological psychology and has taken its place within the general context of humanistic or “third force” psychology; it is humanistic psychology that offers an openness to human experience as it presents itself in awareness.

From a historical perspective, the humanistic approach has been both a reaction to and a progression of the world views that constitute mainstream psychology, namely, behavioural-experimental and psychoanalytic psychology. It is in this way that the philosophical bases that underlie both existential-phenomenological and transpersonal ("fourth force") psychology have taken root and grown in this field.

In classic behaviourism, the human individual is regarded as a passive entity whose experience cannot be accurately verified or measured by natural scientific methods. This entity, seen as implicitly separate from its surrounding environment, simply responds or reacts to stimuli that impinge on it from the external physical and social world. Because only that which can be observed with the senses and quantified, and whose qualities and dimensions can be agreed to by more than one observer, is recognized as acceptable evidence, human behaviour (including verbal behaviour) became the focus of psychology.

In a partial response to this situation, the radical behaviourism of Skinner (e.g., 1974) claims to have collapsed this classic behaviour-experience split by regarding thoughts and emotions as subject to the same laws that govern operant conditioning and the roles that stimuli, responses, and reinforcement schedules play within this paradigm. Thoughts and feelings are, simply, behaviours.

In the psychoanalytic perspective, an important difference with behavioural psychology stands out. Experience is recognized not only as an important part of being human but as essential in understanding the adult personality. It is within this context that both Freud’s personal unconscious and Jung's collective unconscious take their places. The human being is, thereby, more whole yet is still treated as a basically passive entity that responds to stimuli from within (e.g., childhood experiences, current emotions, and unconscious motives), rather than the pushes and pulls from without. Whether the analyst speaks of one’s unresolved oral stage issues or the subtle effects of the shadow archetype, the implicit separation of person and world remains unexamined, as does the underlying causal interpretation of all behaviour and experience. Both behavioural and analytic psychology are grounded in an uncritically accepted linear temporal perspective that seeks to explain human nature via the identification of prior causes and subsequent effects.

Only in the existential-phenomenological approach in psychology is the implicitly accepted causal way of being seen as only one of many ways human beings can experience themselves and the world. More specifically, our being presents itself to awareness as a being-in-the-world in which the human individual and his or her surrounding environment are regarded as inextricably intertwined. The person and world are said to co-constitute one another. One has no meaning when regarded independently of the other. Although the world is still regarded as essentially different from the person in kind, the human being, with his or her full experiential depth, is seen as an active agent who makes choices within a given external situation (i.e., human freedom always presents itself as a situated freedom). Other concepts coming from existential-phenomenological psychology include the prereflective, lived structure, the life-world, and intentionality. All these represent aspects or facets of the deeper dimensions of human being and human capacity.

The prereflective level of awareness is central to understanding the nature of Phenomenological research methodology. Reflective, conceptual experience is regarded as literally a “reflection” of a preconceptual and, therefore, prelanguaged, foundational, bodily knowing that exists "as lived" before or prior to any cognitive manifestation of this purely felt-sense. Consider, for example, the way a sonata exists or lives in the hands of a performing concert pianist. If the pianist begins to think about which note to play next, the style and power of the performance is likely to suffer noticeably.

This prereflective knowing is present as the ground of any meaningful (meaning-full) human experience and exists in this way, not as a random, chaotic inner stream of subtle senses or impressions but as a prereflective structure. This embodied structure or essence exists as an aspect or a dimension of each individual’s Lebenswelt or life-world and emerges at the level of reflective awareness as meaning. Meaning, then, is regarded by the Phenomenological psychologist as the manifestation in conscious, reflective awareness of the underlying prereflective structure of the particular experience being addressed. In this sense, the purpose of any empirical Phenomenological research project is to articulate the underlying lived structure of any meaningful experience on the level of conceptual awareness. In this way, understanding for its own sake is the purpose of Phenomenological research. The results of such an investigation usually take the form of basic constituents (essential elements) that collectively represent the structure or essence of the experience for that study. They are the notes that compose the melody of the experience being investigated.

Possible topics for a Phenomenological study include, therefore, any meaningful human experience that can be articulated in our everyday language such that a reasonable number of individuals would recognize and acknowledge the experience being described (e.g., “being anxious,” “really feeling understood,” “forgiving another,” “learning,” and “feeling ashamed”). These many experiences constitute, in a real sense, the fabric of our existence as experienced. In this way, Phenomenological psychology with its attendant research methods has been, to date, a primarily existential-phenomenological psychology. From this perspective, reflective awareness and prereflective awareness are essential elements or dimensions of human being as a being-in-the-world. They co-constitute one another. One cannot be fully understood without reference to the other. They are truly two sides of the same coin.

Some experiences and certain types of awareness, however, do not seem to be captured or illuminated by Phenomenological reflections on descriptions of our conceptually recognized experiences and/or our prereflective felt-sense of things. Often referred to as transpersonal, transcendent, sacred, or spiritual experience, these types of awareness are not really experience in the way we normally use the word, nor are they the same as our prereflective sensibilities. The existential-phenomenological notion of intentionality is helpful in understanding this distinction.

The words transpersonal, transcendent, sacred, and spiritual represent subtle distinctions among themselves. For example, “transpersonal” currently refers to any experience that is trans-egoic, including the archetypal realities of Jung’s collective unconscious as well as radical transcendent awareness. Although notions such as the collective unconscious refer to states of mind that are deeper than or beyond our normal ego consciousness, “transcendent” refers to a completely sovereign or soul awareness without the slightest inclination to define itself as anything outside itself, including contents of the mind, either conscious or unconscious, personal or collective (i.e., awareness that is not only trans-egoic but trans-mind). This distinction between transpersonal and transcendent awareness may lead to the emergence of a fifth force or more purely spiritual psychology.

In existential-phenomenological psychology, intentionality refers to the nature or essence of consciousness as it presents itself. Consciousness is said to be intentional, meaning that consciousness always has an object, whether that intended object be a physical object, a person, or an idea or a feeling. Consciousness is always a “consciousness of” something that is not consciousness itself. This particular way of defining or describing intentionality directly implies the deep, implicit interrelatedness between the perceiver and that which is perceived that characterizes consciousness in this approach. This inseparability enables us, through disciplined reflection, to illumine the meaning that was previously implicit and unlanguaged for us in the situation as it was lived.

Transcendent awareness, on the other hand, seems somehow “prior to” this reflective-prereflective realm, presenting itself as more of a space or ground from which our more common experience and felt-sense emerge. This space or context does, however, present itself in awareness, and is, thereby, known to the one who is experiencing. Moreover, implicit in this awareness is the direct and undeniable realization that this foundational space is not of the phenomenal realm of the perceiver and the perceived. Rather, it is a noumenal, unitive space within or from which both intentional consciousness and phenomenal experience manifest. From reflections on my own experience (Valle, 1989), I offer the following six qualities or characteristics of transpersonal/transcendent awareness (often recognized in the practice of meditation):

(1) There is a deep stillness and peace that I sense as both existing as itself and, at the same time, as “behind” all thoughts, emotions, or felt-senses (bodily or otherwise) that might arise or crystallize in or from this stillness. “I” experience this as an “isness” or “sameness” rather than a state of whatness or “I am this” or “that.” This stillness is, by its nature, neither active nor in the body and is, in this way, prior to both the prereflective and reflective levels of awareness.

(2) There is an all-pervading aura or feeling of love for and contentment with all that exists, a feeling that exists simultaneously in my mind and heart. Although rarely focussed as a specific desire for anyone or anything, it is, nevertheless, experienced as an intense, inner energy or inspired “pressure” that yearns, even “cries,” for a creative and passionate expression. I sense an open embracing of everyone and everything just as they are, one that literally melts into a deep peace when I find myself able to simply “let it all be.” Peace of mind is, here, a heart-felt peace.

(3) Existing as or with the stillness and love is a greatly diminished, and on occasion absent, sense of “I.” The more common sense of “I am thinking or feeling this or that” becomes a fully present “I am” or simply, when in its more intense form, an “amness” (pure Being in the Heideggerian sense). The sense of a “perceiver” and “that which is perceived” has dissolved; there is no longer any “one” to perceive as we normally experience this identity and relationship.

(4) My normal sense of space seems transformed. There is no sense of “being there,” of being extended in and occupying space, but, as previously mentioned, simply Being. Also, there is a loss of awareness of my body-sense as a thing or spatial container. This ranges from an experience of distance from sensory input to a radical forgetfulness of the body’s very existence. It is as if my everyday, limited sense of body-space touches a sense of the infinite.

(5) Time is also quite different from my everyday sense of linear passing time. Seemingly implicit in the sense of stillness described here is also a sense of time “hovering” or standing still, of being forgotten (i.e., no longer a quality of mind) much as the body is forgotten. No thoughts dwelling on the past, no thoughts moving into the future: hours of linear time are experienced as a moment, as the eternal Now.

(6) Bursts or flashes of insight are often part of this awareness, insights that have no perceived or known antecedents but that emerge as complete or full-blown. These insights or intuitive “seeings” have some of the qualities of more common experience (e.g., although “lighter,” there is a felt weightiness or subtle “content” to them), but they initially have an “other-than-me” quality about them, as if the thoughts and words that emerge from the insights are being given to or, even, through me, a sense that my mind and its contents are vehicles for the manifestation as experience of something greater and/or more powerful than myself. In its most intense or purest form, the “other-than-me” quality dissolves as the “me” expands to a broader, more inclusive sense of self that holds within it all that was previously felt as “other-than-me.”

Since describing these six qualities, we have come to recognize two additional dimensions or essential characteristics of transcendent awareness: (a) a surrendering of one's sense of control with regard to the outcome of one's actions, and the dissolution of fear that seems to always follow this “letting go,” and (b) the transformative power of transcendent experience, realized as a change in one’s preferences, inclinations, emotional and behavioural habits, and understanding of life itself. This self-transformation is often personally painful because this power both challenges and changes the comfortable patterns of thoughts and feelings we have so carefully constructed through time, a transformation of who we believe we are.

These qualities or dimensions call us to a recontextualization of intentionality by acknowledging a field of awareness that appears to be inclusive of the intentional nature of mind but, at the same time, not of it. In this regard, Valle (1989) offers the notion of a “transintentionality” to philosophically address this consciousness without an object (Merrell-Wolff, 1973). As the Phenomenological psychologist and researcher Steen Halling (1988) has rightfully pointed out, consciousness without an object is also consciousness without a subject. Transintentional awareness, therefore, represents a way of being in which the separateness of a perceiver and that which is perceived has dissolved, a reality not of (or in some way beyond) time, space, and causation as we normally know them.

Here is a bridge between existential/humanistic and transpersonal/transcendent approaches in psychology. It is here that we are called to recognize the radical distinction between the reflective/prereflective realm and pure consciousness, between rational/emotive processes and transcendent/spiritual awareness, between intentional knowing of the finite and being the infinite. It is, therefore, mind, not consciousness per se, that is characterized by intentionality, and it is our recognition of the transintentional nature of Being that calls us to investigate those experiences that clearly reflect or present these transpersonal dimensions in the explicit context of Phenomenological research methods.

This presentation is based on the following thoughts regarding the meaning of transpersonal in this context. On the basis of the themes that Huxley (1970) claimed compose the perennial philosophy, Valle (1989) presented five premises that characterize any philosophy or psychology as transpersonal: (1) that a transcendent, transconceptual reality or Unity binds together (i.e., is immanent in) all apparently separate phenomena, whether these phenomena be physical, cognitive, emotional, intuitive, or spiritual; (2) that the individual or ego-self is not the ground of human awareness but, rather, only one relative reflection-manifestation of a greater transpersonal (as “beyond the personal”) Self or One (i.e., pure consciousness without subject or object); (3) that each individual can directly experience this transpersonal reality that is related to the spiritual dimensions of human life; (4) that this experience represents a qualitative shift in one’s mode of experiencing and involves the expansion of one’s self-identity beyond ordinary conceptual thinking and ego-self awareness (i.e., mind is not consciousness, though the sense one has of oneself may expand into the dimension of consciousness); and (5) that this experience is self-validating.

It has been written and taught for millennia in the spiritual circles of many cultures that sacred experience presents itself directly in one’s awareness (i.e., without any mediating sensory or reflective processes) and, as such, is self-validating. The direct personal experience of God is, therefore, the “end” of all spiritual philosophy and practice.

Transcendent/sacred/divine experience has been recognized and often discussed, both directly and metaphorically, as either intense passion or the absolute stillness of mind (these thoughts and those that follow regarding passion and peace of mind are from Valle, 1995). In day-to-day experience, a harmonious union of passion and stillness or peace of mind is rarely experienced. Passion and stillness are regarded as somehow antagonistic to each other. For example, when one is passionately involved with some project or person, the mind is quite active and intensely involved. On the other hand, the calm, serene, and profoundly peaceful quality of mind that often accompanies deep meditation is fully disengaged from and, thereby, disinterested in things and events of the world.

What presents itself as quite paradoxical on one level offers a way to approach the direct personal experience of the transcendent, that is, to first recognize and then deepen any experience in which passion and peace of mind are simultaneously fully present in one’s awareness. If divine presence manifests in human awareness in these two ways, and sacred experience is what one truly seeks, it becomes important to approach and understand those experiences wherever these two dimensions exist in an integrated and harmonious way. In this way, one comes to understand the underlying essence that these dimensions share rather than simply being satisfied with the seeming opposites they first appear to be.

The relationship between passion and peacefulness is addressed in many of the world’s scriptures and other spiritual writings. These two threads, for example, run through the Psalms (May and Metzger, 1977) of the Judeo-Christian tradition. At one point, we read, “Be still and know that I am God” (Psalm 46) and “For God alone my soul waits in silence” (Psalm 62), and at another point, “For zeal for thy house has consumed me” (Psalm 69) and “My soul is consumed with longing for thy ordinances” (Psalm 119). Stillness, silence, zeal, and longing all seem to play an essential part in this process.

In his teachings on attaining the direct experience of God through the principles and practices of Yoga, Paramahansa Yogananda (1956) affirms that “I am calmly active. I am actively calm. I am a Prince of Peace sitting on the throne of poise, directing the kingdom of activity.” And, more recently, Treya Wilber (quoted in Wilber, 1991) offers an eloquent exposition of this integration: I thought about the Carmelites’ emphasis on passion and the Buddhists’ parallel emphasis on equanimity. It suddenly occurred to me that our normal understanding of what passion means is loaded with the idea of clinging, of wanting something or someone, of fearing losing them, of possessiveness. But what if you had passion without all that stuff, passion without attachment, passion clean and pure? What would that be like, what would that mean? I thought of those moments in meditation when I’ve felt my heart open, a painfully wonderful sensation, a passionate feeling but without clinging to any content or person or thing. And the two words suddenly coupled in my mind and made a whole. Passionate equanimity - to be fully passionate about all aspects of life, about one’s relationship with spirit, to care to the depth of one’s being but with no trace of clinging or holding, that's what the phrase has come to mean to me. It feels full, rounded, complete, and challenging.

It is here that existential-phenomenological psychology with its attendant descriptive research methodologies comes into play. For if, indeed, we each identify with the contents of our reflective awareness and speak to and/or share with one another from this perspective to better understand the depths and richness of our meaningful experience, then Phenomenological philosophy and method offer us the perfect, perhaps only, mirror to approach transcendent experience. Experiences that present themselves as passionate, as peaceful, or as an integrated awareness of these two become the focus for exploring in a direct, empirical, and human scientific way the nature of transcendent experience as we live it. Here are the “flesh” and promise of a transpersonal-phenomenological psychology.

Although the reader is referred to the particular reports for a list of the specific constituents presented in each study, a reflective overview of these results reveals an emerging pattern of common elements or themes. We offer these eleven themes as a beginning matrix or tapestry of transpersonal dimensions interwoven throughout the descriptions of these experiences, not as constituents per se resulting from a more formal protocol analysis. As we looked over the results of these studies, these themes naturally emerged, falling, even, into a natural order. Some are clearly distinct, whereas others appear as more implicitly interconnected. These themes are: (1) an instrument, vehicle, or container for the experience; (2) intense emotional or passionate states, pleasant or painful; (3) being in the present moment, often with an acute awareness of one's authentic nature; (4) transcending space and time; (5) expansion of boundaries with a sense of connectedness or oneness, often with the absence of fear; (6) a stillness or peace, often accompanied by a sense of surrender; (7) a sense of knowing, often as sudden insights and with a heightened sense of spiritual understanding; (8) unconditional love; (9) feeling grateful, blessed, or graced; (10) ineffability; and (11) self-transformation.

It seems that the transpersonal/transcendent aspects of any given experience manifest in, come through, or make themselves known via an identifiable form or vehicle. This theme was evident in all seven research studies, the specific forms being silence, being with the dying, being with suffering, near-death experience, being with one’s spiritual teacher, and synchronicity. Transpersonal experiences can come through many forms including meditation, rituals, dreams, sexual experience, celibacy, initiations, music, breath awareness, physical and emotional pain, psychedelic drugs, and the experience of beauty (Maslow’s, 1968, description and discussion of peak experiences are relevant here as well as to a number of the themes discussed below). We again use a musical analogy: Just as the violin, piano, flute, or voice can be an instrument for the manifestation/expression of a melody, so, too, there are many ways in and through which consciousness reveals its nature.

The existential-phenomenologists may interpret this as further evidence for the intentional nature of consciousness, that this is simply the way in which consciousness presents itself to the perceiver. There is also the view that consciousness is a constant stream of “energy” existing beyond the duality of subject-object (i.e., consciousness without an object) that flows through all creation, being both all-pervasive and unitive by its nature. Aware of the paradox implied in this perspective, Capra (1983) states:

[The mystical view] regards consciousness as the primary reality and ground of all being. In its purest form, consciousness . . . is non-material, formless, and void of all content; it is often described as “pure consciousness,” “ultimate reality,” a “suchness,” and the like. This manifestation of pure consciousness is associated with the Divine. . . . The mystical view of consciousness is based on the experience of reality in non-ordinary modes of awareness, which are traditionally achieved through meditation, but may occur spontaneously in the process of artistic creation and in various other contexts.

Any process of drawing a conclusion from a set of premises may be called a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise it is pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, this is because we are often concerned to draw conclusions that ‘go beyond’ our premises in a way that the conclusions of logically valid arguments do not; the process of using evidence extends to a wider range of conclusions than the evidence strictly entails. Some pessimists press in the opposite direction, denying that we can assess the results of such abduction in terms of probability. An inference, in the logician’s sense, is a cognitive process in which a conclusion is drawn from a set of premises; logicians usually confine themselves to cases in which the conclusion is supposed to follow from the premises, an inference being logically valid when the conclusion is deducible from the premises in a syntactically defined sense, without any reference to the intended interpretation of the theory. Furthermore, as we reason we use an indefinite store of traditional knowledge, a common-sense set of presuppositions about what is likely or not; one task of an automated reasoning project is to mimic this casual use of knowledge of the world in computer programs.
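The closing point, that automated reasoning projects mimic the drawing of conclusions from premises in computer programs, can be made concrete with a minimal sketch. The following Python is a toy illustration, not any particular system, and all names are hypothetical: it performs forward chaining over propositional Horn rules, deriving exactly those conclusions that the premises entail.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion) until no
    new fact can be derived; return the set of all derivable facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires only when every one of its premises is established.
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)  # the premises entail the conclusion
                changed = True
    return derived

# Usage: from the premise "rain" and the rule "rain -> wet", the program
# validly concludes "wet" (modus ponens); "ice" is not derivable because
# the premise "cold" is missing.
rules = [({"rain"}, "wet"), ({"wet", "cold"}, "ice")]
print(forward_chain({"rain"}, rules))  # derives "wet" but not "ice"
```

The contrast with human reasoning is visible in the sketch: the program draws only conclusions that follow from its premises, whereas, as noted above, everyday reasoning routinely goes beyond them.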

Without the fundamental discipline of linguistic analysis, philosophy cuts itself adrift from ordinary meaning and enters an Alice-in-Wonderland fantasy of wishful wisdom. Yes, of course the nature of the human mind is a great puzzle. But if we approach this 'great puzzle' in a bare-handed, undisciplined way, we put ourselves in the position of looking for something without the faintest idea of what it is. That is a dumb quest. Probably the dumb quest for an insight into the nature of one's own mind is the worst example of such a defective search procedure.

Many philosophers have been puzzled about the nature of consciousness. As a result, a huge literature allegedly about this subject, but really constituting a dense fog blanket of near-meaningless rhetoric, has been devised. One finds it difficult to explain to an ordinary friend what the point of such lengthy, scholastic, consciously obscure artifice is. What does it achieve? Does it clarify the individual's mind? Does it clarify the great intellectual issues of the day? Certainly not! It may serve to de-clarify the great intellectual issues of the day, because it helps to give philosophy, the art, a poor reputation: as being more interested in appearances than in realities, as being quite content to bandy about badly focussed but meretricious sentences. We can't hope to get anywhere in philosophy unless we first concentrate our attention on focussing very firmly onto meanings.

Whatever the appeal, there is no hope of 'getting there' by the facile short-cut of introspection. To think one might get there by introspection is like thinking that the way to solve an equation is to stare at it harder and harder - for as long as it takes - until the unknown value of ‘x’ finally ('as it must, of course') reveals itself! Introspective philosophy, it is widely agreed, is a reaction against positivism and physicalism; but, if so, the reaction has gone much too far. The main complaint against the positivists and the physicalists is surely that, in their blind attachment to scientific modes, they show a dismal insensitivity to human culture, human values, human relationships. They do; but it is what they lack that defines the complaint, not what they know. There can be no excuse for rejecting scientific modes of clarification out of hand in any department of human activity: least of all in one - philosophy - which must trade in clarification if it trades in anything at all. Yes, we need clarification in other areas too. But don't let's turn our backs on what we have.

Willard Van Orman Quine, the most influential American philosopher of the latter half of the 20th century, spent the wartime period in naval intelligence, punctuating the rest of his career with extensive foreign lecturing and travel. Quine’s early work was in mathematical logic, and issued in A System of Logistic (1934), Mathematical Logic (1940) and Methods of Logic (1950), but it was with the collection of papers From a Logical Point of View (1953) that his philosophical importance became widely recognized. Quine’s work dominated concern with the problems of convention, meaning and synonymy, a dominance cemented by Word and Object (1960), in which the indeterminacy of radical translation first takes centre-stage. In this and many subsequent writings Quine takes a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. These ‘intentional idioms’ resist smooth incorporation into the scientific world view, and Quine responds with scepticism toward them, not quite endorsing ‘eliminativism’, but regarding them as second-rate idioms, unsuitable for describing strict and literal facts. For similar reasons he consistently expressed suspicion of the logical and philosophical propriety of appeal to logical possibilities and possible worlds. The languages that are properly behaved and suitable for literal and true description of the world are those of mathematics and science. The entities to which his theories refer must be taken with full seriousness in our ontology; although an empiricist, Quine thus supposes that the abstract objects of set theory are required by science, and therefore exist. In the theory of knowledge Quine is associated with a ‘holistic’ view of verification, conceiving of a body of knowledge in terms of a web touching experience at the periphery, but with each point connected by a network of relations to other points.

Quine is also known for the view that epistemology should be naturalized, or conducted in a scientific spirit, with the object of investigation being the relationship, in human beings, between the input of experience and the output of belief. Although Quine’s approaches to the major problems of philosophy have been attacked as betraying undue ‘scientism’ and sometimes ‘behaviourism’, the clarity of his vision and the scope of his writings made him the major focus of Anglo-American work of the past forty years in logic, semantics and epistemology. As well as the works cited, his writings include The Ways of Paradox and Other Essays (1966), Ontological Relativity and Other Essays (1969), Philosophy of Logic (1970), The Roots of Reference (1974) and The Time of My Life: An Autobiography (1985).

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth, and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have: the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a monster in your garden?

One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than believing that you have a monster in your garden. Belief, in turn, has an influence on action: you will act differently if you believe that you are reading a page than if you believe something about a monster. Perception and action thus bear on the content of a belief, but the role that gives the belief the content it has is the role it plays within a network of relations to other beliefs, some causal, others the relations of inference and implication. For example, I infer different things from believing that I am reading a page in a book than I infer from other beliefs.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is those systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the representational content it does because of the way in which it coheres within a system of beliefs (Rosenberg 1988). We might distinguish weak coherence theories of the content of belief from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief. Strong coherence theories affirm that coherence is the sole determinant of the content of belief.

When we turn from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? Again, there is a distinction between weak and strong coherence theories. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception and memory; strong theories hold that justification is solely a matter of how a belief coheres with a system of beliefs. There is, nonetheless, another distinction that cuts across the distinction between weak and strong coherence theories: the distinction between positive and negative coherence theories (Pollock 1986). A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.
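The asymmetry between the positive and negative theories can be phrased as two one-directional rules. A toy sketch, in which a background system is modeled simply as a set of checks a candidate belief must pass (all names and the modeling choice are my own illustration):

```python
def coheres(belief, background_checks):
    """Toy coherence: the belief passes every check the background
    system imposes (a crude stand-in for real coherence relations)."""
    return all(check(belief) for check in background_checks)

def justified_positive(belief, background_checks):
    # Positive theory: coherence alone suffices to *produce* justification.
    return coheres(belief, background_checks)

def justified_negative(belief, background_checks, otherwise_justified):
    # Negative theory: incoherence can only *nullify* justification the
    # belief would otherwise have had; coherence cannot create it.
    return otherwise_justified and coheres(belief, background_checks)

# One background check: nothing monster-related is acceptable.
checks = [lambda b: "monster" not in b]
print(justified_positive("I am reading a page", checks))              # True
print(justified_negative("a monster is in my garden", checks, True))  # False
```

Note how the negative theory needs the extra parameter: it never generates justification, it only vetoes it.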

A strong coherence theory of justification is a formidable combination of a positive and a negative theory, telling us that a belief is justified if and only if it coheres with a background system of beliefs. Coherence theories of justification and knowledge have most often been rejected as unable to account for the role of perception in justification (Audi 1988 and Pollock 1986), and so a perceptual example will serve as a kind of crucial test. Suppose that a person, call her Julia, works with a scientific instrument that gauges the temperature of a liquid in a container. The gauge is marked in degrees; she looks at the gauge and sees that the reading is 125 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is 125 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that, though her belief that she sees the numeral '125' is immediately justified as direct sensory evidence without appeal to a background system, the belief that the liquid in the container is 125 degrees results from coherence with a background system of beliefs affirming that the gauge she reads measures the temperature of the liquid in the container. The aim of this weak coherence view, which combines coherence with direct perceptual evidence as the foundation of justification, is to account for the justification of all our beliefs.

A strong coherence theory would go beyond the claim of the weak coherence theory to affirm that the justification of all beliefs, including the belief that one sees a gauge reading 125, or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in a number of different ways. One line is to appeal to the coherence theory of content. If the content of a perceptual belief results from the relations of the belief to other beliefs in a network of beliefs, then one may plausibly argue that the justification of the perceptual belief likewise results from the relations of the belief to other beliefs in that network. Consider the very cautious belief 'I see a shape'. How may the justification of that perceptual belief result from its coherence with a background system of beliefs? What might the background system tell us that would justify that belief? Our background system contains a simple and primary theory about our relation to the world: that, by and large, we perceive things as they are.
To come to the specific point at issue, we believe that we can tell a shape when we see one, that we are trustworthy about such simple matters as whether we see a shape before us or not, given what past experience has taught us about the conditions of application, and that we are not now deceived. Moreover, when Julia looks at the gauge, her background system tells her that the circumstances are not ones in which she might be deceived about whether she sees the shape: the light is good, the numeral shapes are large and readily discernible, and so forth. These are beliefs Julia has that supply reasons for justification. Her sensory access to the data involved, together with those background beliefs, justifies her subsequent belief, and so she is justified.

The philosophical problems include: discovering whether belief differs from other varieties of assent, such as acceptance; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals are properly said to have beliefs.

Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, they must be interpreted as unconscious inferences, as information processing based on the background system. One might object to such an account on the grounds that not all justification arises from inference, and, more generally, that coherence may at best be understood in terms of competition among claims adjudicated by the background system (BonJour 1985 and Lehrer 1990). The belief that one sees a shape competes with the claim that one does not, with the claim that one is deceived, and with other sceptical objections. The background system of beliefs informs one that one is trustworthy and enables one to meet the objections. A belief coheres with a background system just in case the system enables one to meet the sceptical objections, and in that way the system justifies one in the belief. This is a standard strong coherence theory of justification (Lehrer 1990).

Illustrating the relationship between positive and negative coherence theories in terms of the standard coherence theory is easy. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in the belief. So, to turn to Julia, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 125, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on and that, after years of working with the gauge, Julia, who has always placed her trust in it, believes what the gauge tells her: that the liquid in the container is at 125 degrees. Her belief that the liquid is at 125 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the liquid in the container. By contrast, when the red light is not illuminated and Julia's background system tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because her belief coheres with a background system that continues to certify the gauge as trustworthy.

The foregoing sketch and illustration of coherence theories of justification have a common feature, namely, that they are what are called internalist theories of justification. Externalist theories, by contrast, are marked by the absence of any requirement that the person for whom the belief is justified have cognitive access to the relation of reliability in question. Lacking such access, such a person will usually have no reason for thinking the belief is true or likely to be true, but will, on such an account, none the less be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

Coherence theories, then, affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If justification is solely a matter of internal relations between beliefs, we are left with the possibility that those internal relations fail to correspond with any external reality. How, one might object, can so subjective a notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?

The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from consideration of coherence theories of justification. What is required may be put by saying that the justification must be undefeated by errors in the background system of beliefs. Justification is undefeated by errors just in case any correction of such errors in the background system would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of coherence theory, is true belief that coheres with the background system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error (Lehrer 1990). The connection between internal subjective conditions and external objective realities results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Julia, she believes that her internal subjective conditions of sensory experience and perceptual belief are connected in a trustworthy manner with the external objective reality of the temperature of the liquid in the container. This background belief is essential to the justification of her belief that the temperature of the liquid in the container is 125 degrees, and the correctness of that background belief is essential to the justification remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world that justifies certain of our beliefs that cohere with that system. For such justification to convert to knowledge, that theory must be sufficiently free from error that the coherence is sustained in corrected versions of our background system of beliefs.
The correctness of the simple background theory provides the connection between the internal condition and external reality.
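The undefeated-by-error condition is itself a procedure: correct the errors in the background system and test whether coherence survives. A toy sketch, assuming beliefs are plain strings and coherence is mere absence of direct negation (a deliberately crude stand-in, invented for illustration):

```python
def coheres(belief, system):
    """Crude coherence test: the system does not contain the direct
    negation of the belief (real coherence is far richer than this)."""
    return ("not " + belief) not in system

def undefeated(belief, background, corrections):
    """Justification is undefeated by error iff the belief coheres with
    the background system AND with the system obtained by replacing
    each erroneous background belief with its correction."""
    corrected = {corrections.get(b, b) for b in background}
    return coheres(belief, background) and coheres(belief, corrected)

belief = "the liquid is at 125 degrees"
background = {"the gauge is trustworthy", "the lab is on the 2nd floor"}

# Correcting a harmless background error leaves the justification intact.
print(undefeated(belief, background,
                 {"the lab is on the 2nd floor": "the lab is on the 3rd floor"}))  # True

# But if correction introduces the negation of the belief (say, the
# gauge was in fact misreading), the justification is defeated.
print(undefeated(belief, background,
                 {"the gauge is trustworthy": "not the liquid is at 125 degrees"}))  # False
```

The point of the sketch is only the shape of the test: knowledge requires coherence to survive every correction of the background system, not just coherence with the system as it happens to stand.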

The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. The sensory experiences she has are mute until they are represented in the form of some perceptual belief. Beliefs are the engines that pull the train of justification. Nevertheless, what assurance do we have that our justification is based on true beliefs? What assurance do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifacts of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification (Rescher 1973 and Rosenberg 1980). That would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is justifiable for some person. For such a person there would be no gap between justification and truth, or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems or some convergence toward a consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection. There is a consensus that we can all be wrong about at least some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system itself rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.

Coherence theories of the content of our beliefs and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but may believe that our cognitive capacities suffice to close the gap and yield knowledge. That view is, at any rate, a coherent one.

What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.

For example, Armstrong (1973) proposed that a belief of the form ‘This (perceived) object is F’ is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F’; that is, the fact that the object is ‘F’ contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘x’ and perceived object ‘y’, if ‘x’ has those properties and believes that ‘y’ is ‘F’, then ‘y’ is ‘F’. (Dretske (1981) offers a rather similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is ‘F’.)



This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise: to think, say, that things look tinted to you when they are not. If you fail to heed these reasons you have for thinking that your colour perception is awry, and you believe of a thing that looks tinted to you that it is tinted, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being tinted in such a way as to be a completely reliable sign, or to carry the information, that the thing is tinted.

One could fend off this sort of counterexample by simply adding to the causal condition the requirement that the belief be justified, but this enriched condition would still be insufficient. Suppose, for example, that you have taken a drug that in nearly all people, but not in you, as it happens, causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug but then says, ‘No, hold off a minute, the pill you took was just a placebo’. Suppose further that this last thing the experimenter tells you is false. Her telling you it gives you justification for believing of a thing that looks tinted to you that it is tinted; but since the statement you relied on was false, your true belief is not knowledge, even though it satisfies the causal condition.

Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both ‘globally’ and ‘locally’ reliable. Global reliability concerns the process’s propensity to cause true beliefs, which must be sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
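Goldman's two reliability conditions can be caricatured computationally: global reliability as a high truth ratio over a process's uses, local reliability as the absence of a relevant counterfactual situation in which the same process yields a similar but false belief. A toy model (the threshold, names, and data representation are my own, not Goldman's):

```python
def globally_reliable(past_outcomes, threshold=0.9):
    """Global reliability: the process's propensity to produce true
    beliefs, estimated as its truth ratio over past uses.
    (The 0.9 threshold is an illustrative choice.)"""
    return sum(past_outcomes) / len(past_outcomes) >= threshold

def locally_reliable(belief_true, counterfactual_false_beliefs):
    """Local reliability: in no relevant counterfactual situation would
    the process have produced a similar but false belief."""
    return belief_true and not any(counterfactual_false_beliefs)

def goldman_knows(belief_true, past_outcomes, counterfactual_false_beliefs):
    # Knowledge = true belief + globally reliable process
    #             + locally reliable process (no relevant alternative).
    return (belief_true
            and globally_reliable(past_outcomes)
            and locally_reliable(belief_true, counterfactual_false_beliefs))

# Thermometer case: a good track record (globally reliable), but one
# relevant alternative (a broken thermometer chosen instead) would have
# produced a false belief, so the parent does not know.
outcomes = [True] * 19 + [False]
print(goldman_knows(True, outcomes, [True]))   # False
print(goldman_knows(True, outcomes, [False]))  # True
```

The sketch makes the division of labour visible: global reliability does the work of justification, while local reliability rules out the luck that the thermometer and twins examples below exploit.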

Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge but not for justification is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. This relevant-alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat’ and the concept ‘empty’ (Dretske 1981). Both appear to be absolute concepts: a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat’, there is a standard for what counts as a bump, and in the case of ‘empty’, there is a standard for what counts as a thing. To be flat is to be free of any relevant bumps, and to be empty is to be devoid of all relevant things.

What makes an alternative situation relevant? Goldman does not try to formulate criteria of relevance, but offers examples. Suppose that a parent takes a child’s temperature with a thermometer selected at random from several lying in the medicine cabinet. Only the particular thermometer chosen was in good working order; it correctly shows the child’s temperature to be normal, but if the temperature had been abnormal, any of the other thermometers would have erroneously shown it to be normal. The parent’s actual true belief is caused by a globally reliable process but, because it was ‘just luck’ that the parent happened to select a good thermometer, “we would not say that the parent knows that the child’s temperature is normal.” Goldman gives yet another example. Suppose:

Wally spots Ruth across the street and correctly believes that it is Ruth. If it had been Ruth’s twin sister, Joan, he would have mistaken her for Ruth. Does Wally know that it is Ruth? As long as there is a serious possibility that the person across the street might have been Joan rather than Ruth . . . we would deny that Wally knows.

Goldman suggests that the reason for denying knowledge in the thermometer example is that it was ‘just luck’ that the parent did not pick a non-working thermometer, and that in the twins example the reason is that there was ‘a serious possibility’ that Wally was mistaking Joan for Ruth. This suggests the following criterion of relevance: an alternative situation, in which the same belief is produced in the same way but is false, is relevant just in case, at some point before the actual belief was caused, there was a serious chance that that alternative situation, rather than the actual one, would come about.

One of the most durable and intractable issues in the history of philosophy has been the problem of universals. Closely related to this, and a major subject of debate in 20th century philosophy, has been the problem of the nature of meaning.

The problem of universals goes back to Plato and Aristotle. The matter at issue is that, on the one hand, the objects of experience are individual, particular, and concrete, while, on the other hand, the objects of thought, or most of the kinds of things that we know even about individuals, are general and abstract, i.e. universals. Thus, a house may be red, but there are many other red things, so redness is a general property, a universal. Redness can also be conceived in the abstract, separated from any particular thing, but it cannot exist in experience except as a property of some particular thing and it cannot even be imagined but with some other minimal properties, e.g. extension. Abstraction is especially conspicuous in mathematics, where numbers, geometrical shapes, and equations are studied in complete separation from experience. The question that may be asked, then, is how it is that general properties or abstract objects are related to the world, how they exist in or in relation to individual objects, and how it is that we know them when experience only seems to reveal individual things.

Plato's answer to this was that universals exist in a separate reality as special objects, distinct in kind, from the things of experience. This is Plato's famous theory of "Forms." Plato himself used the terms idéa and eîdos in Greek, which could mean the "look" of a thing, its form, or the kind or sort of a thing [Liddell and Scott, An Intermediate Greek-English Lexicon, Oxford, 1889, 1964, pp. 226 & 375]. Since Aristotle used the term eîdos to mean something else and consistently used idéa to refer to Plato's theory, in the history of philosophy we usually see references to Plato's "theory of Ideas."

Although Aristotle said that Socrates had never separated the Forms from the objects of experience, which is probably true, some of Socrates's language suggests the direction of Plato's theory. Thus, in the Euthyphro, Socrates, in asking for a definition of piety, says that he does not want to know about individual pious things, but about the "idea itself," so that he may "look upon it" and, using it "as a model [parádeigma]," judge "that any action of yours or another's that is of that kind is pious, and if it is not that it is not" [G.M.A. Grube trans., Hackett, 1986]. Plato concludes that what we "look upon" as a model, and is not an object of experience, is some other kind of real object, which has an existence elsewhere. That "elsewhere" is the "World of Forms," to which we have only had access, as the Myth of the Chariot in the Phaedrus says, before birth, and which we are now only remembering. Later, the Neoplatonists decided that we have access now, immediately and intuitively, to the Forms; but while this produces a rather different kind of theory, both epistemologically and metaphysically, it still posits universals as objects at a higher level of reality than the objects of experience (which partake of matter and evil).

Plato himself realized, as recounted in the Parmenides, that there were some problems and obscurities with his theory. Some of these could be dismissed as misunderstandings; others were more serious. Most important, however, was the nature of the connection between the objects of experience and the Forms. Individual objects "participate" in the Forms and derive their character, even, Plato says in the Republic, their existence, from the Forms, but it is never clear how this is supposed to work if the World of Forms is entirely separate from the world of experience that we have here. In the Timaeus, Plato has a Creator God, the "Demiurge," fashioning the world in the image of the Forms, but this cannot explain the on-going coming-into-being of subsequent objects that will "participate" themselves. Plato's own metaphorical language in describing the relationship, in which empirical objects are "shadows" of the Forms, probably suggested the Neoplatonic solution that such objects are attenuated emanations of Being, like dim rays of sunlight at some distance from the source.

Whether we take Plato's theory or the Neoplatonic version, there is no doubt that Plato's kind of theory about universals is one of Realism: Universals have real existence, just as much so, if not more so, than the individual objects of experience.

Aristotle also had a Realistic theory of universals, but he tried to avoid the problems with Plato's theory by not separating the universals, as objects, from the objects of experience. He "immanentized" the Forms. This meant, of course, that there still were Forms; it was just a matter of where they existed. So Aristotle even used one of Plato's terms, eîdos, to mean the universal object within a particular object. This word is more familiar to us in its Latin translation: species. In modern discussion, however, it is usually just called the "form" of the object. The Aristotelian "form" of an object, however, is not just what an object "looks" like. An individual object as an individual object is particular, not universal. The "form" of the object will be the complex of all its abstract features and properties. If the object looks red or looks round or looks ugly, then those features, as abstractions, belong to the "form." The individuality of the object cannot be due to any of those abstractions, which are universals, and so must be due to something else. To Aristotle that was the "matter" of the object. "Matter" confers individuality, "form" universality. Since everything that we can identify about an object, the kind of thing it is, what it is doing, where it is, etc., involves abstract properties, the "form" represents the actuality of an object. By contrast, the "matter" represents the potential or possibility of an object to have other properties.

The uses of "form" and "matter" are now rather different from what is familiar to us. Aristotelian "matter" is not something that we can see, so it is not what we usually mean by matter today. Similarly, Aristotelian "form" is not some superficial appearance of a fundamentally material object: It is the true actuality and existence of the object. This becomes clear when we note Aristotle's term for "actuality," which was enérgeia, what has become the modern word "energy." Similarly, the term for "potential" is familiar, dýnamis, which can also mean "power" and "strength."

The continuing dualism of Aristotle's theory emerges when we ask how the "forms" of things are known. An individual object Aristotle called a "primary substance" (where the Greek word for substance, ousía, might better be translated "essence" or "being"). The abstract "form" of an object, the universal in it, Aristotle called "secondary substance." So if what we see are individual things, the primary substances, how do we get to the universals? Aristotle postulated a certain mental function, "abstraction," by which the universal is comprehended or thought in the particular. This is the equivalent of understanding what is perceived, which means that we get to the meaning of the perception. The "form" of the thing becomes its meaning, its concept, in the mind. For Plato, in effect, the meaning of the world was only outside of it.

While the Aristotelian "form" of an object is its substance (the "substantial form") and its essence, not all abstract properties belong to the essence. The "essence" is what makes the thing what it is. Properties that are not essential to the thing are accidental, e.g. the colour or the material of a chair. Thus the contrast between "substance and accident" or "essence and accident." Accidents, however, are also universals. A contrast may also be drawn between substance and "attribute." In this distinction, all properties, whether essential or accidental, belong to the substance, the thing that "stands under" (sub-stantia in Latin, hypo-keímenon, "lie under," in Greek) all the properties and, presumably, holds them together. Since the properties of the essence are thought together through the concepts produced by abstraction, the "substance" represents the principle of unity that connects them.

Concepts, or predicates, are always universals, which means that no individual can be defined, as an individual, by concepts. "Socrates," as the name of an individual, although bringing to mind many properties, is not a property; and no matter how many properties we specify, "snub-nosed," "ugly," "clever," "condemned," etc., they conceivably could apply to some other individual. From that we have a principle, still echoed by Kant, that "[primary] substance is that which is always subject, never predicate." On the other hand, a theory that eliminates the equivalent of Aristotelian "matter," like that of Leibniz, must require that individuals as such imply a unique, perhaps infinite, number of properties. Leibniz's principle of the "identity of indiscernibles" thus postulates that individuals which cannot be distinguished from each other, i.e. have all the same discernible properties, must be the same individual.
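Leibniz's principle can be put in almost mechanical terms. The following sketch (illustrative only; the property lists are invented for the example) represents individuals purely by their discernible properties, as a theory that eliminates Aristotelian "matter" must:

```python
# A minimal sketch of the "identity of indiscernibles": if an individual
# is nothing but its discernible properties, then two individuals with
# exactly the same properties cannot be told apart.
socrates = frozenset({"snub-nosed", "ugly", "clever", "condemned"})
impostor = frozenset({"snub-nosed", "ugly", "clever", "condemned"})

# With no "matter" left over to individuate them, the two collapse into one:
assert socrates == impostor
assert len({socrates, impostor}) == 1
```

On such a representation nothing remains to play the individuating role of Aristotelian "matter," which is exactly why Leibniz must let a unique, perhaps infinite, stock of properties do the individuating instead.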

One result of Aristotle's theory was a powerful explanation for natural growth. The "form" of a thing is not just what it looks like, it is the "final cause," the purpose of the thing, the "entelechy," the "end within," which is one of the causes of natural growth and change. Before the modern discovery of DNA, this was pretty much the only theory there was to account for the growth of living things from seeds or embryos into full-grown forms. Nevertheless, it introduces some difficulties into Aristotle's theory: If the "form" is accessible to understanding by abstraction, then this cannot be the same "form" as the one that contains the adult oak tree in the acorn, since no one unfamiliar with oak trees can look at an acorn and see the full form of the tree. But if the entelechy cannot be perceived and abstracted, then it exists in the object in a way different from the external "form." But Aristotle's metaphysics makes no provision, any more than quantum mechanics, for a "hidden" internal "form." Neoplatonism took care of that by making the internal "form" transcendent, as in Plato, but this is then a fatal compromise with Aristotle's prima facie empiricism and with his move to "immanentize" Plato's Forms.

This brings us to a fundamental conflict in Aristotle's theory, which highlights its drawbacks in relation to Plato's theory. If Aristotle is going to be an empiricist, thinking that knowledge comes from experience, this puts him on a slippery slope to positivism or, more precisely, "judicial positivism": that the actual is good (or, as Hegel puts it, "the Real is Rational"). The continuing virtue of Plato's theory of Forms is that the Forms can be profoundly different from the objects of experience. The Forms are perfect, and the world falls far short of them. This seems to account for important characteristics of reality, that true justice is rarely to be found, and that mathematicians describe the strangest things that have no obvious relation to experience. Aristotle's theory can accommodate this, but only by positing "forms" that are inaccessible to perception and abstraction, which would contradict any original notion in Aristotelian epistemology that knowledge comes from experience. Again, Neoplatonism takes care of this, but only at the cost of an intuitionism that is non-empirical, indeed, mystical, in the extreme, where we certainly do have access to "forms," or the Forms, apart from experience. But if Neoplatonism were correct, then it would be possible for someone to look at an acorn and, unfamiliar with the species, see what the full-grown oak would look like. This does not seem to happen on any credible testimony.

One significant consequence of Aristotle's point of view was, indeed, a belittlement of mathematics. Without mathematical Realism, we do not have the modern notion that real science is mathematical and that mathematics reveals the fundamental characteristics of nature. Mathematics cannot be thought of as "abstracted" from experience in any ordinary way. If it is not, then mathematics is just internally constructed, out of contact with reality. This seems to be Aristotle's view, a rejection of Pythagorean and Platonic mathematical Realism. Mathematics is no more than a "device for calculation." Thus, although Aristotle is usually thought of as being more "scientific" than Plato, he rejects Plato's geometrical view of the elements for the sake of a completely Presocratic sort of theory of opposites. He is overall nowhere near as interested in mathematics as Plato. Aristotle's approach became accepted, all through the Middle Ages, and it wasn't until the revival of Pythagorean-Platonic ideas about mathematics, in people like Kepler and Galileo, that modern science got going.

The Neoplatonic combination of Plato and Aristotle dominated thought for centuries; but then, beginning in Islâm and moving into Western Europe, we have a revival of a stricter Aristotelianism, culminating in the massive Summas of St. Thomas Aquinas (1225-1274). It may not be a coincidence that this involved the rejection of the mystical elements in Neoplatonism, since Christianity was institutionally far more unfriendly to mysticism, with its promise of direct communication with God, than were Islâm or Judaism. What was rare or unheard of in Islâm or Judaism, mystics being condemned or even executed for heresy, was a fairly regular occurrence in Western Christianity. However, a stricter empiricism again creates the difficulty that the apparent "form" of an object cannot provide knowledge of an end (an entelechy) that is only implicit in the present object, and so hidden to present knowledge.

Curiously, the reaction to this was not immediately a new Platonism or Neoplatonism, but a more extreme empiricism: The Nominalists overcame the Aristotelian difficulty by rejecting Realism altogether. Universals were just "names," nomina, even just "puffs of air." The greatest exponent of this approach was the Englishman William of Ockham (1295-1349). To the Nominalists, the individuality of the objects of experience simply meant that only individuality exists in reality. The abolition of a real abstract structure to the world had a number of consequences for someone like Ockham. The omnipotence of God became absolute and unlimited, unrestricted by the mere abstractions of logic, so that God could even make contradictions real, which was inconceivable to Aristotelians or Platonists. Similarly, no things had natures (essences) that made them intrinsically either good or evil. Not even God was intrinsically good or evil: The Good would just be whatever God wills it to be, something else inconceivable to Aristotelians or Platonists -- but actually rather Islâmic in tone, since no human notion about the nature or essence of God can impose a limit on the Will of God.

Although the debate between the Realists and the Nominalists became the greatest controversy of Mediaeval philosophy, another classic expression of Nominalism is to be found in the British Empiricists, from John Locke (1632-1704) to George Berkeley (1685-1753) and David Hume (1711-1776). Locke started the approach by simply defining an "idea" as being an image. Since images are undoubtedly individual and concrete, this stacks the deck for Nominalism. Nevertheless, Locke wished to preserve something like a common sense meaning of "abstraction," which he thought of as taking some characteristic of a particular idea and using it in a general way: "the mind makes the particular ideas received from particular objects to become general." Thus, Locke cannot find any difference between the idea "horse" and the idea "Bucephalus" but "in leaving out something that is peculiar to each individual, and retaining so much of those particular complex ideas of several particular existences as they are found to agree in" [An Essay Concerning Human Understanding]. Locke even wants to preserve a distinction between "nominal essence," the nature of things that we know about, and "real essence," the real nature of things, which we cannot know about given the limitations of human knowledge [Book III, Chapter VI, §§7-18]. How this distinction could be maintained on any kind of empiricism is mysterious. Real essences and the compromise on abstract ideas were swept away by Berkeley and Hume, who quite consistently and forthrightly argued that there was no such thing as "abstract ideas." Hume said: "Let any man try to conceive a triangle in general, which is neither Isoceles nor Scalenum, nor has any particular length or proportion of sides; and he will soon perceive the absurdity of all the scholastic notions with regard to abstraction and general ideas" [An Enquiry Concerning Human Understanding].

Of course, it is quite easy to conceive a triangle in general, which is neither isosceles nor scalene, for Hume has just done so himself. Hume's argument only works if he really means imagine rather than conceive. Hume even said: "No priestly dogmas, invented on purpose to tame and subdue the rebellious reason of mankind, ever shocked common sense more than the doctrine of the infinite divisibility of extension, with its consequences; as they are pompously displayed by all geometricians and metaphysicians, with a kind of triumph and exultation." Since infinite divisibility is rather important in geometry, and one of the "consequences . . . pompously displayed" is calculus, "geometricians" (like Isaac Newton) would probably be offended to be lumped together with metaphysicians. Hume's only recourse is that there are "general terms" to which multiple concrete "ideas" are attached. This, however, fails the Socratic test for the "model" that would enable us to judge unfamiliar objects; and while the "family resemblances" of Ludwig Wittgenstein (1889-1951) can be appealed to by Nominalists for such judgments, the imprecision implied by such a test is wholly contradicted by the practice of mathematics, while that in which a "resemblance" would consist must be, indeed, some abstract feature or collection of such features. But Hume allows for no abstract features, much less the recognition of them.

How far this silliness can go is evident in recent analytic philosophy, which fancies itself in direct succession from Hume. The consequences of the project of reducing the world to objects and words are evident in the following statement by the logician Benson Mates [Elementary Logic, Oxford, 1972]: "Another matter deserving explanation is our decision to take sentences as the objects with which logic deals. To some ears it sounds odd to say that sentences are true or false, and throughout the history of the subject there have been proposals to talk instead about statements, propositions, thoughts, or judgments. As described by their advocates, however, these latter items appear on sober consideration to share a rather serious drawback, which, to put it in the most severe manner, is this: they do not exist. Even if they did, there are a number of considerations that would justify our operating with sentences anyway. A sentence, at least in its written form, is an object having a shape accessible to sensory perception, or, at worst, it is a set of such objects. Thus 'It is raining,' and 'Es regnet,' though they may indeed be synonymous, are nonetheless a pair of easily distinguishable sentences. And in general we find that as long as we are dealing with sentences many of the properties in which the logician is interested are ascertainable by simple inspection. Only reasonably good eyesight, as contrasted with metaphysical acuity, is required to decide whether a sentence is simple or complex, affirmative or negative, or whether one sentence contains another as a part."

Reasonably good eyesight, however, is not enough to tell that "It is raining" and "Es regnet" are synonymous. That circumstance is evidently not noticed by Mates. What is needed is not eyesight, but understanding, which is nothing so esoteric as "metaphysical acuity," but instead a very simple and very common kind of thought. The "advocates" of the existence of thoughts are pretty much everyone who uses ordinary language, which probably includes Mates himself. Given Mates's own example, it is very hard to deny that meaning is different from both words and objects. Mates, however, can indulge in a particularly Nominalist theory of meaning, which we see in his discussion of Set Theory, whereby each set is uniquely determined by its members; in other words, sets having the same members are identical.

However, the sets "the present [1999] King of France" and "the present [1999] King of England" both have the same members, namely none, which makes them identical with the Empty Set ("Nothing"). They are therefore in no way "uniquely determined" by their members, if we allow that their meaning, even if not their membership, is different. Thus, an "extensional" theory of meaning, which sees reference to objects as the content of meaning, must either ignore "non-existent objects" or must attribute a reality to non-existent objects greater than that allowed by common sense. Equally serious is the problem of how we would know what all the members of a non-empty set are, without omniscience, in order to be able to use the name of the set in its "uniquely determined" way. If all we know are certain members of the set, i.e. the dogs we actually know about from personal experience, then we are using the name of a subset, not the real set, of dogs.
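The extensional principle, and the difficulty with it, can be seen directly. A minimal sketch (the set names are illustrative):

```python
# The extensional principle: a set is "uniquely determined by its members,"
# so any two sets with no members are one and the same Empty Set.
kings_of_france_1999 = set()   # no present King of France
kings_of_england_1999 = set()  # no present King of England

assert kings_of_france_1999 == kings_of_england_1999 == set()

# Extensionally the two names are indistinguishable; the difference in
# meaning between them ("King of France" vs. "King of England") is simply lost.
```

The identity holds by definition of set equality, which is precisely the point: extension alone cannot register the difference in sense between the two descriptions.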

At the beginning of 20th century logic there was a much more Realistic theory of meaning and universals, that of Gottlob Frege (1848-1925). For Frege, "subject" terms referred to individuals, while "predicates," i.e. abstract properties, referred to "concepts." "Concepts," then, exist as objects. In the subject we have meaning as "sense," which is very different from reference. Thus, in his classic example, the "morning star" and the "evening star" have the same reference, namely the planet Venus, but they have different senses, namely "Venus as seen in the morning" and "Venus as seen in the evening." A crude extensionalism cannot account for this. On the other hand, Frege was no metaphysician; and we have no theory to account for the nature or existence of concepts as objects, let alone to what Frege said was the reference of sentences, namely the "True" and the "False." A philosopher looking for the metaphysics of "concepts" has little to go on beyond Aristotle and Aquinas. Frege's theory of senses, however, recently clarified by Jerrold Katz, does preclude Nominalist theories (and all naturalistic theories, like Wittgenstein's theory of meaning as "usage") that only want to stick to words and individual objects.

The possibility arises, then, that universals may exist, not in words, and also not in any kind of objects (individuals or Frege's concepts), but in the internal mechanisms of sense. This would be a "middle way" between Realism and Nominalism that has been called Conceptualism. This notion seems to go all the way back to Peter Abelard (1079-1142). The drawback of Conceptualism, however, would be that universals would not be knowledge, since the structures of meaning would correspond to nothing of the kind in the world: universals would have to be the "pragmatic" way that we conceive or organize individuals, avoiding the silliness of a Nominalism like Mates's, but there could be no real differences in the objects that our conceptions are reflecting. Conceptualism is devoid of anything like Frege's "concepts" (or Aristotle's "forms") as abstract objects.

Metaphysically, Conceptualism is therefore no different from Nominalism. It is a psychologistic theory, i.e. it attributes structures that we see in reality to structures imposed by the human psyche. Indeed, some structures in the world are imposed by the human psyche. There is nothing natural about a coffee pot, which is an artifact of human conception and human purposes. A Platonic Form or Aristotelian substance that is the objective existence of the abstract and universal coffee pot would seem to be the reductio ad absurdum of their theories as much as the "reasonably good eyesight" is of Mates's. The conventionality of such concepts provides a powerful argument for Conceptualism, as it would also for Nominalism.

If Conceptualism were merely the argument that there is not always an objective structure to correspond to the difference between essence and accident, this would be quite true. However, it seems to be the case that there is an objective structure corresponding to some essences, since there are natural kinds of things (dogs, feldspars, stars, flowers, etc.) whose identity owes nothing to human convention or purposes. Furthermore, since all attributes (properties) are universals, whether essential or accidental, this argument would be beside the point. Even conventional concepts are based on real characteristics. A coffee pot must hold coffee, and its ability to do so owes nothing to convention but everything to the nature of the materials and even the nature of space. Those cannot be altered, much as many would like to, simply by making some change in the conventions of our conception.

If a Conceptualist allows even a moment when real differences are recognized, then, however conventional the rest of the constructions, a fundamental element of Realism has been accepted into the theory. Thus, however conventional a fundamental unit of measure may be, this does not make all fundamental units somehow the same. A metre really is more than three times as long as a foot, which means they are commensurable, i.e. each can be converted into the other. Commensurability and conversion are only possible because of the independent, objective, and real natures of each.
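The point about commensurability can be checked by simple arithmetic. A sketch, using the conventional definition of the international foot as exactly 0.3048 metres:

```python
# Conversion between two conventionally defined units. The units themselves
# are conventional, but the conversion between them is determinate, which
# is what their commensurability amounts to.
FEET_PER_METRE = 1 / 0.3048   # 1 ft is defined as exactly 0.3048 m

def metres_to_feet(m: float) -> float:
    return m * FEET_PER_METRE

def feet_to_metres(ft: float) -> float:
    return ft * 0.3048

assert metres_to_feet(1.0) > 3.0   # a metre really is more than three feet
assert abs(feet_to_metres(metres_to_feet(2.5)) - 2.5) < 1e-12   # round trip
```

However the two units were chosen, each converts determinately into the other because each measures the same objective extension.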

For a true Conceptualism or Nominalism, incommensurability, both of measure and of meaning, must be possible, which is why we find that Nominalists and deconstructionists are eager to leap on W.V.O. Quine's (1908-2000) arguments for the "indeterminacy of translation." The problem of the metaphysics of universals thus overlaps the epistemological issues and theories examined elsewhere. A consistent Conceptualism is going to result in the same skepticism that we see in Hume or the same nihilism that we see played out in deconstruction, all because of the same denial of real universals and of meaning that has objective reference. Quine, like the deconstructionist Rorty, offers a muddled Pragmatism that obscures the non-responsiveness and question-begging nature of his thought.

Kant can be said to be a Conceptualist because of the manner in which the mind's activity of synthesis puts the concepts of reason into phenomenal objects in the first place. This is definitely a Conceptualist move. However, Kant's theory does not end up being a Conceptualist theory, or any kind of psychologistic theory, if Kant is to be taken seriously when he says that it is a theory of "empirical realism." This is commonly misunderstood. Thus, when Jerrold Katz says that "Kant's Copernican revolution . . . makes the existence of objects in the world depend on our cognitive faculties" [Realistic Rationalism], he is flatly contradicted by Kant himself: "Either the object alone must make the representation possible, or the representation alone must make the object possible . . . In the latter case, representation in itself does not produce its object in so far as existence is concerned, for we are not here speaking of its causality by means of will."

If the existence of objects were produced by representation alone, this is what Kant called "intellectual intuition." Only God would have intellectual intuition. Our actual ability to produce the existence of objects is not by means of representation alone, but by means of will; otherwise the existence of objects is "given" to us. Instead, Kant's theory is that the character of objects is in part determined by the nature of representation. Since this is also the very thing we see in contemporary physics, in quantum mechanics, it becomes very hard to reject Kant as some anti-realist without also a somewhat wishful-thinking rejection of this characteristic of physics.

To think, as often happens, that things-in-themselves in Kant are what are "really" real is to contradict the meaning of "transcendental idealism," which is that transcendent objects are only "ideal," i.e. subjective. Ruling out any order of transcendent objects, whose possibility always seems to be hovering in the background for Kant, clarifies the metaphysics, even at the cost of most of the subtlety of Kant's theory, and removes what confuses his realism. Kant, however, is correct in that we inevitably try to conceive of transcendent, which means unconditioned, objects. This generates "dialectical illusion," the Antinomies of reason. Kant thought that some antinomies could be resolved as "postulates of practical reason" (God, freedom, and immortality); but the arguments for the postulates are not very strong (except for freedom), and discarding them helps guard against the temptation of critics to interpret Kant in terms of a kind of Cartesian "transcendental realism" (i.e. real objects are "out there," but it is not clear how or that we know them). If phenomenal objects, as individuals, are real, then the abstract structure (fallibly) conceived by us within them is also real. Empirical realism for phenomenal objects means that an initial Kantian Conceptualism turns into a Realism for universals.

Kant's theory, indeed, is not the kind of realism that we see in Descartes, or that was evidently desired by Einstein, where objects exist as such entirely independent of subjects. Instead, phenomenal objects presuppose the subject, and we cannot say whether their properties are "really" objective or "really" subjective. This is how Kant's theory can be both a form of Conceptualism and a form of Realism at the same time. Thus, if the mind conceives abstract properties, abstract properties will be in objects, because objects are just the other side of the structures found in the mind. But it would be equally true to say that the structures in the mind are just the other side of those in the objects. The Aristotelian function of "abstraction," by which universal forms are taken from objects into the mind, in these terms is less mysterious: phenomenal objects are already in the mind, so the purely mental operation does not reach out into transcendent (Cartesian) reality to fetch the essences.

While Kant's empirical realism allows for an Aristotelian Realism of universals, it also means that we do not have to accept Aristotle's theory of substantial forms and of essence and accident. There are conventional concepts. Not all concepts therefore correspond to real essences. To think that they do is what Karl Popper called "essentialism" -- a good label for such an error, though the term is now widely used by "post-modern" nihilists to condemn any doctrine of essences or natural kinds. But there are natural kinds and real essences.

Real essences, however, must be due to something; they are not just self-generating. A clue may be found in the modern theory of DNA that has replaced the entelechy of Aristotelian "form." DNA governs the growth and development of organisms through the causal laws of nature. The natural kinds of plants and animals are thus the result of causal necessity. All essences, whether real or conventional, are the result of some form of necessity. The fixity and unchangeability of Plato's original Forms, "immanentized" by Aristotle, are artifacts of a form of necessity itself, the necessity of the perfect aspect, of time which has occurred (the past or the present perfect tenses, the opposite of Aristotle's own "future contingency"). The various modes of necessity and the nature of the perfect aspect are discussed elsewhere. Purely conventional concepts rely on the fact of their use, which is a function of perfect necessity, for the fixity of their own conceptual essences. The entelechy of a coffee pot is owing entirely to human purposes, and to no causal necessity, but it is functionally parallel, in human understanding, to natural kinds created by causal laws of nature.

If we distinguish between substance and attribute and identify some attributes as essential, this will mean, not that there is a hidden, underlying substance unifying the essence, but that such a notion of substance can be replaced by the forms of necessity, whether causal for natural kinds or purposive for purely human conceptions. This means that the ghostly skeletons of the Platonic Forms, brought down to earth by Aristotle, and uncomfortably inhabiting the transient individuals that we perceive, can be eliminated. The abstract features we conceive in individual objects are not different in kind from the objects, which are themselves artifacts of necessity (logical, a priori, perfect, and causal), but the living skeleton of the objects, in a phenomenal world where necessity and contingency are the structure of everything.

The fixity of our own concepts collapses all the necessities of reality into the fact of conventional usage, which Plato and Aristotle projected out into the world, even into the transcendent; but it is now possible to correct this. The Concept is not out among objects, as Frege would have it, but mental concepts do refer to some abstract structure grounded in some form of necessity. By the same token we can identify the ground of the "True" and the "False," which Frege saw as the reference of sentences, since the same necessities that unify real or conventional essences also unify predications in sentences. Kant's doctrine of the "primacy of judgment," indeed, subordinates the unity of concepts to the unity of propositions, which enables us to say that even analytic truths are of different kinds, depending on the necessity that unifies the properties in the concepts. "All placental mammals give live birth" is thus analytic of the concept "placental mammal," which is a natural kind based in causal necessity, while "All Hobbits are short" is analytic of the concept "Hobbit," which is a fictional artifact of J.R.R. Tolkien's Lord of the Rings and so dependent on the mere fact of the convention adopted by the imagination of Tolkien.
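The distinction between the two kinds of analytic truth can be sketched schematically. In the toy model below (the property lists are invented for illustration), a truth is "analytic" of a concept when the predicate is contained among the concept's defining properties, whatever kind of necessity fixed those properties:

```python
# Toy model: a concept as a set of defining properties; an analytic truth
# as containment of the predicate in that set. What differs between the
# two cases is only the necessity (causal vs. conventional) that fixed
# the definitions, not the containment itself.
concepts = {
    "placental mammal": {"animal", "gives live birth"},   # natural kind
    "Hobbit": {"person", "short"},                        # fictional convention
}

def analytic(predicate: str, concept: str) -> bool:
    return predicate in concepts[concept]

assert analytic("gives live birth", "placental mammal")
assert analytic("short", "Hobbit")
assert not analytic("short", "placental mammal")
```

Both assertions hold in exactly the same way, which is the point of the passage: analyticity itself is uniform, while the necessity grounding each concept differs in kind.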

The modes of necessity are interrelated with the modes of contingency, so that perfect necessity is contingent in relation to a priori necessity, a priori necessity is contingent in relation to logical necessity, and logical necessity is contingent in relation to an “ur-contingency” that would transcend non-contradiction. Each mode of contingency, in turn, represents the possibility of something different from what we see in each subsequent mode of necessity. The very possibility that, in time, we can open the window or make some other alteration in reality is a case where we deal with the contingency of present time and our ability to bring about some new possibility. What this adds up to for universals is that as forms of necessity they represent the rules and guideposts that limit and direct possibility: universals represent all real possibilities. Thus, what Plato would have called the Form of the Bed really just means that beds are possible. What would have seemed like a reductio ad absurdum of Plato's theory, namely that if there is a Form of the Bed, there must also be a Form of the Television (which would thus not be an artifact or invented object at all, but something that the inventor has just "remembered"), now must mean that the universal represents the possibility of the television, a possibility based on various necessities of physics (conditioned necessities) and facts (perfect necessities) of history.

Where the power of possibility comes from is a factor unaddressed by Plato. In Aristotle it is represented by matter, which is power and potential; but then matter is so intrinsically amorphous, merely the passive recipient of actualizing "form," that the Neoplatonists identified it with Not-Being (and evil) -- quite apt when Prime Matter, or pure potential, is not actual at all and so in fact doesn't exist -- and both Aristotle and the Neoplatonists eliminated any material component to God (or the One). Rather awkwardly, this left Aristotle's God literally "powerless": He is already perfectly actual, which means that He cannot do anything that He is not already doing. This could be defended theologically, on the ground that it would be an insult to God's foreknowledge and wisdom if anything had been left undone that He is going to have to take care of in the future; but at the same time it does seem an insult to His Omnipotence that He cannot just decide to do something new.

The failure of Aristotle's theory lies in overlooking that necessity and possibility are interrelated, that actualization does not "use up" possibility, and that what is truly actual, the phenomenal objects in the world, consists of contingent individuals and not the necessary universals of "form." In Spinoza's metaphysics, individuals as natura naturata ("nature natured") are the visible products of coming-into-being, but the creativity of Spinoza's God is limited by a determinism that makes every event a complete product of necessity, with no contingency, and so no radical possibility, at all.

Intentionality has often been seen as the distinctive mark setting human life apart from life in general. This position has been criticised for its implied dualism, and Daniel Dennett, among others, has put forward eliminativist accounts of mental phenomena in humans. From an evolutionary point of view absolute dualism is of course unacceptable, but rather than eliminating the peculiarities of human experience this paper suggests tracing the evolutionary origins of intentionality. It is suggested that human intentionality be seen as a special case of a more general category termed 'evolutionary intentionality'. Evolutionary intentionality is connected to the dynamic behaviour of systems based on code-duality, i.e. the perpetual reshuffling of messages back and forth between digital codes (DNA) and analog codes (organisms), which is the core of heredity, or semiotic survival.

Mental processes such as expectations, desires or imaginations are always 'about' something. If I expect you to listen to me, then this expectation concerns something which is not a part of myself. This 'aboutness', whatever it is, seems to be totally absent in the physical world. A rock or a river does not represent other states of affairs. We might treat them as representations, but in themselves they are not representations. Intentionality was the term Brentano introduced to characterise the idea that mental states have content (Brentano 1874/1973). And it has often been claimed that intentionality is the distinctive mark setting human mental life apart from all other phenomena in this world.

This idea of intentionality conceived as an exclusively human property has been challenged from mainly two corners in recent times. First, biologists and psychologists concerned with so-called evolutionary epistemology have suggested that intelligent animals, too, might possess a kind of intentionality; second, researchers in the field of cognitive science have claimed that in principle even computers might exhibit intentionality. Such a claim is all the easier to maintain if, as the philosophers Patricia and Paul Churchland have suggested, concepts such as "mind", "consciousness", or "rationality" are the "ghosts" of our language, concepts without any real content, "neural phlogiston", so to say (Churchland 1986). Daniel Dennett's writings lead us to the same position, although he at least admits the heuristic value of the intentional stance. According to Dennett, "intentional systems" should be explained and predicted as if they represented things external, but this does not mean that such systems have any intrinsic intentionality (Dennett 1987).

John Searle has given a forceful philosophical criticism of these eliminativist accounts of mental processes (Searle 1992). Searle sees the discussions in cognitive science about intentionality as yet another version of the old discussion about qualia. He maintains that "first person" experiences, such as the feeling of a toothache, cannot logically be reduced to "third person" (e.g. neurobiological) descriptions. Therefore, although as a materialist he admits that all kinds of experiences are caused by the physico-chemical structure of the brain, he also thinks that the eventual description of such causes would still not grasp the fundamentally subjective feeling of these experiences, i.e. the intentionality as such would not be part of such descriptions.

While I tend to share this criticism of Searle's, and especially his denial of the conceptualisation of intentionality as an instantiation of a computer program, I also think that his approach leads to an unnecessarily hermetic concept of intentionality. Following the categorical system of Charles Sanders Peirce, the founder of the American semiotic tradition, we can say that intentionality (and qualia) belong to the general category of thirdness, which has to do with thought and evolution. And it is the aim of the present paper to demonstrate how a biosemiotic, i.e. a sign-theoretical, reframing of biological theory may help in justifying an evolutionary account of intentionality.

Both Sartre and Merleau-Ponty saw self-awareness as central to consciousness and intentionality, and this self-awareness should be understood as a "pre-reflective cogito", a consciousness which was there without being reflected upon at all: "It is the non-reflective consciousness which makes reflection possible: there is a pre-reflective cogito which is the condition of the Cartesian cogito" (Sartre 1943). Now, Merleau-Ponty's radical position is that this pre-reflective self-awareness must from the very outset be contaminated by "otherness" or alterity, otherwise intersubjectivity would be impossible: subjectivity cannot consist simply in self-presence, because if I were given to myself in an absolutely unique way, I would lack the means of ever recognising the embodied Other as another subjectivity (Merleau-Ponty 1945). This argument is further based on Merleau-Ponty's conception of subjectivity as essentially incarnated. To exist embodied is neither to exist as pure subject nor as pure object, but to exist in a way that transcends this distinction, i.e. the opposition between "pour-soi" and "en-soi". That self-awareness is intrinsically an embodied self-awareness implies a loss of transparency and purity, and only therefore is intersubjectivity possible.

As Dan Zahavi explains: "When I experience myself and when I experience an Other, there is in fact a common denominator. In both cases I am dealing with incarnation, and one of the features of my embodied self-awareness is that it per definition comprises an outside: I am always a stranger to myself, and therefore open to others" (Zahavi 1996).

Merleau-Ponty was writing in a context of transcendental philosophy which is rather incompatible with the evolutionary concerns of the present paper. I nevertheless think that important aspects of his conception of intentionality are represented in the fundamentally triadic structure shown in figure 1. And I believe that it is exactly this triadic nature of the mental sphere which makes it resistant to the aggressive 'scientification' launched by cognitive science. The triadic structure cannot be reduced to a combination of dyadic relations, since intentionality depends on the totality of the triad. It thus formally resembles the triadic sign relation as conceived by C. S. Peirce. According to Peirce, "A sign, or Representament, is a First which stands in such a genuine triadic relation to a Second, called its Object, as to be capable of determining a Third, called its Interpretant, to assume the same triadic relation to its Object in which it stands itself to the same Object" (Peirce 1955). Thus, in Peirce's philosophy the Interpretant represents a category of "thirdness" that transcends mere causality, which he saw as "secondness".

All computer programs are completely based on Peircean "secondness", i.e. syntactic operations, since the application of the rules governing the manipulation of the symbols does not depend upon what the symbols "mean" (their external semantics), only upon the symbol type. The problem is not only that the semantic dimension of the mental cannot be reduced to pure syntactics. As Peter Cariani explains, there is "no logical structure for the whole world so that the sign embedded in a logical 'model' bears a definite logically-necessary relation to the world as model" (Cariani 1995). The problem rather is that the semantic level itself is bound up in the unpredictable and creative power of the intentional, goal-oriented embodied mind. The Other is a Representamen which determines an Interpretant (self-awareness) to assume the same triadic relation to the body in which the Other itself stands to the body. What we should learn from this analysis of intentionality, subjectivity and self-awareness is not that these phenomena are forever beyond the horizon of science. Rather we should learn that the key to a scientific understanding of the mental is embodied existence and not the fictitious idea of disembodied symbolic organisation which appeals so strongly to the arithmocentric minds of traditional scientists. Cariani has pointed out that "Virtually all symbols are associated with biological organisms, whether for communication, control, or construction, and whether at a cellular, organismic or social level. We cannot understand symbols fully until we understand their role in the organisation of life" (Cariani 1995). A biosemiotic understanding of evolution seems to be the key to a scientific understanding of intentionality.

Cognition seen as an evolutionary product has been studied by evolutionary epistemology. Unfortunately, much work in this fascinating area has been guided by too simplistic conceptions of human cognitive abilities. Thus, much of the early work on the linguistic capacities of apes was later shown to be inadequate (Sebeok and Umiker-Sebeok 1980), and sociobiological theorising generally commits the error of misplaced concreteness when personality traits are reified as natural objects of indubitable ontological status. The fundamental challenge for evolutionary epistemology, as I see it, is to accept that Peircean "thirdness" is real. The intentionality of human mental life is not just a "ghost", and yet it must have evolved from something else; it must have been present as a germ in our closest animal relatives. In a strange way Merleau-Ponty himself gives us a cue when he observes that 'originally consciousness is not an "I think that" but an "I can"' (Merleau-Ponty 1945). Nervous systems and brains belong to animals - they never appeared in plants - and from the evolutionary beginning their function was to guide body-action, behaviour. It is a well-known fact that animals can and do dream. This implies that mental states can be uncoupled from bodily action. But the extent of uncoupling between behaviour and mental activity which characterises the human mind is probably unique to that specific animal. The uncoupling makes philosophers wonder how it can be that mental states are always 'about' something. But this is because they don't consider that mental 'aboutness', human intentionality, grew out of a bodily 'aboutness'. Whatever an organism senses also means something to it: food, escape, sexual reproduction, etc.
This is one of the major insights brought to light through the work of Jakob von UexkĂĽll: "Every action, therefore, that consists of perception and operation imprints its meaning on the meaningless object and thereby makes it into a subject-related meaning-carrier in the respective Umwelt" (UexkĂĽll 1940/1982). "Umwelt" was UexkĂĽll's term for the phenomenal worlds of animals, the subjective universe in which the animals live, or in other words the ecological niche as the organism itself perceives it.

Rather than pursuing the question of animal intentionality (see Sebeok 1986 for interesting examples), I shall address the question of intentionality as an even more general category of life, an evolutionary "aboutness" or evolutionary intentionality, i.e. the anticipatory power implicitly present in all systems based on code-duality (Hoffmeyer 1995, Hoffmeyer and Emmeche 1991).

Code-duality refers to the fact that living systems always form a unity of two coded and interacting messages: the analog coded message of the organism itself and its re-description in the digital code of DNA. As analog codes, the organisms recognise and interact with each other in ecological space, giving rise to a horizontal semiotic system (the ecological hierarchy of Salthe (1985)), while as digital codes they (after possible recombination through meiosis and fertilisation in sexually reproducing species) are passively carried forward in time between generations. This of course is the process responsible for nature's vertical semiotic system, the genealogical hierarchy (Salthe 1985). Thus, heredity should be understood as 'semiotic survival' (Hoffmeyer 1995).

Code-dual systems are anticipatory in the sense that the digital code (the gene pool) records specifications which worked well enough in the past, and which are then used by the analog coded organisms to cope with the immediate future, thereby eventually assuring semiotic survival into the more distant future. This of course is anticipation in the most primitive sense of extrapolation from the past (as is most human anticipation). But the fundamentally semiotic character of this system assured, very early in evolution, the creation of sense faculties to strengthen anticipation. Let us now consider an example of evolutionary intentionality (Hoffmeyer 1995b). The Malayan praying mantis, Hymenopus bicornis, is pink and rests on the flowers of Melastoma polyanthum, which it closely resembles in colour and shape. Insects attracted to the flower are caught by the mantis. Clearly, the mantis falsely 'pretends' to be part of the flower. This is as good an example as any of what I propose to call an evolutionary lie. Here of course no mental processes are at play; the mantis doesn't know that it fools the insect. But if analysed at the time scale of evolution, the intentionality of the deception is hard to overlook.

The deception was in fact intended in the sense that the 'aboutness' of the evolutionary lineage of our mantis's ancestors, i.e. its inherent project of surviving, made the lineage select a strategy which it had 'learned' was effective in deceiving the prey. The term "select" in this context is meant to imply that the lineage as a historical entity is capable of measuring niche conditions and interpreting them in terms of its own historically appropriated behavioural capacities, including its reproductive potential. In this understanding, the single mantis doesn't lie, but it is nevertheless an integral part of the lying lineage to which it belongs. Seen in the historical setting in which the adaptation took place, the 'resemblance' between mantis and flower was meant to be a (false) 'representation', i.e. it was a lie. Lying here takes place, not at the level of the individual, but at the level of the lineage. If it is objected that evolutionary lineages cannot possibly form representations and that therefore they cannot do anything semiotic, I think the answer will be that such a claim presupposes a very narrow conception of what a representation is. For comparison, let us consider the case of a human visual representation, e.g. that of a person who has had the bad fortune of witnessing a man falling to his death from a balloon. The icon formed in the mind of this person will be some mental representation of a complex and changing pattern of a firing collective of neurones coupled to a whole lot of other bodily processes. In the evolving mantis lineage, on the other hand, what we see is that the circumstances, i.e. the fact of preferred bugs feeding on pink flowers, caused an icon to form in the lineage consisting in the phenotypic behaviour of climbing certain pink flowers. This phenotypic behaviour is no more and no less causally connected to the feeding habits of the bugs than the vision of a falling balloon is causally linked to the actual case of a falling balloon.
In both cases a representation takes place. In the case of the lineage, the behaviour is a phenotypic representation of patterns of gene expression which in turn represent the natural history of the lineage. In the case of vision, too, the relation between moving objects and firing neurones is based on personal experiences (a baby cannot form this kind of icon).

Generalising from this example we can now represent evolutionary intentionality graphically as a triadic structure which is formally analogous to the triadic structure of human intentionality. The ecological niche is a sign or Representamen which determines an Interpretant (the actual pattern of life and reproduction) to assume the same triadic relation to the lineage in which the niche itself stands to the same lineage.

Ecological niche conditions thus occupy the same logical position in evolutionary intentionality as "otherness" occupies in human intentionality. At first this suggestion may seem strange, but it should be understood in the light of Jakob von Uexküll's Umwelt theory (Uexküll 1940/1982). The Umwelt of an organism is to a large extent a species-specific Umwelt; e.g. the Umwelt of bees will generally contain a lot of vision in the ultraviolet range which is not part of the human Umwelt (unless we take advantage of our technical skills). The Umwelt represents a kind of collective memory created through the phylogenetic history of the lineage under the given ecological niche conditions. The Umwelt therefore represents a biological counterpart to the internalised otherness at the basis of human self-awareness.

The actual pattern of life and reproduction obviously takes the position of the Interpretant. This pattern refers to the lineage since it is in fact incarnated in the body of the lineage and thus is reflected as hereditary changes in that "body" over time. Niche conditions are represented as survival strategies of the populations which constitute the lineage. But the core of this whole dynamic system is code-duality. The objectivity of the digitally coded message (the pool of genotypes) and the subjectivity of the analog coded messages (the corporeal organisms) constitute the biological counterpart to the "pour-soi" and the "en-soi". Just as in Merleau-Ponty's conception the non-coincidence of the subject depends on the self-referential temporality of the body-mind, so the non-coincidence of the lineage is based on the self-referential temporality of heredity, i.e. on the perpetual translations back and forth between the digital and analog versions of the message through the processes of reproduction and ontogenesis.

This paper thus agrees with Searle that the essence of human intentionality cannot fully be captured through third-person descriptions, but it denies that human intentionality is categorically distinct from the phenomena of the natural world. Following the semiotic track laid out in the philosophy of Peirce, it is claimed that human intentionality has emerged as a peculiar, corporeally individualised instantiation of a more general thirdness which is embedded as an irreducible element in the process of organismic evolution: evolutionary intentionality.

The objective is to examine the philosophical relevance of mind techno-science (MTS) and to explain why philosophy finds itself in a paradoxical situation where it cannot ignore this new field of knowledge and at the same time has to reinvent itself outside its realm. In order to reach this objective, it is necessary to clarify the present interactions between artificial intelligence (AI), cognitive sciences (CS), virtual reality (VR), the humanities, their present conjuncture (post-modernism), and other issues that will be progressively conceptualized. The reason for the connection of these different fields of research seems obvious but is, in fact, less than clear: the form and content of this connection raise questions that cannot be answered in any one of these fields alone. Dealing with this general problem not only requires finding the proper information and methodology; it requires an understanding of the epistemic conjuncture at its core. The questions are many, all more or less confused: in what sort of epistemic conjuncture does post-modernism find itself? why are AI and CS in a situation beyond the reach of their actual practice, one they nevertheless cannot afford to ignore because it concerns their epistemic and academic environment?

Already I hear the protests from many readers: French fog. Indeed, my perspective will appear at first non-analytical, even anti-analytical. But the overplayed opposition between the two traditions takes, in this precise case, a distinctly different aspect: it is between clarifying the already largely debated problems and questioning these very problems through an analysis of their presuppositions. The risk is fully accepted; my view concerns the forest more than the trees. It concerns the forms of argumentation at the root of these problems and the way to deal with them. A two-layered reading scheme is herewith proposed: the first layer at the level of the global argument, the second at the level of the various problems crossed by the first one and usually discussed by cognitive and AI scientists and philosophers. This perspective asserts that the first level has its own relative autonomy, and that it can be analysed with a rigour which, given its intrinsic intricacies, satisfies the minimal standards of the analytic tradition. If some parts of the argument do not seem satisfactory, I hope they can be rectified to open the way to a proper knowledge. Philosophy in any case cannot pretend to deliver much more.

The starting point is a common sense question: how can one assert that the various sub-disciplines covered by the notions of AI and CS are generating knowledge which can be transferred to the humanities in order to provide knowledge of what is called mind in this field? Is the transfer able to preserve the knowledge value of what is being exported from one field to another?

According to present research in philosophy and historical epistemology in the field called "humanities," mind is not a substance; it is a function within a symbolic order. This order is constituted by a hierarchy of different disciplines that remained relatively stable during a certain period of time, until the end of the nineteenth century. Indeed, since the 1850s and 1870s, successive mutations in logic, physics, and mathematics have deconstructed this symbolic order to an extent which seems (at least to me) not fully evaluated even today. The function at the core of the symbolic order had been hypostasised by the philosophical tradition in a conception of the mind, of its capacities (faculties), of its assignments in society, culture, and/or civilization. In any case, the historical hypostatization of this function cannot be taken for a knowledge of the mind, but it has effectively opened the possibility of transforming the function of the mind into an object of science, even of experimental investigation, from the mid-nineteenth century on.

This function has imparted to the mind different roles, the most important of them being the origin of knowledge through the different faculties with which the mind was endowed in order to satisfy the function it was given within this symbolic order. So the mind came to be known and understood as the foundation of all sciences. The real sense of this is the following: in return, any development of the sciences and the knowledge they produce is to be referred to the activities of the mind and thereby contributes both to its development (the historical unfolding of its virtual capacities) and to its own knowledge. Mind knows itself through the development of the different forms of knowledge it makes possible. In this symbolic order centred on the function of the mind, the role of philosophy is essential: its role is to extract from the sciences the knowledge of the mind they carry and to refer this progress back to the mind as a deepening of the knowledge of itself necessary to accomplish its assignment. This construction of the mind through its function within a symbolic order has produced, since the seventeenth century, a major ideology: the progress of the sciences, being a progress of the knowledge of the mind, is a progress of all the individual minds and, as such, a progress of humanity or mankind. In his late works, Husserl clearly expressed this idea and the consequences of its regression.

The humanities are a set of disciplines at the core of a symbolic order; they are regulated by philosophy. These disciplines, developed in the intimacy of the modern mind, are supposed to be its closest expressions, the fulfilment of its powers, the medium of humanity. The modern conception of man is built up through the humanities as the presence of the mind in the world. Within the humanities, philosophy is defined as the exercise of reason. What is reason? Reason is supposed to be anchored in the mind as the origin and canon of all its activities; it exhibits and actualizes itself when it extracts from the different fields of knowledge that which concerns the mind, so that the mind recognizes itself in its own productions. Reason is the self-reflection of the mind, the mind in search of itself in its activities. Philosophy, as reason at work in the mind, is the mental process in which all the different expressions of the mind are related to each other in the understanding of their origin. Its duty is to associate (even integrate) each individual mind, and its constructs, in the generic mind of mankind. So philosophy constantly weaves the humanities with their different historical patterns; it asserts their coherence within the concept of man as origin and end of all knowledge.

In such a brief summary, the argument may appear slightly ridiculous, as strange as a summary of any myth of an ancient people of the Near East or Africa. But this mythology has been repeated for so long in Europe and America, and it has produced such wide effects, that its failure at the end of the nineteenth century, and its fast withdrawal mostly since the 1960s, leave a void and a nostalgia that the majority of philosophical research tends simply to fill, explicitly or not. The mind techno-science is reaching philosophy and the humanities in this precise context. One idea is to be obtained by this approach à la Foucault. There is no doubt that AI and CS are progressively building an effective knowledge of what they define as mind. But in no way can the inter-discipline emerging at their intersection satisfy the modern function of mind. Neither their programs, nor their results, nor their internal debates can be interpreted inside the modern symbolic order, within this hierarchical organisation of different disciplines that had an endogenic development from the European sixteenth century until the end of the nineteenth. The body of knowledge being effectively produced cannot be referred to the modern mind conceived as the origin of different faculties and at work in the knowledge gained from them. The mind techno-science cannot have as its goal the deepening of the knowledge man (the subject of the humanities) has of himself as the origin and end of all human things. It cannot pretend to participate in the spiritual betterment of humanity, or to restore a vanished order.

The reason is that the conditions of the formation and coherence of the modern humanities are no longer satisfied. The traditional part played by the humanities in culture and society has vanished. The crisis of the humanities is not only a fashionable theme in the humanities departments of the industrial-world universities. Since the end of the nineteenth century it has been a fact, an epistemic situation, the consequences of which are difficult to fully assess. The humanities crisis is the most obvious consequence of a deeper transformation concerning the symbolic order that aggregated the different fields of knowledge inside one another. Physics and mathematics dropped out in the 1880s. They no longer referred to philosophy, and through it to the activities of a mind: they were building within themselves and by themselves their own foundations. This explains why the humanities are nowadays mostly reduced to philosophy, and philosophy itself divided between a quest for a back seat in the sciences and literary theory.

AI and CS are rising in this very peculiar epistemic conjuncture. A place has been left vacant to be occupied. Professional philosophers are still being trained in the different modern schools. A reconstruction of the modern function of philosophy is possible, even anticipated and asked for: the roads are drawn, the problems are well known (mind/body, mind/brain, physical/physiological, natural/artificial, etc.). The philosopher E. Husserl even tried at the beginning of the twentieth century to reconstruct the modern conception of philosophy: perhaps he failed because he did not have a proper conception of mind at his disposal! Now new answers from the mind techno-science can be provided; they are able to justify the old questions of the philosophical tradition. A ground knowledge can be deciphered through the controversies of the scientists and engineers who are ignorant of philosophy. The grand program of a reconstruction of the humanities can be designed. The present conjuncture is certainly an ambiguous opportunity for philosophy, but there is no such place to occupy, no such function to fulfil. The function has vanished. AI and CS are not coming to save the humanities. Neither are they going to take their place, because philosophy has failed to play its role. The mind techno-science will not fulfil Husserl's utopia to transform philosophy into a science.

Because of its methodology, problems, and criteria, the analytical tradition seems, for the moment, bound to reconstruct itself in the cognitive sciences: it feels itself independent from the epistemic conjuncture. Paradoxically, a style of philosophy coming from research as diverse as that of M. Foucault or P. Bourdieu has the potential to overcome the modern frame of philosophical problems and even to arrive at the rigour it has been missing. The mind techno-science emancipates philosophy from its modern function. This is why it belongs to the post-modern epistemic conjuncture. The humanities cannot be revamped by the mind sciences, only further deconstructed. Philosophy has to overcome its nostalgia and explore the virtualities of the present situation, the post-modern experience.

The epistemic conjuncture is more intricate. Even if AI and CS research is at a loss to provide the reconstruction of the humanities, even if this historic mythology is forgotten, the sciences of the mind raise important epistemological questions. The French epistemological tradition holds and shows that each science develops itself by the construction of its object. Through its concepts, formalisms, and experimental procedures, a theory filters the phenomena and thereby generates a quasi-object reduced to a set of parameters that can be experimentally studied. This quasi-object is not a mental construction. It grows within the development of a theory and its experimental basis and indicates the type of properties an object has within a discipline or sub-discipline. It cannot be separated from the theory to which it is linked or from the instruments by which this theory develops the different experiments through which it proves and disproves itself. Even sciences at a primitive stage of their development, when they are not yet clearly cut off from folk knowledge, are already constructing a quasi-object. The object of any science is always, as coined by Bruno Latour, a "hybrid," indistinctly natural and artificial.

This entails two major consequences. First, it cannot be asserted that reality can be reduced to what is known by a science. Second, there is no other way than science to know what reality is. So the Real (what reality is) cannot be called upon outside of science, through philosophy, belief, intuition, theology, or poetry. But in return, the different sciences do not provide societies with a unified or unanimous knowledge of what the reality they study is in itself. Scientific knowledge cannot be cut off from the methods through which it is produced: the objects, the reality, or the levels of reality any science investigates are defined by a theory and its method of experimentation.

From this epistemological point of view, it follows that AI research cannot state what intelligence is by itself. But intelligence cannot be known outside of the different sciences that are being built. This is why cognition is the quasi-object of the mind sciences. Cognition is not the object they are trying to know as if it existed by itself. Cognition is being constructed according to the development of these sciences and their interactions. It is a concept by which these different sciences give an operational name to the quasi-object they are producing. Intelligence, as the essence of the human mind, does not have to be protected from mind techno-science, nor is it necessary to prove and explain at length that intelligence is not what these sciences are studying. The epistemological explanation is a sufficient answer that should dry up many popular and philosophical debates rising from the ghost of the humanities.

The epistemic conjuncture and its problems are much more complex. Indeed, the present epistemological situation of the mind sciences is ambiguous and partly explains the philosophical temptations denounced above. As they progress from computer-science models to the connectionist paradigm, they overcome the initial behaviorist model dictating the processes being studied: the initial models were simply falsified by the very processes they made it possible to investigate, and they had to be refined as new ones were slowly proposed. In this situation, the mind sciences require a finer description of their quasi-object, based on more complex conceptual models. The filters have to change, and they have been changing over the last fifteen years. But precisely because these sciences investigate at the same time by experimentation and by computer simulation, they are not, for the time being, able to define and construct by themselves this hybrid (cognition, intelligence, and their different modalities) which is their quasi-object. Within their investigation, a type of cognition has to be so drastically reduced that the mind sciences cannot pretend to explain what they are supposed to. At the same time they need a full conception of this object in order to reduce it to the parameters at their disposal. This is their present and temporary epistemological deficit.

So the mind sciences find themselves in the position of requiring a pre-description of their object. They have to look for it outside, import it from outside, because they do not yet have the theoretical means to build the filters within which the effective cognitive or intelligent processes could be analysed into related parameters, so that they could be reconstructed and tested. Of course, this is how the hard mind sciences are already progressing, but they are still under the influence of folk psychology and the pre-description of cognitive behaviours. In any case, the problem is not that intelligence or cognition cannot become an object of science, but that the present reduction is too strong and requires being related to different pre-descriptions outside the mind sciences.

But here philosophy enters the game. The different historical schools amply provide, for the time being, such pre-descriptions, because their linguistic, self-reflective methods based on the potentials of natural logic were the only ones available to describe basic mental processes such as belief, cognition, attention, perception, intention, etc. Phenomenology and its different trends provide an important and more recent stock of relatively well-refined descriptions of mental states and processes. These schools can provide these badly needed pre-descriptions, and their specialists can revive them and position themselves in the very development of the mind sciences.

This is a false conception of the present situation of philosophy. It is just a way for modern philosophy to continue its routine and even pretend to provide (unexpected) true (scientific) answers to old problems. Everybody seems satisfied in this false association: modern philosophical inquiry seems justified instead of being disqualified, and the mind scientists gain some ideological prestige they do not even need. Indeed, if the epistemic conjuncture concerns the global organisation of knowledge, the epistemological situation concerns the state of development of a discipline or of a theory. The epistemological situation of mind techno-science explains why it is so concerned with philosophical issues, but it also explains why some philosophical schools find so much interest in them: they can recycle their presuppositions with fresh data, launch debates, and even provide guidelines or orientations. Epistemology teaches that the present situation is only a temporary step. The next one is all the easier to predict because it has already happened: the formation of the connectionist paradigm shows how the mind sciences are becoming able to provide the filters for their own descriptions of the cognitive processes they are investigating. They are in the process of reducing their dependence on the linguistic self-descriptions provided by philosophy and folk cognitive psychology. Connectionism attests to the emergence of the mind sciences as the autonomous inter-discipline I have been calling "mind techno-science." A decisive step has been reached.

In such a situation, the domain of modern philosophy is even further contested. The problem is not at all that the mind has become a proper object of science; that has been the case since the 1850s. The problem is that the mind sciences have become able to construct themselves outside of the conception of philosophy which pretends to decide what mind is or is not, if the knowledge to be gained is possible or not, valid or not. Mind techno-science, by becoming autonomous, implicitly shows that even the analytical approach is neutral regarding its development. Just as physics and mathematics had become autonomous in the late nineteenth century, a science and a technology of the mind have become possible. This mind techno-science cannot even become a substitute for the humanities, a ground knowledge: the positivist dream is no longer feasible, simply because the order of knowledge (the web of interactions between fields of knowledge and practices) is no longer organized in a way to make it possible. The exercise of philosophy has become external to the knowledge of mind. Philosophy cannot pretend any longer that mind is its sanctuary, a strange object appearing to itself when it is described and analysed by this peculiar use of language called philosophy. Philosophy finds itself outside the mind, the mind of the philosophers as well as the mind of humanity or mankind. In fact it seems (for the moment at least) nowhere and everywhere.

This final uprooting of philosophy need not be dramatized. In the present situation, the task of philosophy is certainly difficult to perform, but it is at the same time quite obvious. First, it is necessary to avoid any pretended fear or anxiety about the (re)constructed mind, virtual (parallel) reality, or artificial (non-natural) intelligence, as if we were awaiting a new Frankenstein under the cover of a Heideggerian conception of technè. Any form of post-modern blues or pathos (the end of all things modern, the philosopher as guru) is quite superfluous and rhetorical. The path is actually predictable. Philosophy has to learn from the mind techno-science what mind is: not what it is outside of these sciences, but how it is constructed, debated, and investigated in the formation and development of this inter-discipline. Philosophers of the mind have to internalize their investigations within the mind sciences; they will probably have to become mind scientists in order to become their epistemologists. There is one obvious reason to justify this tentative assertion: many mind scientists have de facto become the epistemologists of their discipline, and this work inside their own practice has played a major role in the various developments of their field. Regarding language, just as there is a parapsychology, philosophers have in the past been able to play the role of para-linguists: they pretended for a long time that they were producing some knowledge of language, even if it was and still is difficult to establish its clear status. Somehow this prospect seems doomed regarding mind techno-science: its epistemology is already at work. This is the reason why I think modern philosophy has no regeneration to expect from the formation of a mind techno-science. It is just another proof of the need and opportunity for philosophy to reinvent itself, as it always has throughout its history.
But the present conjuncture cannot be compared with the 1920s, when the young Heidegger understood that the programme of his master, Edmund Husserl, was impossible to fulfil and had become a utopia. The modern philosophy of the subject could not be reconstructed in order to save the role of reason or the function of philosophy in European civilization. The epistemic conjuncture was a dramatic philosophical situation: if this reconstruction could not be properly accomplished, it meant that the new sciences of the late nineteenth and early twentieth centuries could not improve the knowledge of the mind required for the progress of humanity. Certainly Heidegger's solution has been worse than what he denounced as impossible. But his thought has been thoroughly developed, studied, and well enough understood. Who can pretend nowadays that philosophy can rediscover what it was before (and therefore after) it was linked to science, the individual subject, modern society, etc.? The Heideggerian solution is achieved, as are its pseudo-scientific opposites. Philosophy cannot ignore the development of mind techno-science.

Still, even if he provided the wrong answer, Heidegger has left us with the right question. The question of thought is indeed the relevant one, as long as thought knows how to invent and discover what it could be by experimenting with its position and relevance within the different fields of knowledge. At present, philosophy seems possible only as thought producing itself in an order of knowledge that nobody knows but that everybody practices in his research. Indeed, thought is concerned by the mind sciences, but neither to be digested by them as their epistemology, nor to ignore them and express its own possibility as fiction (fabula) or as a form of literature (d'écriture).

So the role of philosophy is not to reflect upon the mind sciences, but to think how thought is concerned by the development of mind techno-science, because it investigates what thought is and what thinking is in a mind. This problem becomes possible at the interaction between mind techno-science (AI, CS, neuro-physiology, etc.) and the practice of thought. What does thinking become when the various operations that have traditionally defined thought as cognition are being simulated and mechanized, i.e. become reproducible by artefacts and machines, even if these artefacts are very abstract and formal ones? Then a non-modern distinction between thought and intelligence becomes necessary, because in the present situation the problem concerns not only what an intelligent behaviour is, but the very intelligence of thought. Intelligence indeed has many forms and many levels, but it can only be known or investigated as a type of response to some change in an environment, such that the intelligent subject or entity reaches, through this intelligent process, a better (or new) adaptation to its environment and/or is capable of preserving or developing its autonomy. Thought, to be intelligible/intelligent, requires being treated as a behaviour or process. This very situation changes the relation between intelligence and thought. It forces thought to gain a new intelligence of intelligence, and thought is profoundly transformed by this situation. This experience seems to me one of the most radical questions for philosophy at present.

This proves that the situation of thought and intelligence is at the core of mind techno-science. Not only does it guide its development, but it presides over the progressive association of the different fields of research composing it today. It has basically been a technological question since the 1940s, and an analysis of this technology can clarify the question and situation of thinking today, as well as some problems raised by the relations between AI, CS, virtual reality, etc. I will call it "Intelligence Technology" (IT) in order to exhibit that it is not a technique, a means to realize some goals under the guidance of some ideal (man, reason, spirit, etc.) or under the power of some interest (economic, etc.). This technology generates within itself its conception of thought and makes possible at its border another conception of thinking. As already shown, the humanities, either as a modern ideal or as academic institutions, are not directly concerned by this question, except through the very possibility and relevance of philosophy.

The question reads: is there any intelligence in Intelligence Technology? The answer is the opening of another thought that can only be proved in action. This interaction within thinking, between thought and its intelligence, is the question: no theory can be made of it; it just has to be tried out. But I certainly do not intend to take a heroic stand and enunciate what thinking is today. On the contrary, the situation need not be dramatized, because it has already occurred in the history of philosophy, even if the problems to raise and the answers to provide have to be original. Indeed, in the early seventeenth century, Descartes saw that analytic geometry was introducing new ways of organizing thinking, a new form of intelligence. It did not concern the mind itself but its conception, not cognitive behaviours as the spontaneous activities of this mind, but a conception of knowledge and thought superimposed on the mind. A new mind was not constructed; rather, a new image of the mind in its act of thinking was constructed, and a new definition of man became possible. This is what Descartes called method, and he formulated its basic rules, not for them to be simply applied and followed, but to exhibit that a new organization and practice of thought were possible, that they could be explored, and that the results of the exploration could transform the different fields of knowledge, and even open up new ones. His work was very dependent on the order of knowledge that he was at the same time contributing to establish. This "method" could be called today a model of rationality.

It has probably been proven by now that I am no Descartes, but the situation of philosophy in his epistemic conjuncture was quite similar to ours. IT is offering a new method, and its basic rules or steps can be formulated; they were born in computer science and information technology, and my objective is to show that they play a major role in mind techno-science. The description of these rules will not teach anything new to anyone working in these fields, but this is precisely the reason why it is so important to exhibit them.

The form of what is given (investigated) is a behaviour, a process, or the function of a process. So the function always supposes a process, and every process expresses a function. The first step of the method is the description of the process, i.e. its analysis in order to discern its different phases, the elementary functions composing it. This analysis is the uncovering of the structure of the process, or of a function in a process. The structure can be symbolic or, according to the connectionist paradigm, subsymbolic. Indeed, the concept of structure designates a level in the analysis of phenomena and not a specific type of formal theory. What is investigated here are the properties of this level, and this requires the development of original descriptive and explanatory hypotheses. The key point in IT is the relation between this structure and the process from which it was exhibited.

The second step is the expression of this structure in a formal language. Traditionally this was a mathematical language, but in IT the problem is not only the formal language itself, but the language in which this structure, adequately formalized, can be programmed so that it can be reproduced and therefore the function itself simulated. The very stake of this second step is the decisive character of IT: once a structure is expressed (in the biotechnological sense) in a formal language, it can be programmed so that it becomes possible to interfere with it, to introduce variations in order to better satisfy the function or eventually to act upon the function itself. This potential action, within the structure, on the function raises fundamental questions. IT makes it possible to express structures by interfering with them, to simulate or develop new versions of any function, or new functions that have in common a structure or some elements of one. To be able to analyse the structure of a function in order to act upon it, and so to find within this very structure variations of the function or new functions, is what is at stake and has to be thought. Functions have, in fact, become virtual modalities of structures within a technology. In IT, structures are neutral regarding the functions from which they have been gathered. The consequences of this fact are innumerable and effectively bring humanity (the human community) into a new age of its evolution.
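The claim that a structure, once programmable, contains its functions as virtual modalities can be sketched minimally in Python. The text names no implementation; every identifier below is an illustrative assumption:

```python
# A structure expressed as programmable data: a pipeline of
# elementary operations, each one a phase of the analysed process.
OPERATIONS = {
    "strip": str.strip,
    "lower": str.lower,
    "upper": str.upper,
}

def actualize(structure, text):
    """Run a programmed structure on an input; the function performed
    is one virtual modality of the structure."""
    for op in structure:
        text = OPERATIONS[op](text)
    return text

# The initial function: normalizing a string to lowercase.
print(actualize(["strip", "lower"], "  Hello "))  # prints "hello"

# Interfering with the structure yields a variant function that was
# already virtually present in it:
print(actualize(["strip", "upper"], "  Hello "))  # prints "HELLO"
```

The point is not the toy example but the separation it exhibits: once the structure is data rather than fixed code, variations of the function are obtained by acting on the structure itself.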

The third step is to select the medium capable of expressing the structure and its virtualities in order to fulfil the function. The medium is the carrier of the structure; it can, for instance, transmit it, introduce it into an artefact (any object, machine, etc.), and so on. It actualizes the structure in an artefact, in a given environment, and for a certain task. Strictly speaking, the medium does not carry or embody the structure itself but the structure as programmed to perform a function or a set of functions. The carrier is somehow the matter in the Aristotelian sense, programmed or programmable. The decisive point is that in IT the medium is neutral regarding the structure it expresses, just as the structure is neutral regarding the function. The same medium can carry different structures and, more importantly for our objective, the same structure can be expressed by different media.

To follow Descartes's example, the fourth step is to program the structure in a medium in order to perform the function, reproducing its various steps and their order. The fifth step is to test the program to make sure that every moment of the initial or intended process is adequately satisfied.
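Taken together, the five steps read like a recipe, and a minimal sketch can make them concrete. The sorting example and every name in it are illustrative assumptions, not drawn from the text:

```python
# Step 1: analyse a process (here, putting three items in order) into
# its structure: a fixed sequence of compare-exchange operations.
# Step 2: express that structure in a programmable notation -- as data.
structure = [(0, 1), (1, 2), (0, 1)]  # a 3-element sorting network

# Step 3: select a medium. The structure is neutral with respect to
# its carrier: two different media express it below.
# Step 4: program the structure in each medium to perform the function.
def run_on_list(structure, items):
    items = list(items)
    for i, j in structure:
        if items[i] > items[j]:
            items[i], items[j] = items[j], items[i]
    return items

def run_on_string(structure, text):
    # The same structure carried by a different medium (characters).
    return "".join(run_on_list(structure, text))

# Step 5: test that every moment of the intended process is satisfied.
assert run_on_list(structure, [3, 1, 2]) == [1, 2, 3]
assert run_on_string(structure, "cab") == "abc"
```

The same structure, carried by two media, actualizes the same function; acting on the structure itself (reversing a comparison, say) would actualize a variant function, which is the stake of the second step.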

This is the effective situation of thought today, and many points can now be clarified. The first concerns some aspects of what virtual reality means. IT brings in a radically new conception of structure. Since the Greeks, structure has been conceived as an autonomous and formal level of determination in reality, expressed and treated by mathematics. Now structure is not only the form of an object, of an entity, or of a process; it has become the intelligence of a process. This technology manipulates the structure it analyses and installs in it the results of these manipulations. So in IT, a structure includes its virtualities, and the analysis of a process generates the virtualities of this process. The initial or actual process is to be conceived as the existing actualization of a set of virtualities internal to the structure and constituting it. This is made possible because the structure is programmable in a medium (or carrier) which overdetermines the object, is superimposed on it, so as to reconstruct it and make an artefact out of it.

The management of structures has become effective within their objects, entities, or processes. It opens a radical transformation of our conceptions of any being. From now on, any being includes in itself its other modalities as part of its own being. Heidegger explained that things had become objects for subjects who perceived them and reduced them to what they appeared to be. Now the objects are becoming artefacts: what the subject perceives is only one modality of an artefact whose structure includes other modalities that exist only through IT. The individuality of an artefact comprehends virtualities which can be actualized by a technology. So virtual reality is not another reality; it is reality. Reality has become virtual. This does not mean that what is virtual is not real, or that in post-modernity reality vanishes into the realm of artefacts. It is another experience of reality: the actual or existing reality contains its virtuals, other types of actualization. The object and the subject are overlapping.

Has the substance of the subject therefore become its structure, which includes its potentials? Yes, if this means that the subject is no longer closed within itself, a master of its own being. But since Heidegger, philosophy has exhausted this interpretation. The answer is rather to be found in the negative: according to a model of rationality derived from IT, the structure cannot be reduced to the form or the dunamis in Aristotle, or to a program in genetics. The reason is, according to the form of the given, that the analysis of structures in IT has as its purpose the knowledge of functions or processes. IT transforms the conception of knowledge into a virtual action, inside the process, on the functions it satisfies: the knowledge of the process is a virtual action on the function. So the clear objective of this type of knowledge is not to study pre-programmed potentials already inscribed in a code or in the substance of a subject in order to make or let them happen. It is not a return to or a reconstruction of an Aristotelian paradigm. On the contrary, the stake seems to be the opening of the structure, the introduction into it, through a given technology, of virtualities that have to be interpreted and decided upon according to the functions they are supposed to accomplish. In short, IT is not a study of what is already there but of what can happen within what there is. Here one reaches the most controversial point of this paper, and it needs to be justified or falsified: the function sets the limit of the technology. IT seems to be a technology that constructs its limit within itself.

The second point to be clarified is central to mind techno-science and concerns the relation between mind, brain, computer science, physics, neurology, etc. My remarks will be strictly philosophical and do not pretend to have any practical epistemological relevance; they simply follow from the argumentation being built up. My assumption is that mind techno-science is presently overdetermined by the model of rationality at the core of IT. This explains why mind is conceived as cognition, and why cognition is in its turn reduced to various cognitive behaviours or processes like problem solving, belief, attention, perception, etc. In fact, what falls under cognition is an analysis of different cognitive structures. This examination can only be achieved in IT, at a symbolic or subsymbolic level, by their modelization in the field of computer science. Therefore the problem is not whether mind is or is not a computer, nor what sort of computer a mind is. Certainly mind is not a computer, but computer science is at present the analysis of the structure of cognitive processes. To understand this fact and not fall into the trap of endless controversies, one has to remember that mind techno-science cannot be thought of as the present and future substitute for philosophy or for the humanities. The whole (false) problem simply mixes the level of the structure with the level of the medium.

It was just argued that the level of the structure is neutral regarding the level of the medium, that a structure can be expressed by different carriers. From the point of view of IT, the brain is a carrier of cognitive structures, and in this respect it is similar to any physical system, for instance a machine, a computer, or anything else that could perform the function described by the structure. A medium can be physical, neurological, etc., and this does not matter at all. The questions of the relations between minds and machines, brains and computers, are often wrongly formulated because they ignore the level of the structure. So the relations between the different fields of research in mind techno-science can be clarified if one acknowledges that this inter-discipline is organized by a model of rationality having its source in IT. This is why I said at the beginning that philosophy does not have much to say, but that it is necessary to reduce some false problems and let an epistemology of mind techno-science develop. Certainly philosophy has a lot to learn from its development, but at present its main task is to learn how to stop asking the wrong questions. I hope I have not made the situation worse.

Is it necessary to examine some of its limits and consequences? Is there something alarming in these new virtualities offered to the power of humanity or inhumanity? Yes, if one thinks the IT paradigm according to biological and genetic research, in reference to the integrity of life or of the living being. In this case, epistemology is badly needed to explain the differences and the limits of such a paradigm according to the different fields where it is introduced and interferes. An epistemology proves its relevance when it is anchored in the very evolution of a field of knowledge, articulated to the internal and external questioning of scientists at work. Instead of deploring the end of the humanities or surreptitiously reconstructing them, it would be more relevant to study why epistemology is incapable of providing the knowledge of the sciences that our societies so badly need in order to understand themselves, their past as well as what they are becoming. So to mingle the model of rationality provided by IT with the specific problems of molecular biology is false, just as Descartes was wrong to assert that animals or bodies were machines.

Indeed this problem forces us to return to the question of the order of knowledge in which Intelligence Technology is developing. Mind techno-science is not the substitute for the humanities, and IT is not a technology taking the place of reason! At this point philosophy is radically involved. This can be introduced by further developing the end of the difference between subject and object, which was one of the main features of the modern symbolic order. Such a difference does not concern artefacts. Artefacts are no longer objects; they require being known from the inside, by distinguishing their structure and its virtualities, the medium expressing it and, most of all, the functions they satisfy. Objects have become artefacts. The subject is within the artefact, at the connection between the function and the structure. The artefact as it is used in everyday practice by an individual is designed. Certainly the design of an artefact is what appears to a subject, but it is conceived strictly according to the function, and it expresses neither the structure nor even the carrier. The design is neutral regarding the medium and the structure: the matter (which is not the medium!) of an artefact is selected according to the function.

The modern industrial conception of the object, "Form follows function," takes on a completely different meaning, because form is no longer the structure. Form simply concerns the design. Artefacts are designed not for a substantive subject, knowing who he is or what he wants, but for a subject who explores his virtualities in the discovery and practice of artefacts. Individuals are no longer in front of objects but in the middle of artefacts with which they interact, which they use as parts of what they are. So what they are is the uses, dispositions, and practices they develop, exchange, adapt, and invent: artefacts are the virtualities of individuals, and individuals develop virtual artefacts. The object has lost the substance that was provided for it by the subject who stood in front of it. Now objects are functions for virtual individuals. A world of artefacts is an age in which functions, uses, and practices are what matter, not substance and identity.

IT and its key concepts (structure, medium, design, function) are some of the main nodes in the present order of knowledge. But the striking feature is the primacy of function. The technology which is reducing the object to an artefact by managing its structure finds within itself its own limit: function is the beginning and the end. Function is no longer dictated by the production, the form by the matter, the structure by the form, because the manipulation of structures includes in them virtualities which are in the end decided by social practices. The relation between technology and society is radically transformed. I do not fall into a post-modern utopia of uses and customs rising and overtaking technology by the people for the people, of a humanity free from the power of technology. I just explain that the future of IT lies not within IT but outside of it, in the social and cultural practices. The core feature of IT is that what is outside of it finds itself introduced inside of it: its internal finality is what is external to it. To reach that point, structures had to become flexible, transformable, manageable. They had to include virtualities. In the end virtualities exist only according to the capacity of individuals to make them happen by actualizing some of them. IT supposes a world of events, chance, opportunities and, of course, accidents.

Urgently, structures have to be thought differently. Apparently, economists have been explaining this for the last twenty years: human capital is the main resource of high-technology societies. But they have a restricted view of this capital when it is reduced to techno-scientific skills, to the different competencies required by an industrial system based on information technology. Information is not intelligence. The virtuality of IT is that structures no longer govern but are governed by the functions they have the potential to fulfil. Once again, function is the beginning and the end of IT. So the development of IT in societies, throughout their different sectors, is closely determined by the capacity of individuals to develop and experience new and different behaviours and attitudes. These individual and collective innovations diversify social functions, desires, needs, and demands. IT is the capacity to analyze them. The consequences are innumerable: in the end, these functions are the basis of what is produced and sold. But in North America, Europe, and Japan, we see today a strong process of concentration in information industries. Of course this trend might be necessary to meet the level of investment required to implement information technology globally. But the objective and/or result of this very concentration, which makes the headlines, is the control of demand through the strong structuring of supply. To me, this seems to contradict the potentials of IT and conflict with the expected social and economic consequences of information technology. A bad philosophy and a poor epistemology might have serious consequences today.

Perceiving and observing by a sentient being (and by many non-sentient mechanisms) produce output having some relationship to the state of the world outside the observer. The characteristics of the output of the process serve as input to memory structures that store beliefs. A belief is an idea, or statement, that has one or more characteristics' values that match the values for representandums. Belief may thus be understood as a representation that is not necessarily fully justified and is not necessarily completely true, but must be true in part. A belief is an idea that is held based on some support. Thus, Swinburne has suggested that if a person believes proposition p, then p must be more probable for that person than its negation. Unfortunately, this implies that for any proposition, one holds a belief either in the proposition or in its negation. The holding of incorrect or weak beliefs becomes problematic, as does the imposition of a logical formulation on this sort of problem.

A statement of belief contains one or more characteristics' values matching in full or in part the values for representandums. The output of a process or set of processes provides a representation of the input to the processes. We can therefore describe a percept as the set of values in this output; it is essentially the information in the output of a process about the input. A perceiving function, f(), provides a percept, f(x), about input x. Belief is thus transmitted through the hierarchy.
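This process-output notion of a percept can be sketched in code. The function name and the additive-distortion model below are our own toy illustration, not a formalism from the source; the sketch only shows how a percept can match its representandum in full or in part.

```python
# Illustrative sketch only: a percept as the output of a perceiving
# process, carrying information about its input. The distortion term
# models a representation that is true only in part.

def perceive(x, distortion=0):
    """A perceiving function f(): returns a percept f(x) about input x.

    With zero distortion the process is accurate and the percept's value
    matches the representandum; otherwise the resulting belief matches
    it only in part."""
    return x + distortion

# An accurate process yields a belief matching the input in full ...
assert perceive(4) == 4
# ... while a distorted process yields one matching it only in part.
assert perceive(4, distortion=1) != 4
```

On this toy reading, a belief "transmitted through the hierarchy" is simply the output of one such function fed as input to the next.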

Knowledge has been frequently described as "justified true belief," a belief held by an individual that is both true and for which they have some justification. Thus, for a belief to be knowledge, it must be the case that the belief is, in fact, true, and the believer must have justification for the belief. A belief that is true but for which we have no evidence cannot be described as knowledge. If there are homunculi inside computers performing operations, those who have long believed in their presence cannot be said to have had knowledge of this, since their belief, while true, has never been justified (we assume).

It had become common to describe knowledge as "justified true belief" when Gettier wrote a brief article that raised a problem with this definition. As a result of Gettier's work, we can be certain that "knowledge is not, or is not merely, justified true belief". There have been several responses to Gettier's argument against accepting knowledge as being only justified true belief. One possible approach is to add, to the requirements of justification, truth, and belief, a condition requiring that the grounds for believing a proposition do not include any false beliefs. However, this addition and several other modifications that have been proposed fail to avoid counterexamples in which "knowledge is lacking despite the believer's not inferring his belief from any false beliefs". Other approaches to understanding knowledge have been proposed and supported, such as having a disposition to behave or a disposition to feel a certain way.

We accept here that knowledge is something like "justified true belief." A belief is an internally accepted statement: the result of an observation, or an inferential or deductive product combining observed facts about the world with reasoning processes. To understand knowledge in a way consistent with a hierarchical notion of information, it becomes necessary to understand the notions of "truth" and "justification" in a manner consistent with the hierarchical context.

A statement may be understood as "true" if it exactly represents what it is describing. This is referred to as the "correspondence" theory of truth. It applies not only to statements but also to representations and beliefs. The coherence theory of truth, on the other hand, suggests that truth is essentially derived from a system: a statement is true when it is consistent with a system of accepted statements. Truth may also be viewed as a representation that is learned and that will not be altered, even given additional experiences. William James thus defined truth as the vanishing point toward which we imagine that all our temporary truths will some day converge.

The justification of a belief is based on internal considerations concerning the qualities of the function producing the belief. A belief is "justified" if and only if the input to the function is accurately represented in the output. Consider a handheld calculator which accepts the keystrokes "2", "+", "2", "=", and then displays the digit "4". We note that the digit displayed is not of the same form as the input, e.g., a keystroke. Instead, an accurate function takes keystrokes and produces a displayed number. If the calculator is broken and produces the digit "3" given the above set of keystrokes, we clearly don't have knowledge that 2 + 2 = 4. Consider a different case where the calculator is broken but the above set of keystrokes produces, through erroneous subprocesses, the digit "4" in the display. While the output is correct or "true" and may be interpreted as a belief, it is not justified: the function is not accurate, in that it does not operate as the user intends or understands the calculator to operate.
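The calculator case can be made concrete with a toy sketch, assuming a simplified keystroke encoding of our own invention (the function names are illustrative, not the author's). Both functions below display "4" for the keystrokes "2 + 2", so both outputs are "true"; but only one output is produced by a process that accurately tracks its input, and so only it is justified.

```python
def accurate_add(keystrokes):
    # An accurate function: the displayed digit is computed from the
    # keystrokes, so the input is accurately represented in the output.
    a, _, b = keystrokes  # e.g. ("2", "+", "2")
    return str(int(a) + int(b))

def broken_add(keystrokes):
    # A broken function that happens to display "4" through an erroneous
    # subprocess: it ignores the keystrokes entirely.
    return "4"

keys = ("2", "+", "2")
# Both displayed beliefs are "true" for this input ...
assert accurate_add(keys) == "4" and broken_add(keys) == "4"
# ... but only the accurate function is justified: its output still
# tracks the input when the input changes.
assert accurate_add(("3", "+", "2")) == "5"
assert broken_add(("3", "+", "2")) != "5"
```

The sketch is just the source's point restated operationally: justification is a property of the producing function, not of the single output it happens to display.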

Other models of knowledge have been proposed, such as the notion that knowledge is one's "image," what one subjectively believes to be true. This is close to what we have referred to as a belief, and choosing to call it "knowledge" appears only to confuse the issue. Yet, like the more conventional philosophical idea of knowledge, it can be understood as the values in the output of a process, or rather, of the hierarchical series of processes that range from low-level atomic processes up to sophisticated intellectual processes.

Perception and observation can be understood as conveying information about the input to certain processes (for humans, sensory processes such as seeing, hearing, smelling, etc.). The output of such a process may be understood as a belief. Such a belief may constitute knowledge about the input when the process or set of processes producing the belief operates in a manner consistent with the understanding of the process. These definitions of knowledge and belief are broader than the common-language notions of the terms and, in the case of belief, less human-centered, making the concepts more objective and more easily studied. We note that knowledge is information that is both true and justified. These perceptual, observational, and processing functions take as input sensory data from the real world, as well as personal beliefs and cultural biases, when producing information-bearing output. This conceptual framework for understanding information provides a mechanism for understanding both the cultural influence on information and the most minute phenomena studied by physicists.

The pages that follow have two goals: (1) to extend explanations of the evolution of language, I-consciousness, and our impression of having free will in the light of what is now called the "social intelligence hypothesis": the evolution of language was forced by natural selection mainly because of its advantage as a tool and weapon for and within the social struggle of our ancestors; and (2) to show how biological and linguistic insights may contribute to the understanding of one of the most puzzling philosophical issues – and indeed of our conception of ourselves as human beings – i.e. (the possibility of) our experience of ourselves as autonomous agents.
The philosophical problems of I-consciousness and free will cannot be solved, as that would require the reconciliation of apparently inconsistent premises; but they may be dissolved by eliminating one of the premises, namely the claim that there are irreducible entities like free-floating selves or Cartesian egos with the ability to act due to their own non-physical power. Nevertheless, our misleading conception of being such selves with free will has to be explained. And evolutionary biology and linguistics seem able to do this: the ego-illusion of systems which permanently confuse themselves with their own self-model, and the (in some sense inadequate) belief of having free will, are sophisticated tools with great evolutionary advantages – they are the most subtle form of deception that was rewarded by natural selection, namely a systematic and stable deception of ourselves.

Obviously, organisms need not be very mindful to live and reproduce. But some are. Why? Considering social factors is the most promising approach to an answer (Byrne & Whiten 1988, Whiten & Byrne 1997). A main starting point was the observation that primates appear to have more intelligence than is required for their everyday wants of feeding and ranging. Since evolution is unlikely to select for superfluous capacities, Nicholas Humphrey (1976) conjectured that something had been forgotten, namely the social complexity inherent in many primate groups, and suggested that the social environment might have been a significant selective pressure for primate intelligence. Since better access to food, a safer place to sleep, or a higher rank in the complex hierarchies of primate societies normally increases the probability of producing more offspring than other group members, social intelligence pays off pretty well. Natural selection therefore favours it (or its inherited requirements). And since this selective pressure applies to all group members, an evolutionary arms race is set up, leading to a further increase of intelligence. This development probably corresponds to the rapid expansion of our ancestors’ neocortex – especially the frontal parts, which are most important for working memory and planning (Goldman-Rakic 1992) and probably consciousness (LeDoux 1996). This cortical enlargement – about a factor of three to four during the last five million years – is otherwise hard to explain. And it is biologically expensive, because the brain consumes about 20 percent of the body's energy when the body is idle but accounts for only two percent of its mass. Furthermore, there is evidence for a correlation between neocortical size and group size or social complexity (Barton & Dunbar 1997).

Thus, social interactions might have been the most important driving force for the evolution of primate intelligence. The elaborated mental abilities of higher primates are conceived as the product of a cognitive arms race leading to more and more sophisticated representational capabilities (representation of complex social relationships, higher-order intentional stance, theory of mind, mind reading). This climate of competition and conflict favours the use of social manipulation to achieve individual benefits at the expense of other group members. Observing social relationships carefully, struggling for influence, making alliances, or deceiving more powerful leaders became more and more important. Particularly useful for this are manipulations in which the losers are unaware of their loss (as in some kinds of deception), or in which there are compensatory gains (as in some kinds of co-operation). Therefore, egoistic intentions remain hidden. A lot of zoo and field experiments as well as behavioural studies in the wild have already confirmed (and reinforced) these hypotheses. It was shown, for example, that apes – and to a lesser degree perhaps also monkeys – may be able to respond differently according to the beliefs and desires of other individuals (rather than according only to the other’s overt behaviour). Hence, they possess a theory of mind (Premack & Woodruff 1978) and can assume what Daniel Dennett (1988) has called the intentional stance: They ascribe intentions to others and take them into consideration for their own actions.

Language is, among other things, a very useful tool and medium for explicit representations and metarepresentations including an intentional stance, self-attributions, I-consciousness, higher-order volitions, autonomous agency etc. These are not an epistemic luxury but have a function, i.e. a causal role. They allow a more precise representation of the external and internal states and their rational and emotional evaluation. They allow a broader range of reactions in complex situations, especially in social contexts. The concept of self reifies the organizing activity of an organism that incorporates its experience into its future actions. These capabilities are – at least at the higher-order level of human beings – based on and boosted by language, and this is probably the main reason for the development of larger brains and linguistic capabilities (cf. Goody 1997). Thus, it is reasonable to assume that these cognitive capabilities are an important factor for the origin and evolution of language and cannot be excluded by any elaborated theory trying to explain this still rather mysterious issue (cf. e.g. Aitchison 1996, Jablonski & Aiello 1998, Noble & Davidson 1996): Language was incorporated in cognitive representations of one's own and others’ intentions and offered more abstract and efficient ways to use these representations; language permits more effective classification, storage and distribution of information, and thus more efficient use of memory and communication; language is an important means to envisage the future; and language-in-use is a new and very effective sort of tool for co-operation between individuals, because it makes information explicit and easily communicable even in the absence of visual contact. Language also paved the way for even more sophisticated deceptions (i.e. lies) and for influencing others to act in accordance with one’s own goals.
Language is based on symbolic and abstract thought, but conversely it also enhanced their further development. Finally, language led to more and more sophisticated models of the world and of ourselves.

Self-consciousness is a rather shaky term with many different meanings which often depend on each other, e.g. notions like self-awareness, self-knowledge, self-recognition, sense of ownership etc. (cf. Frank 1994, Bermúdez, Marcel & Eilan 1995). Self-consciousness is not a single ability or property but a complex entanglement of different features creating a special kind of knowledge. As a premise, it is assumed here that self-consciousness does not come ready-made into existence, but bootstraps itself with the help of other minds in a complex interplay of the infant with the social and physical environment, starting from inborn dispositions. It depends on perspectivity due to centered information acquisition, and on bodily awareness due to proprioception and feedback from the results of one’s own actions (including the experience of resistance). These are crucial ingredients for a higher-order form of self-consciousness, i.e. I-consciousness. It is conceptualizable and verbalizable. It is based on a feature which is called a self-model. This is an episodically active representational entity (e.g. a complex activation pattern in a human brain), the contents of which are properties of the system itself. It is embedded and constantly updated in a global model of the world, created also by the brain, based on perceptions, memories, innate information etc. (Metzinger 1993). Self-models are limited in a crucial way. They cannot represent their own representations as their own representations, and so on ad infinitum. But there is (or at least was) also no need for that. From an evolutionary perspective, it would have been quite disadvantageous for our ancestors to forget their physical and social environments and plunge into a self-amplifying spiral of self-reflection. Hence, there is a – probably hard-wired – self-referential opacity: The phenomenal mental models employed by our brains are semantically transparent, i.e.
they do not contain the information that they are models on the level of their content (Van Gulick 1988). Possibly these phenomenal mental models are activated in such a fast and reliable way that the brain itself is not able to recognize them as such anymore because of a lower temporal resolution of metarepresentational processes due to limited temporal and physical resources. If so, the system "looks through" its own representational structures as if it were in direct and immediate contact with their contents, creating a special sort of self-intimacy. This leads us to a rather dramatic – and possibly offending – hypothesis: We are systems which are not able to recognize their self-model as a self-model. For this reason we are permanently operating under the conditions of a "naive-realistic self-misunderstanding". We experience ourselves as being in direct and immediate epistemic contact with ourselves. Hence, we are systems which permanently confuse themselves with their own self-model (Metzinger 1996). In doing this, we generate an ego-illusion, which is stable, coherent, and cannot be transcended on the level of conscious experience itself.

Another controversial issue is the problem of free will (Honderich 1988, O’Connor 1995, Walter 1998). To define free will in the strongest sense, Libertarians often presume three necessary conditions which, taken together, are sufficient: intelligibility, freedom, and origination. Intelligibility means that a person’s free choices are based on intelligible reasons. Freedom means that this person can make different choices under completely identical conditions, i.e. that this person could act otherwise even if all natural laws and boundary conditions (including his or her own physical states) are the same. Origination means that the person is able to create his or her choices and acts according to these choices in a nonphysical way. But this presupposes an ontology (e.g. a kind of dualism or idealism) which goes beyond and is at least partly independent of the physical world. However, even such an ontology won’t offer what Libertarians want, for it cannot avoid the dilemma of plunging into an infinite regress or abruptly stepping on the brakes at a mysterious causa sui. This is because in order for me to be truly or ultimately responsible for how I am, so that I am truly responsible for what I want and do (at least in certain respects), something impossible has to be true: There has to be a starting point in the series of acts that made me have a certain nature – a beginning that constitutes an act of ultimate self-origination. But there is no such starting point. Therefore, even if I can act as I please, I can’t please as I please. That is not to say that there are no higher-order volitions, for instance wanting to want not to stay that lazy anymore. But ultimately my reasons, beliefs and volitions are non- (or sub-)consciously determined – by earlier experiences, heredity, physiology or external influences – and therefore not ultimately up to me.
Thus, in order to be ultimately autonomous and responsible, one would have to be the ultimate cause of oneself, or at least of some crucial part of oneself (Strawson 1986). But this would strangely promote man to something like an Aristotelian God, a prime mover. (This is no polemic exaggeration but what Libertarians have actually conceded, see e.g. Chisholm 1964, Kane 1989.)

However, there is no hint of the existence of humans as prime movers and nonphysical forces interacting with our physical world through causal loopholes. Nevertheless we do conceive ourselves, at least sometimes, as being free. We have the feeling that it is up to us to decide between alternatives. This feeling depends on second-order emotions (without which, despite rationality, we cannot act and choose in complex situations), an intentional stance, a "healthy" (non-deprived) development, non-predictability or epistemic indeterminism (that is to say, we cannot know the future for certain, and especially not our own future), rationality (the ability to reflect and reason), planning (and hence higher-order thoughts, a concept of the future et cetera), higher-order volitions, and sanity. These features are compatible with a naturalistic world view (Vaas 1996 & 1999) and even with determinism. Therefore a weaker form of free will need not be denied. But this does not imply the existence of the kind of freedom and origination for which Libertarianism is arguing. The Libertarian will still insist that our subjective impression of freedom is a powerful argument for free will. Thus, a sceptic should be able to explain such an impression within a naturalistic framework. And this is what an evolutionary perspective might achieve: Ascribing intentional states to others necessarily includes ascribing volitions to them and assuming that they have the power to transfer their volitions into actions somehow, because this is the only way to get advantages from the intentional stance at all. For, if other beings were thought to have intentions but these were causally inert – that is to say, their behaviour had nothing to do with their volitions – the ascription of intentions and hence volitions simply wouldn’t matter. However, the intentional stance is not an irrelevant luxury.
It is a powerful tool to get along with the complexity of the social world and even an anthropomorphically-conceived nonsocial world (up to highly restricted activities – e.g. in playing computer chess nowadays it is common and helpful to think and act as if the computer "wants" and "plans" something). Individuals endowed with this tool are better prepared for the struggle of social life. And it is advantageous to assume the volitions of others as somehow being independent of the environment or the past. Not absolutely independent of course, but in an approximate sense – because this makes it a lot easier to deal with them, due to the fact that complex organisms can act (or react) quite differently in similar circumstances and quite similarly in very different circumstances. There is another reason to take a concept of volition as evolutionarily advantageous, and this is just the other side of the coin: To deal with other individuals in a complex way means also to plan one’s own actions carefully and evaluate their effects. This presupposes some kind of awareness of one’s own volition, hence a concept of will and self. Higher-order representations also take one’s own mental states into account – not only for decisions and follow-up analyses but also as a parameter in the plans of others regarding oneself. Thus, it is reasonable or even necessary to ascribe volitions to oneself, too – because otherwise one cannot reason about the mental states of others who are presumably dealing with oneself. This makes one’s own volitions explicit – and much more flexible. For instance, an individual may think: "She believes that I want to do this, and she will react to this in a certain way to get an advantage over me – and therefore I will act otherwise and not do this but that."
At least since the point from which there has been language with an inbuilt grammatical structure distinguishing between subjects and objects, active and passive, present and future – but probably much earlier –, such concepts of volition, actions and self-notions have been flourishing. This was not only the case in contexts of cheating, however! In the course of time co-operation became more and more important among our early ancestors. And the existence of some form of language already implies a high degree of co-operation (Calvin & Bickerton 2000) – spoken language would never have emerged unless most people, most of the time, followed conventional usage. But co-operation in complex, not inherited forms also presupposes an intentional stance and the capacity to ascribe volitions to others.

Finally, evolution shaped our minds and brains to cope with our complex social lives. We are forced by our very nature to interact with other people in a fundamentally different way than we interact with, say, stones and sticks (Strawson 1962). From this it is no longer a big step to a notion of free will, which is a powerful tool to act in consonance with or opposition to others and to establish some kind of moral responsibility – a very effective way to influence the behaviour of others and justify punishments. Thus, free will even succeeded in becoming an entity of religious, philosophical or political theories and a postulate for jurisdiction. Of course we need not dismiss an intentional and personal stance. It is, obviously, crucial for our survival. We cannot leave our subjective standpoints, turning exclusively to an objective, perspectiveless view. We may accept that we have, ultimately, no free choice. Nevertheless, in our everyday life we think and act as if we did. Even sceptical philosophers do – or they might find themselves out of the race quickly. Nature is stronger than insight, and "the human brain is, in large part, a machine for winning arguments, a machine for convincing others that its owner is in the right – and thus a machine for convincing its owner of the same thing. The brain is like a good lawyer: given any set of interests to defend, it sets about convincing the world of their moral and logical worth, regardless of whether they in fact have any of either. Like a lawyer, the human brain wants victory, not truth; and, like a lawyer, it is sometimes more admirable for skill than for virtue" (Wright 1994).

As Labov (1977) noted, "one of the most human things that human beings do is talk to one another. We can refer to this activity as conversation, discourse, or spoken interaction." As "one of the most human things" which we do, it stands to reason that meaning is often assumed to be shared during verbal interaction. However, we know that words are laden with symbolic meaning in addition to being tools for the simple sharing of information or experience. A critical point is that each of us differs in terms of our information and experience, and despite the ideal of having a "standard language" – even among people speaking the same dialect of the same language, or being truly "bilingual" – the fact is that each of us on this planet adds our own nuance to words, or phrases, or intonation, or some combination thereof.

Sociologists, social psychologists, and others study the effects of such ubiquitous experiences as exposure to the language of television, of political campaigns, and of newspaper headlines. Advertisers know that many people suspend their "truth filters" for 30-second segments at a time. Psychoanalysts routinely explore "distortion" of communication, in expressing and receiving facts, fantasies, and associated experiences which carry a mutually-understood "meaning". Those reading this paper online will surely recognize that one may well get quizzical responses to comments made about one's "mouse" or being involved in a "fatal crash".

Discourse is dependent on both the context of the conversation, in "real time", and the overlearned vocabularies which are acquired over the course of social, professional, and vocational training. In other words, we communicate to some extent using a vocabulary contained in the scripts of our daily lives and daily experiences.

Freud, nearing the end of his life, and holding his first and only seminar in America, was asked for the secret of happiness, and (in German, paraphrased here), answered "Work and Love". The drives. But while Freud was best known for his interpretation of the "love" portion of that formula, the "work" portion of life is perhaps more amenable to systematic study and is also quite interesting to examine.

One's work experiences are where a great deal of our vocabulary and communication skills come from, and sometimes even our relational styles. Our workday shapes our thoughts and sets our neurons ablaze even as we are dreaming or trying to convey to a loved one the trials and tribulations of our work day. Knowledge of one's use of language as a tool is knowledge of a great deal more.

In fact, we each may be speaking a different dialect of the same language, as doctors and lawyers and beauticians and homemakers and teachers and software engineers all take for granted that we are processing words and meaning in the same way. Consider, however, how we may hear "computerese" spoken in a corporate lunchroom, the latest news from Paris haute couture spoken while strolling through Bloomingdales, and self-referential, psychoanalytically-derived reverie from the student of clinical psychology. Are they speaking the same language?

How does one's vocabulary and learned way of associating words to meaning affect the way one thinks and communicates across a range of situations? How will the course of psychotherapy, which is heavily dependent on verbal representation and interaction, be affected by one's linguistic disposition and the world view this may represent (or reinforce)? These sorts of broad questions will be the focus of the present paper. It may be anticipated that many more questions will be raised than answered, but this is not necessarily a bad thing.

To what extent can we say that a speaker knows rule R of his or her language rather than rule R', given that both rules produce the same grammatical outcome? If rule R' provides a more general, technically precise formulation of the same conditions formulated by rule R, do we ascribe knowledge of R' to the speaker - even if the speaker admits only to knowing R? Assuming that a third-person report drawing on the best available theory should take precedence over the speaker's own first-person report, Chomsky claims we can and should ascribe knowledge of the more precise rule to the speaker. I argue that while the third-person reports offered by observers drawing on the best available theories provide standards by which a given behaviour may be evaluated, corresponding first-person accounts must be taken into consideration as criteria of assertibility constraining what we may conclude about the person's actual knowledge.

Given the following two choices: (A) I often read the newspaper on Sunday. (B) I read often the newspaper on Sunday. Which is a native English speaker -- call him or her S -- most likely to produce? It should be fairly obvious that he or she would likely produce the grammatically correct sentence A. What may not be so obvious is his or her reason for choosing A over B.

Chomsky explains the choice by citing the speaker's knowledge of the appropriate rule. In rejecting the grammatically incorrect sentence B, Chomsky claims, speaker S shows that he or she "knows that verbs cannot be separated from their objects by adverbs". Call this "rule R." But because he holds that the prohibition of such adverbial intervention is a consequence of the more general rule of strict adjacency, Chomsky goes further and claims that what S really knows is that "the value for the case assignment parameter in E is strict adjacency" (emphasis in the original). Call this "rule R'." Both rule R and rule R' describe S's behaviour. But are we justified in claiming that S in fact knows rule R'?

It would be helpful first of all to clarify what Chomsky means by knowing a rule. Extrapolating from behavioural evidence, Chomsky claims (with some more or less weak provisos) that if a speaker's utterances conform to the conditions specified by a language rule, then that speaker knows the rule. In short, a speaker who observes a rule can be said to know that rule. In addition, Chomsky claims that knowing a rule of language is an instance of knowing-that, and therefore involves propositional knowledge. Thus according to Chomsky, if S acts in accordance with rule R of his or her language, then S knows rule R and therefore knows that R.

Chomsky's claim that knowledge of language is knowing-that has an important corollary: the person who knows rule R not only knows that R, but believes that R. Ascription of knowledge of language to a person therefore entails a corresponding ascription of belief to that person. When we state that someone knows a language rule, we are in effect making a statement about his or her attitude (belief) toward the propositional content embodying the language rule.

It seems to me that in cases in which knowledge of language is ascribed, we are justified in recasting talk about knowledge into talk about beliefs. That is because what interests us is not whether or not a given rule is true, i.e., whether or not it accurately describes the appropriate language behaviour, but rather, whether or not it is considered true of his or her language by the speaker. Our ascription of knowledge of the given language rule thus involves a statement about S, specifically, about how things are with him or her as demonstrated by his or her attitude toward the relevant proposition(s). For that reason, framing the question in terms of S's beliefs is perfectly legitimate, and shows exactly what is at stake when we ascribe knowledge of language to a person.

On the basis of the foregoing, I would suggest that for any language rule R, knowing that R means the following: We can say that S knows R if S believes the propositions comprising R. If R can be stated as p, then S knows R if S believes that p. Further, for S to believe that p is for S to be disposed normally to feel/hold/agree that p.
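The chain of definitions just proposed can be rendered schematically as follows. The notation is mine, not the author's: K(S, R) abbreviates "S knows rule R", B(S, p) abbreviates "S believes that p", and D(S, p) abbreviates "S is disposed, under normal circumstances, to hold/feel/agree that p".

```latex
% Schematic rendering of the proposed definition of knowing a language rule.
% K(S,R): S knows rule R;  B(S,p): S believes that p;
% D(S,p): S is normally disposed to hold/feel/agree that p.
\begin{align*}
K(S,R) &\iff B(S,p), \quad \text{where } p \text{ expresses the content of } R\\
B(S,p) &\iff D(S,p)\\
\therefore\quad K(S,R) &\iff D(S,p)
\end{align*}
```

On this rendering, an ascription of knowledge of R to S ultimately stands or falls with S's disposition to assent to the proposition expressing R, which is exactly the point at issue in what follows.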

Applying this to the example introduced at the beginning of this paper, we would say that S's knowing R means that S believes that verbs cannot be separated from their objects by adverbs.

By the same token, S's knowing R' means that S believes that the value for the case assignment parameter in E is strict adjacency.

Further, if we claim that S produces sentence A and not sentence B because S knows that R, we are in effect asserting that S's producing A comes about by virtue of S's cognitive/doxastic state having a certain content. Or: S believes that R, and because S believes that R, S utters A rather than B.

The general claim here is that language behaviour takes the form it does by virtue of the content of the cognitive/doxastic state that enters into/supports/underlies that behaviour. Conversely, the actual content of that cognitive/doxastic state would represent the speaker's knowledge of language and would explain why he or she produces the appropriate utterances.

Given the above definition of what it is to know, and the connection between the content of a cognitive/doxastic state and the role it plays in the explanation of behaviour, the question becomes: Which formulation of a rule describes the speaker's actual belief(s), and which formulation simply describes a set of conditions to which the speaker's behaviour (unknowingly) conforms?

Chomsky offers an answer that can be called the argument from the best theory. He states that "[w]e are entitled to propose that the rule R is a constituent element of Jones's language (I-language) if the best theory we can construct dealing with all relevant evidence assigns R as a constituent element of the language abstracted from Jones's attained state of knowledge".

Chomsky's argument is that if our best theory for explaining a speaker's behaviour includes attributing to him or her knowledge of a given rule, then we should conclude that this knowledge does in fact enter into the speaker's behaviour and that the speaker therefore does know the rule. Implicit in this argument is the provision that a third person attribution of knowledge of language has authority over a first person report, if the third-person attribution is made on the basis of the best possible theory available. From this point of view, the third-person claim that another person's language behaviour implicates a given cognitive/doxastic content is true simply by virtue of the claim's having been derived from the best available theory. Because it is derived from the best available theory, the third-person attribution must take precedence over any relevant first-person account.

But this does not tell us whether or not S knows rule R in the required sense of believing that R. It does not, in other words, tell us whether or not S holds the requisite attitude - belief - in relation to the propositions and constituent concepts embodying the rule he or she is said to know. This is what we need to ascertain, but how?

It is informative to note in this regard a point that Searle has raised in general objection to Chomsky's claim that speakers are (actually, in fact) following the rules he and other grammarians have formulated. According to Searle, for any attribution of rule-following, we need to show that the attributed rules are "rules that the agent is actually following, and not mere hypotheses or generalizations that correctly describe his behaviour." For Searle, the argument from the best theory does not suffice, since the descriptive or predictive accuracy of the attributed rule does not by itself prove that the rule is in fact being followed. We need, instead, "some independent reason for supposing that the rules are functioning causally".

There seem to be two points bundled into Searle's objection. The first, which Searle explicitly makes, is that behaviour which seems to be in accord with a rule must be shown to be guided by that rule in fact and not simply hypothetically. More generally, if we are to claim that a person is behaving in a certain way on account of his or her given cognitive/motivational content, we must show that this given cognitive/motivational content does in fact enter into the production of the behaviour in the specified manner.

The second point, which Searle doesn't make but which I find implicit in the call for obtaining an independent reason for attributing a rule, is that the person to whom such rule following is attributed should (somehow) understand him or herself to be following the rule. This would mean (among other things) that he or she should show evidence of believing that R, for the given attributed rule R. Such evidence could be found in the appropriate first-person avowal of belief or acceptance that R; such a first-person avowal would in fact constitute an independent reason for attributing actual, as opposed to hypothetical, rule-following to that person, if we read "independent" to mean something like "coming from a source other than the person doing the attributing." Like the previous point, this point can be generalized. Given cases in which it is claimed that a speaker knows a given rule of language, we would want independent corroboration of that claim.

A common sense attempt to corroborate a knowledge claim would have us solicit a first person report from the speaker him- or herself. We might, for instance, ask the speaker to describe what, if any, language rule he or she understands him- or herself to be following in producing a given utterance. With this evidence, we would be able to determine whether or not our hypothesized ascription of rule-following (and with it the corresponding ascription of belief) is accurate.

This first approach would require the speaker to be able to convey to us on his or her own why his or her language behaviour exhibits the regularity observed of it. But there are prima facie two problems here. First, it seems clear that not all speakers can formulate the rules their language behaviour seems to conform to, and second, not all speakers are aware of their reasons for producing utterances in the form that they do.

But neither of these considerations should be taken to mean that S necessarily cannot give us the kind of testimony we would want. In the first place, the inability to state or otherwise express a rule is not necessarily evidence that one does not know (or would not recognize) the rule any more than the inability to describe a concept is evidence that one does not know (would not recognize) the concept. And in the second place, one's not being aware of one's reasons for behaving in a given way is not necessarily evidence that one does not know why one behaved in the given way. Many actions do not ordinarily require a high degree of attentiveness to the conditions of their production in order to succeed. This is obviously true of performances based on, e.g., physical skills, which often require little or no attentive monitoring for their success. But it is also true of more formalized behaviours, language performances among them. For example, a speaker may concentrate attention on what he or she is saying and apparently not think about the syntactic conditions his or her utterance must meet. And yet afterward the speaker may acknowledge that he or she did indeed mean to conform to the appropriate syntactic rule. In any case, it seems reasonable to suppose that a speaker initially inattentive to the reasons behind his or her syntactic behaviour can at least in principle become aware of them and may impart that awareness to others. The question is how.

On the face of it, at least, introspection would appear to be the most obvious way for the speaker to gain such awareness, since what we are concerned with here are psychological facts which, one would think, could be discovered through one's focussing attention on one's own inner states. But Chomsky rejects introspection, claiming that it can tell the introspector neither that the given rule holds nor that the rule enters into the appropriate "mental computations" involved in language production.

But introspection does not exhaust all possible avenues for securing the kind of first-person evidence we would want to obtain. We could state the rule we think S is following, and ask S whether or not he or she would accept this as the correct description of the reason he or she produced the given utterance. We could likewise present S with different formulas presenting under different descriptions the same linguistic regularity observed of S, and ask him or her to choose which one correctly describes his or her understanding of why he or she conformed to that regularity. We might, to return to our previous example, show S rules R and R', and ask him or her which one, if any, describes his or her understanding of why sentence A is preferable to sentence B. No matter which specific approach would be taken, the crucial criterion would be that ascription to S of knowledge of a rule be contingent upon S's recognizing and agreeing to the propositions contained in the rule.

Thomas Nagel has in fact suggested something like this. As he puts it, "[s]o long as it would be possible with effort to bring the speaker to a genuine recognition of a grammatical rule as an expression of his understanding of the language, rather than to a mere belief, based on the observation of cases, that the rule in fact describes his competence, it is acceptable, I think, to ascribe knowledge of that rule to the speaker" (emphasis in the original).

An ascription of knowledge to a person should be contingent upon the acceptance by that person of the appropriate propositions and/or concepts as accurately articulating what he or she believes.

Generally, we can ascribe belief B to S if S, when B is brought to his or her attention, feels/holds/agrees that B. Without S's feeling/holding/agreeing that B, we could not confidently ascribe B to S. In addition, S's feeling/holding/agreeing that B can consist in the recognition that B or the acquisition of the attitude that B.

Thus for S to accept that rule R correctly reflects what he or she knows about the appropriate aspect of language, S must either recognize that R or acquire the belief that R. If S were to recognize that R, then S would simply be exercising an already-existing disposition to normally hold/feel/agree that R, given the appropriate circumstances. If S were to acquire the belief that R, then S would, on the basis of, e.g., evidence presented, become disposed to hold/feel/agree that R is the case, given the appropriate circumstances. In other words, when we recognize that R, we are exercising or expressing a belief we already have, though perhaps we never had the need or opportunity to do so before. When we are brought to accept that R, we are acquiring, and consequently expressing, the belief that R. Note that in either case, the acceptance condition involves a first-person avowal of belief.

Note also that it is not necessary that the speaker come to this avowal through reflection or "introspection" or otherwise on his or her own. If a rule is described to the speaker, and the speaker agrees that he or she believes (or is brought to believe) that the rule holds in the appropriate circumstance, then it is reasonable to attribute to him or her knowledge of that rule. But it is also true that if the speaker does not recognize or accept the rule as articulating something he or she believes or has come to believe, then the plausible attribution to him or her of knowledge of that rule would be difficult to maintain.

Would Chomsky agree to make the ascription of knowledge of language rules contingent on the acceptance condition? On the one hand, he seems to accept a scenario in which a speaker comes to know the rules of grammar "from the outside" - that is, by having them taught or otherwise brought to his or her attention by another party. On the other hand, his general position is that the first-person perspective is of little use. His view seems to be based not only on his own belief that much knowledge of language is tacit, but on the widely recognized observation that first-person accounts are inherently unreliable. If we examine these two points, however, we will find that, rather than invalidating the first-person perspective altogether, they serve only to qualify the claims that can be made for it.

If, as Chomsky claims, knowledge of language is largely tacit, then claims regarding a speaker's knowledge of a given rule may be a difficult matter to decide from the speaker's point of view. Given Chomsky's understanding of tacit knowledge as knowledge that is "generally inaccessible to consciousness" and therefore presumably opaque to the knower, it is easy to see how it would be difficult to make knowledge ascription contingent on the appropriate first-person avowal. But this difficulty may be more apparent than real.

First, tacit knowledge as Chomsky understands it would appear to differ very little from ordinary knowledge apart from its being tacit. Chomsky does not claim that a speaker's tacit knowledge of language is inferentially isolated from his or her other attitude states, and in fact he has stated that speakers' decisions to use their tacit knowledge are influenced by their "goals, beliefs, expectations, and so forth". Far from existing behind a kind of firewall separating it from ordinary beliefs and other attitude states, tacit knowledge of language would seem to be woven into the speaker's overall network of attitude states, and to exert some variety of influence on - as well as to be influenced by - those states.

Second, ordinary beliefs themselves may be largely tacit. As indicated above, beliefs are to some extent dispositional: our having consciously thought about or avowed a belief is a contingent rather than a necessary feature of beliefs. This means that, as with tacit knowledge, we may "have" beliefs without necessarily having consciously thought about them. Nevertheless, when a belief of ours is brought to our attention, we do, under ordinary circumstances, tend to recognize it as such. There is no reason this cannot hold for tacit knowledge as well. In fact, all that would be necessary for us to say that someone knew (believed) something, whether tacitly or not, is that when confronted with a statement or other formulation of the belief, that person should be disposed normally to feel/hold/agree that it is true.

It may be objected here that the acceptance condition is contingent on the belief's accessibility to consciousness, and that tacit knowledge is, by definition, inaccessible to consciousness and therefore exempt from the acceptance condition. Again, there is no reason to suppose that tacit knowledge cannot behave like ordinary dispositions to believe, and thus to be brought to awareness given the proper circumstances. Certainly, Chomsky's statement that one can come to know initially tacit rules "from the outside" would seem to indicate his acknowledgement that one could at least in principle have conscious access to one's tacit knowledge. If this is so, then there is no reason in principle that tacit knowledge must remain tacit and thus exempt from the acceptance condition. We might say then that tacit knowledge of language is tacit to the extent that it is initially inaccessible to the person to whom it is attributed, but that given the proper conditions, this inaccessibility can be converted to the kind of accessibility enjoyed by our ordinary knowledge and thus can be brought into play in relation to the acceptance condition.

As mentioned above, Chomsky believes that first-person reports regarding what one thinks one is doing are not always reliable. As he puts it, "We might ask Jones what rule he is following, but . . . such evidence is at best very weak because people's judgments as to why they do what they do are rarely informative or trustworthy". There is truth to this assertion, but a closer look is warranted.

What Chomsky seems to be referring to here is the normal indeterminacy that may and often does characterize an agent's first-person accounts of his or her reasons for performing in a given way. Such indeterminacy may be a product of any or all of a number of factors, including the relative attentiveness with which one does something, the degree of fine-grainedness or explicitness demanded of the first-person account, and the fact that internal states are not objectively separate from the first-person perspectives that form the basis of reports about those states. Absolute certainty here is out of the question - but that does not in and of itself invalidate first-person accounts.

In fact, I would be inclined to understand the indeterminacy of first person reports as analogous to the underdetermination of theory by evidence. Because of the latter, we cannot (and can never) be certain that the evidence pointing to certain theoretical conclusions is absolutely conclusive. But - and Chomsky has argued this point against Quine - such underdetermination does not in and of itself automatically invalidate any reasonable conclusions we may feel we are warranted in drawing from the evidence. Just because it is possible that our conclusions will be proven wrong by more or better or subsequent evidence does not mean that we are not justified in drawing the most reasonable conclusions we can based on the evidence available to us. A similar case can be made for the value of first-person accounts. They may be far from infallible, but because they represent expressions or manifestations of what one thinks is the case with oneself, they constitute admissible evidence regarding a person's attitude states.

In fact it reasonably can be held that because they tell us how things are with a person from that person's point of view, first-person reports have a certain privileged status in instances where we are trying to determine someone's attitude toward a given proposition or set of propositions. Evidence regarding what someone thinks he or she believes would seem to be especially relevant if we want to determine whether or not that person knows (believes) a given rule of language. It seems to me that in this case a first-person account would make for useful evidence that should not be ruled out on a priori grounds.

It is useful in this context to think of first-person reports in terms of the Wittgensteinian notion of criteria. Criteria, briefly, are normative considerations that provide grounds for justifying assertions, and thereby help to set conditions under which assertions are appropriate. By this reading, first-person reports would provide the criteria of assertibility constraining third-person ascriptions. First-person reports would, in other words, set out the normative conditions under which third-person ascriptions would be deemed appropriate or not, and would thus serve to restrict the range of attitudes we can (reasonably) ascribe to others. Since such criteria serve to help fix "what we may be wrong about, deceived about, under an illusion about", first-person reports would exert a potentially limiting influence on third-person ascriptions. Specifically, they would show how claims made from the third-person perspective may fall outside the range of possible understandings that reasonably can be attributed to the person in question.

By this light, the acceptance condition would act as the relevant criterion for ascribing knowledge of a given language rule. A third-person ascription that met the acceptance condition would, all things being equal, be considered a justified ascription. Conversely, a third-person ascription that did not meet the acceptance condition would, all things being equal, be difficult to justify. Thus S's accepting rule R as expressing his or her reason for uttering sentence A rather than sentence B would provide justification for ascribing knowledge of R to S. If on the other hand S did not accept R as expressing his or her understanding of the appropriate language behaviour, then by the criterion of the acceptance condition we would not be justified in ascribing knowledge of R to S.

In spite of their built-in indeterminacy, then, first-person reports and avowals would seem, for better or for worse, to be the relevant criteria by which to check the plausibility of third-person ascriptions of knowledge. Consequently, first-person reports and avowals would be useful in cases where we wish to adjudicate apparently conflicting claims regarding what a given speaker knows.

It is easy to see how such conflict could arise. Take, again, the example of S's producing sentence A rather than sentence B. This may be explained alternatively as being due to S's knowing that R - i.e., believing that adverbs are prohibited from intervening between verbs and their objects - or to S's knowing that R' - i.e., believing that the value for the case assignment parameter in E is strict adjacency. Given the two very different sets of propositions comprising these rules, we would appear to have two competing claims regarding the speaker's object of belief.

For even if we agree that rule R is nothing more than a consequence of the more general rule R', it is not at all certain that S would recognize this. Nor is it certain that S would understand the constituent concepts of which R' is comprised, even if he or she understood the constituent concepts of R. Many ordinary speakers of English (and of other languages) know what verbs, objects, and adverbs are, but do not know what strict adjacency is. We could expect these speakers' first-person accounts of why they produced sentence A rather than sentence B to be put in terms of verbs, objects, and adverbs, and not in terms of strict adjacency. Accordingly, we reasonably could expect that linguists (and perhaps only a subset of linguists) would be disposed to describe S's language behaviour in terms of strict adjacency, but that S, as an ordinary speaker, would not. As Chomsky concedes, many people may be reluctant to attribute knowledge of R' to S on account of the "unfamiliarity of the notions Case assignment and adjacency parameter." Chomsky, of course, does not hesitate to claim that S does indeed know the strict adjacency rule. But his willingness to acknowledge others' reluctance to grant this point is interesting.

Still, Chomsky holds that the unfamiliarity of the concepts used to explain S's behaviour is "irrelevant to the description of [S's] state of knowledge". What would seem to matter here is only that the concepts belong to the best available theory, in which case we must assume that they accurately reflect S's state of knowledge. But it seems to me that what really is at issue here is not the relative familiarity of the concepts per se, but rather whether or not these (or other) concepts are properly part of S's repertoire of beliefs about the language. It seems reasonable to suppose that S cannot have the requisite attitude toward concepts that he or she cannot be said to possess. If that is the case, then S's familiarity with the given concepts is hardly a matter of indifference. As Davies has pointed out in a similar context, whether or not a person understands the concepts he or she is said to know is indeed a relevant consideration.

In fact, it is difficult to see how one's not understanding a concept one is said to know can be irrelevant to deciding whether or not one knows a rule or proposition in which that concept figures. Consider the following example: I drink water because I am thirsty and I know that water will quench my thirst. But the best theory of why I drink water goes something like this: when I drink water, the water is absorbed into my bloodstream by osmosis as it enters my stomach. This causes both my blood volume and pressure to increase, and the osmotic strength of my blood to be restored to a normal level. Because this is the best theory, does that mean that I drink water because I know what that theory states?

If we take a position analogous to the position Chomsky takes regarding knowledge of the rules of language, it seems to me we would have to answer "yes." Just as Chomsky holds that S's producing sentence A is guided by his or her knowing that the value for the case assignment parameter is strict adjacency, we would hold that my drinking water is guided by my knowing that the intake of water works through osmosis to cause blood volume and pressure to rise, and osmolarity to reach the proper level. In both of these cases, we would be claiming that the person in question knows what the best theory available states about the reasons for his or her behaviour, and that this knowledge enters into the relevant mechanisms for producing the behaviour.

But do I in fact know this technical explanation for my drinking water? Again, it is drawn from the best theory available, and certainly, my behaviour is perfectly in accord with what it would be if I did know what the theory describes. But the fact is that I did not know the theory (nor for that matter had I even heard of the term osmolarity) until I asked an expert. I did not, in other words, have the requisite familiarity with the propositions I would have to have if I could be said to know what the theory states.

My having consulted an expert raises a crucial point. For, according to Chomsky, my knowledge includes what is known to experts within my speech community. Citing Putnam's notion of the division of linguistic labour, Chomsky asserts that the meaning of a term may be expressed in terms of the specialized knowledge of others in my speech community. By virtue of my being a member of a given speech community (presumably, in this case, speakers of English), in other words, my knowledge of language encompasses the best theories as formulated by the appropriate experts.

But if, as I believe we should, we are to agree that the acceptance condition sets legitimate assertibility criteria constraining knowledge ascriptions, we cannot automatically attribute the experts' knowledge to any given member of a speech community. Recall the definition of knowing introduced above: even given that S belongs to a speech community in which R' is accepted as the best available explanation of a particular language behaviour, we still would have to show that S him- or herself stands in the proper attitude to the propositions and concepts making up R'. It is not enough that someone from his or her speech community stands in such an attitude; he or she must him- or herself stand in that attitude.

In light of this, I believe we can reconceive the relationship between S and rule R' of the best (yet unfamiliar) theory explaining S's language behaviour. Assuming that ascription of knowledge of R' to S is unjustified given the acceptance condition and the corresponding criteria of assertibility set by S's first-person reports, we can say that, because R' is potentially available to S by virtue of its arising from the best theory available to the relevant experts in S's speech community, R' is the standard against which S's knowledge can be measured. This is not to say that S knows R', but rather that S's state of knowledge can be brought to a level such that S will accept R' as the correct explanation of the given language behaviour.

Like first-person criteria of assertibility, third-person standards of explanation cast our assertions of knowledge and avowals of belief in a normative light. My first-person report of why I think I behaved in a given way may be an adequate account of my own beliefs on the subject, but it may fail utterly as an adequate explanation of that behaviour - in the context of the most advanced or accepted thinking on the subject. The upshot of this is that we must think of the best third-person ascriptions of knowledge as hypotheses embodying explanatory standards that people may (or perhaps should) meet in the appropriate context.

This last qualifier is crucial, for there is a degree to which the adequacy of a response will be gauged in terms of the analytical or explanatory framework within which it is elicited. There may be circumstances in which "Because I knew it would quench my thirst" would be a sufficient answer to the question "Why did you drink that glass of water?" Similarly, it can be argued that there may be contexts -- the teaching of grammar to children, for instance - in which the preferability of sentence A to sentence B is better explained in terms of verbs, adverbs, and objects rather than in terms of strict adjacency.

In a general sense, third-person standards and first-person criteria set certain conditions that our assertions and avowals may meet. Explanations drawn from the best theories provide the standards toward which our own state of knowledge and repertoire of beliefs may aspire. Criteria of assertibility derived from first-person reports and avowals provide conditions placing constraints on what third-person ascriptions may hold. Thus even if the best theories for explaining behaviour serve as standards to which knowledge of that behaviour can aspire, first-person accounts still must be factored in as legitimate constraints on the range of third-person ascriptions.

When we are justified in believing a claim, we are often so justified because our belief is based on other beliefs. Yet, it is not an adequate defence of a belief merely to cite some other belief that supports it, for the supporting belief may have no epistemic credentials at all - it may be a belief based on mere prejudice, for example. In order for the supporting belief to do the work required of it, it must itself pass epistemic muster, standardly understood to mean that it must itself be justified. If so, however, the question of what justifies this belief arises as well. If it is justified on the basis of some yet further belief, that belief, too, will have to be justified; and the question will arise as to what justifies it.

Thus arises the regress problem in epistemology. Skeptics maintain that the regress cannot be avoided and hence that justification is impossible. Infinitists endorse the regress as well, but argue that the regress is not vicious and hence does not show that justification is impossible. Foundationalists and coherentists agree that the regress can be avoided and that justification is possible. They disagree about how to avoid the regress. According to foundationalism, the regress is avoided by finding a stopping point for it in terms of foundational beliefs that are justified, but not justified wholly by some relationship to further beliefs. Coherentists deny both the need for and the possibility of finding such stopping points for the regress. Sometimes coherentism is described as the view that allows that justification can proceed in a circle (as long as the circle is large enough), and that is one logically possible version of the view (though it is very hard to find a defender of this version of coherentism). The version of coherentism that is more popular, however, objects in a more fundamental way to the regress argument. This version of coherentism denies that justification is linear in the way presupposed by the regress argument. Instead, such versions of coherentism maintain that justification is holistic in character, and the standard metaphors for coherentism are intended to convey this aspect of the view. Neurath's boat metaphor - according to which our ship of beliefs is at sea, requiring the ongoing replacement of whatever parts are defective in order to remain seaworthy - and Quine's web of belief metaphor - according to which our beliefs form an interconnected web in which the structure hangs or falls as a whole - both convey the idea that justification is a feature of a system of beliefs.

To see exactly where this conception of justification takes a stand on the regress problem, a formulation of the standard sceptical version of the regress argument will be helpful. To formulate such an argument, we need to use the idea of an inferential chain of reasons. Such an inferential chain traces the inferential dependence of a given belief, including in it as first link the belief in question, as second link whatever reason justifies it, as third link whatever epistemically supports the reason in question, and so on. The sceptical argument then proceeds as follows: (1) no belief is justified unless its chain of reasons (i) is infinitely long, (ii) stops, or (iii) goes in a circle; (2) an infinitely long chain of reasons involves a vicious regress of reasons that cannot justify any belief; (3) any stopping point to terminate the chain of reasons is arbitrary, leaving every subsequent link in the chain depending on a beginning point that cannot justify its successor link, ultimately leaving one with no justification at all.

(4) Circular arguments cannot justify anything, leaving a chain of reasons that goes in a circle incapable of justifying any belief. Coherentists are ordinarily characterized as maintaining that premise 4 of this argument is false. Though such a view would count as a version of Coherentism, standard Coherentism has no quarrel with premise 4, but instead rejects premise 1 because it presupposes that justification is non-holistic. Premise 1 assumes that justification is linear rather than holistic in virtue of characterizing justification in terms of inferential chains of reasons, and it is this feature of the regress problem to which typical coherentists object.

In sum, then, Coherentism can be negatively characterized as the view that, first, agrees with foundationalism that there is no regress of justification that is infinite (thereby rejecting both skepticism and infinitism) and, second, disagrees with foundationalism that justification depends on having an inferential chain of reasons with a suitable stopping point. This negative point can be maintained either by denying that the chain has a stopping point, thereby endorsing a linear version of Coherentism, or by denying the assumption that justification requires the existence of an inferential chain of reasons, thereby endorsing a holistic viewpoint. Since the primary examples of Coherentism in the history of the view are holistic in nature, I will focus in the remainder of this entry on this version of the view.

Coherentists often defend their view by attacking foundationalism, implicitly relying on the implausibility of infinitism and skepticism. They attack foundationalism by arguing that no plausible version of the view will be able to supply enough in the way of foundational beliefs to support the entire structure of belief. This attack takes two forms. First, coherentists argue against the very idea of a basic belief, maintaining that it is always a sensible question to ask, “Why do you believe that (i.e., what reason can you give me for thinking that is true)?” Second, coherentists attack the idea that the kind of foundation developed will be adequate to support the structure. If, as is usual, foundationalists limit foundational beliefs to those about our experience in the specious present, it is hard to see how such a limited foundation can support the entire edifice of beliefs, including beliefs about the past and future, about the vast array of scientific opinion concerning both the observable realm and the unobservable, and about the abstract domain of mathematical and logical truth and the truths of morality. Foundationalists may, of course, introduce epistemic principles of justification that license whatever chain of reasons they wish to endorse from the foundations to the rest of the edifice of belief, but the resulting theory will look more and more ad hoc as new epistemic principles are offered whenever the threat of skepticism looms regarding a kind of belief not defensible by standard inductive and deductive rules of inference.

Regardless of the persuasiveness of these challenges to foundationalism, coherentists must and do go beyond negative philosophy to provide a positive characterization of their view. A bit of taxonomy and some specific examples will allow us to see how the required positive characterization is provided by coherentists. A useful taxonomy for Coherentism can be provided by distinguishing between subjective and objective versions of Coherentism. At a purely formal level, a version of Coherentism results from specifying two things: first, the things that must cohere in order for a given belief to be justified, and second, the relation that must hold among these things in order for the belief in question to be justified. Within the logical space of Coherentism, both features can be given subjective or objective construals.

Consider first the items that need to cohere. As noted already, coherentists typically adopt a subjective viewpoint regarding the items that need to cohere, maintaining that the system on which coherence is defined is the person's system of beliefs. Coherence could be defined relative to other, more objective systems, however. Social versions of Coherentism may define coherence relative to the system of common knowledge in a given society, for example, and religious versions may define coherence relative to some body of theological doctrine. These latter two systems are objective in that the obtaining of the system in question implies nothing about the person whose belief is being evaluated. For this reason, they tend to be rather implausible, since they deny the perspectival character of justification, according to which whether or not one's beliefs are justified depends on facts about oneself and one's own perspective on the world. Versions that combine subjective and objective features are also possible. For example, a theory might begin with the system of a person's beliefs, and supplement it with additional claims that any normal person would believe in that person's situation. It is true, however, that standard versions of Coherentism are subjective about the items relative to which coherence is defined.

Even if this aspect of the view is subjective, however, belief is not the only subjective item to which a theorist might appeal, leaving one to wonder what explains the uniform agreement among coherentists that coherence should be defined relative to the class of beliefs. The reasons for this uniformity fall into two categories. One kind involves the claim that the only other possibly relevant mental states are experiential states (appearance states, sensation states), and that such states cannot be reasons at all since they lack propositional content (see Davidson 1989). This viewpoint has little plausibility to it, however. It may be true that there are some experiential states without content (perhaps the experience of pain is an experiential state without content), but it is equally true that some have content. It can appear to a person that it is raining, and the mental state involved has as content the proposition that it is raining.

A more plausible way to pursue this kind of argument is to maintain that if experiential states play a role in justification, they'll have to be able to play that role whether or not they are the kind of state that has propositional content. So, if some lack content and cannot be reasons on account of lacking content, then experiential states cannot play a role at all.

The difficulty with this line of argument is the conception of reasons it involves. It is true that if an experience has no content, then it cannot be in virtue of its content that it provides a reason. Even so, it is far from obvious that a reason has to be a reason in virtue of its content, for if we attend to ordinary defences people give of their beliefs, they often cite their experience as a reason. One can question whether they are merely explaining their beliefs rather than justifying them, but when that distinction is clarified, they'll still cite their experience as their reason (“Why are you grimacing?” “Because my leg hurts.” “Why do you think your leg hurts?” “Because I can feel it.” “Well, your experience may explain why you believe that your leg hurts, but I'm not asking for an explanation of your belief, I'm asking you to provide a reason for thinking that your belief that your leg hurts is correct; can you give me such a reason?” “Yes, because I can feel it hurting . . .”).

The second category of defence for the idea that coherence is a relation on beliefs involves an argument to the effect that other mental states are either irrelevant to the question of the epistemic status of a belief (e.g., affective states such as hoping, wishing, fearing, and the like) or are insufficient for generating positive epistemic status (e.g., states such as sensation states or appearance states) - there is, after all, the issue of what to make of the sensory input, and that issue takes us beyond the sensation state itself (Lehrer 1974). The former point is unproblematic, but the latter point fails to imply the claim in question. Arguing that an appeal to experiential states is insufficient for justification in no way shows that an appeal to such states is not necessary for an adequate account of justification.

There is, however, a deeper motivation behind coherentists' aversion to defining coherence over a subjective system that includes experiential states. The worry is that appealing to experiential states in any way will result in a version of foundationalism. The understanding of foundationalism which results from the regress argument involves two features. The first is an asymmetry condition on the justification of beliefs - that inferential beliefs are justified in a way different from the way in which non-inferential beliefs are justified - and the second is an account of intrinsic or self-warrant for the beliefs which are foundationally warranted and which support the entire structure of justified beliefs. There are various proposals for how this latter commitment of foundationalism is to be formulated, but we can already see the outline of an argument for requiring that coherence not be defined over a system that includes experiential states. For if a theory were to include such states in the class of things with which a belief must cohere in order to be justified, the above considerations might seem to suggest that such a theory would have to involve some notion of intrinsic warrant or self-warrant. Some justification or warrant would be possessed by a belief, but not in virtue of some warrant-conferring relationship to any other belief. Hence, it might seem, this relation between the appearances and related beliefs would have to generate at least some positive degree of warrant for such beliefs, even if that warrant were not sufficient for full justification. Even if not sufficient for full justification, though, the theory would appear typically foundationalist in that it includes some notion of positive warrant not dependent on any relationship to other beliefs.

This argument is quite persuasive, but is ultimately flawed. The distinctive feature of foundationalism, in the context of the relationship between appearances and beliefs, is that this relation between appearances and beliefs is taken to be one which imparts positive epistemic status (perhaps only in the absence of defeaters). So, for example, if a version of foundationalism appeals to the appearance that it is raining as that which undergirds the foundational warrant for the belief that it is raining, that theory must maintain that the appearance supplies some positive warrant for the belief. It is this warrant-conferring requirement that allows Coherentism to escape the above argument, for it is open to coherentists to deny that appearances impart, or tend to impart (even in the absence of defeaters), any degree of positive epistemic status for related beliefs. The coherentists can maintain, instead, that appearances are necessary (in the usual situations) for those beliefs to have some degree of positive epistemic status, but in no way sufficient in themselves for any degree of positive epistemic status. Coherentists can go on to identify what would be sufficient in conjunction with the relation to appearances in typically coherentist fashion, focussing on the way in which any one of our beliefs is related to an entire system of information in question. The resulting theory would be one in which experience plays a role, but not the kind of role that is distinctive of foundationalism.

Another way to make this same point is to recall that Coherentism is not committed to the view that coherence is a relation on the system of the person's beliefs. For one thing, coherence might be a relation on an objective body of information, perhaps in the form of coherence with some body of common knowledge (or, more plausibly, by supplementing a system of beliefs with information any normal person would believe). So although coherentists typically defend a subjective version of the items over which coherence is defined, there is no definitional requirement on the view that coherence must be a relation on a system of beliefs. That conclusion could be drawn only if there were a sound argument that showed that any appeal to experience would turn a theory into a version of foundationalism. Since the argument for that conclusion is flawed as explained above, Coherentism proper need not prohibit the subjective system over which coherence is defined from containing experiential states.

The second positive feature required of Coherentism is a clarification of the relation of coherence itself, and here again we find an important distinction between subjective and objective approaches. The most popular objective approach is explanatory Coherentism, which defines coherence in terms of that which makes for a good explanation. On such a view, hypotheses are justified by explaining the data, and the data are justified by being explained by our hypotheses. The central task for such a theory is to state conditions under which such explanation occurs. BonJour (1985) presents a different objective account of the coherence relation, citing the following five features in his account: (1) logical consistency; (2) the extent to which the system in question is probabilistically consistent; (3) the extent to which inferential connections exist between beliefs, both in terms of the number of such connections and their strength; (4) the inverse of the degree to which the system is divided into unrelated, unconnected subsystems of belief; and (5) the inverse of the degree to which the system of belief contains unexplained anomalies.

These factors are a good beginning toward an account of objective coherence, but by themselves they are not enough. We need to be told, in addition, what function on these five factors is the correct one by which to define coherence. That is, we need to know how to weight each of these factors to provide an assessment of the overall coherence of the system.
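The weighting problem just raised can be made concrete in a short sketch. Everything numerical here is an illustrative assumption: BonJour proposes no weights, and a simple weighted sum is only one candidate function among many; the factor names merely echo the five features listed above.

```python
# Hypothetical weighted-sum scoring of the five coherence factors listed above.
# The weights and the linear form are illustrative assumptions, not BonJour's
# own proposal; choosing them is precisely the open problem the text identifies.

FACTORS = [
    "logical_consistency",        # (1)
    "probabilistic_consistency",  # (2)
    "inferential_connectedness",  # (3)
    "non_fragmentation",          # (4) inverse of division into subsystems
    "anomaly_freedom",            # (5) inverse of unexplained anomalies
]

# Invented weights summing to 1.
WEIGHTS = {
    "logical_consistency": 0.30,
    "probabilistic_consistency": 0.20,
    "inferential_connectedness": 0.25,
    "non_fragmentation": 0.15,
    "anomaly_freedom": 0.10,
}

def coherence_score(scores: dict) -> float:
    """Overall coherence of a belief system, given each factor scored in [0, 1]."""
    return sum(WEIGHTS[f] * scores[f] for f in FACTORS)

# A system that is fully consistent but fragmented and anomaly-ridden:
example = {
    "logical_consistency": 1.0,
    "probabilistic_consistency": 1.0,
    "inferential_connectedness": 0.5,
    "non_fragmentation": 0.2,
    "anomaly_freedom": 0.1,
}
```

A different choice of weights, or a non-linear combination, would rank the same belief systems differently, which is exactly why specifying the correct function matters.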

Even such a specification of the correct function on these factors would not be enough. One obvious fact about justification is that not all beliefs are justified to the same degree, so once we know what the overall coherence level is for a system of beliefs, we will need some further account of how this overall coherence level is used to determine the justificatory level of particular beliefs. It would be easy if the justificatory level simply matched the overall coherence level for the system itself, but this easy answer conflicts with the fact that not all beliefs are justified to the same degree.

One way to address this problem is to distinguish between beliefs and strength of belief or degrees of belief. We believe some things more strongly or to a greater degree than other things. For example, I believe there is a cup of coffee on my desk much more strongly than I believe that I visited my parents in 1993, even though I believe both of those claims. Using the concept of a degree of belief, a coherentist may be able to identify what degree of belief coheres with a system of (degrees of) belief, and thereby explain how some beliefs are more justified than others. The explanation would be that one belief is more justified than another just in case a greater degree of belief coheres with the relevant system for one of the two beliefs.

The best-known example of a theory that employs the language of degrees of belief is also a useful example of a subjective account of the coherence relation. Such a subjective account can be developed by identifying a subjective theory of evidence that determines whether and when a person's belief, or degree of belief, is justified. A beautiful and elegant theory of this sort is a version of probabilistic Bayesianism. The version in question identifies justified beliefs with probabilistic coherence, so that a (degree of) belief is justified if and only if it is part of a system of beliefs against which no Dutch book can be made. (A Dutch book is a series of fair bets which, if accepted, is guaranteed to produce a net loss.) In addition, this version of Bayesianism places a conditionalization requirement on justified changes in belief. Conditionalization requires that when new information is learned, one's new degree of belief match one's conditional degree of belief on that information prior to learning it. So if p is the new information learned, one should change one's degree of belief in q so that it matches one's degree of belief in q given p (together with everything else one knows) prior to learning p. The idea is that each person has an internal, subjective theory of evidence at a given time, in the form of conditional beliefs concerning all possible future courses of experience, so that when new information is acquired, all one needs to do is consult one's prior conditional degree of belief to determine what one's new degree of belief should be. Further, it is this subjective theory of evidence that defines the relation of coherence on the system of beliefs in question: coherence obtains when a belief conforms to the subjective theory of evidence in question, given the other items in the set of things over which coherence is defined.
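The conditionalization requirement can be illustrated with a minimal sketch. The numbers and helper names are invented for illustration: degrees of belief over two propositions p and q are represented as a joint distribution over their four truth-value combinations, and learning p amounts to renormalizing the p-worlds.

```python
# A minimal sketch of Bayesian conditionalization as described above.
# The joint distribution below is an invented example, not from the text.

prior = {
    (True, True): 0.4,   # p and q
    (True, False): 0.1,  # p and not-q
    (False, True): 0.2,  # not-p and q
    (False, False): 0.3, # not-p and not-q
}

def degree(dist, pred):
    """Degree of belief in the set of worlds satisfying pred."""
    return sum(pr for world, pr in dist.items() if pred(world))

def conditionalize_on_p(dist):
    """New degrees of belief after learning p: renormalize the p-worlds."""
    p_prob = degree(dist, lambda w: w[0])
    return {w: (pr / p_prob if w[0] else 0.0) for w, pr in dist.items()}

posterior = conditionalize_on_p(prior)
# The new degree of belief in q equals the prior conditional degree of
# belief in q given p: P(q and p) / P(p) = 0.4 / 0.5 = 0.8
```

The same renormalization also shows why the prior conditional beliefs do all the work: once they are fixed, the posterior is fully determined by whatever information is learned.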

More generally, subjective versions of the coherence relation can be thought of in terms of the specification of a theory of evidence that is fully internal to the believer. One obvious way for the theory of evidence to be fully internal is for the theory of evidence to be contained within the belief system itself, as is true on the Bayesian theory above. There are other options, however. A subjective theory could appeal to dispositions to believe rather than to actual beliefs, or to something like one's deepest epistemic standards for trying to get to the truth and avoid error. Foley (1986) develops such a view in service of a type of foundationalist theory, understanding one's deepest standards in terms of the views one would hold given time to reflect without limitation and interference, and subjective coherentists could adopt much of this account in service of their view.

This broader characterization of the options open to subjective versions of the coherence relation carries the additional cost of appealing to the concept of what is internal to a believer, a notion that is none too clear (see the related entry justification, epistemic, internalist vs. externalist conceptions of). In broad terms, there are two important ways of thinking about what is internal here, one emphasizing whether the feature in question is somehow “in the head”, and the other emphasizing whether the feature is accessible to the believer on the basis of reflection alone. Unconscious beliefs would count as internal in the first sense, but not in the second; one's own existence is internal in the second sense, but presumably not in the first.

When offering a taxonomy of subjective versus objective characterizations of the coherence relation, it is not necessary to prefer one of these characterizations of what is internal. Instead, we can allow either to be used to specify a subjective account. Doing so places a greater burden on what kinds of arguments could be given for preferring one account of the coherence relation to another, and here the arguments will proceed in two stages. The first stage will address whether one's account of the coherence relation should be objective or subjective. On the side of an objective construal are the manifold intuitions in which we describe views as unjustified even though they are, from the point of view of the believer, the best view to hold. For example, we would say that cultic beliefs, such as the belief that accepting a blood transfusion is a terrible thing to do, are unjustified; and our judgment is not altered by learning that the believer in question was raised in the cult and can't be held responsible for knowing better. On the side of a subjective construal are the arguments for access internalism, according to which the fact that some people can't be held responsible for knowing better is a clear sign that their beliefs are justified, for justification is a property whose presence is detected by careful reflection. Another argument for subjective accounts relies on the new evil demon problem. Descartes' evil demon problem threatens the truth of our beliefs, for the demon makes the beliefs of the denizens of that world false. The new evil demon problem involves the concept of justification rather than truth, threatening theories that require objective likelihood of truth for a belief to be justified. For beliefs in demon worlds are false and likely to be so, but seem to have the same epistemic status as our beliefs do, since, after all, they could be us.

Recently, a new argument has appeared for subjective accounts of justification and, by extension, for subjective accounts of the coherence relation, if Coherentism is the preferred theory of justification. This argument appeals to the idea that an adequate theory of knowledge needs to account both for the nature of knowledge and for the value of knowledge. This issue arose first in Plato's dialogue between Meno and Socrates, in which Meno originally proposes that knowledge is more valuable than true belief because it gets us what we want (his particular example is finding the way to Larissa). Socrates points out that true belief will work just as well, a response that befuddles Meno. When he finally replies, he expresses perplexity regarding two things. He first wonders whether knowledge is more than true belief, and he also questions why we prize knowledge more than true belief. The first issue is one concerning the nature of knowledge, and the second concerning the value of knowledge. To account for the nature of knowledge requires minimally that one offer a theory of knowledge that is counterexample-free. To account for the value of knowledge requires an explanation of why knowledge is more valuable than its (proper) parts, including true belief and justified true belief (for more on why knowledge is more than justified true belief, see knowledge, analysis of). Such an explanation would seem to require showing two things: first, that justified true belief is more valuable than true belief; and second, that justified true belief plus whatever further condition is needed to produce a counterexample-free account of the nature of knowledge is more valuable than justified true belief on its own. These requirements show the need for a conception of justification that adds value to true belief, and it is difficult for objective theories of justification to discharge this obligation.
In the context of objective accounts of the coherence relation, such an account would be governed by a formal constraint to the effect that satisfying that account would increase one's chances of getting to the truth, and theories of justification guided by such a constraint are prime examples of theories that find it difficult to explain why justified true belief is more valuable than mere true belief. The problem they encounter is called "the swamping problem." It occurs when values interact in such a way that their combination is no more valuable than one of them separately, even though both factors are positively valuable. Examples that provide relevant analogies to the epistemic case include: beautiful art is no more valuable in terms of beauty for having been produced by an artist who usually produces beautiful artwork; functional furniture has no more functional value for coming from a factory that normally produces functional furniture. Just so, true beliefs are no more valuable from the epistemic point of view - the point of view defined in terms of the goal of getting to the truth and avoiding error - by having the additional property of being likely to be true.

Adopting a subjective theory allows one to avoid the swamping problem. The swamping problem arises for theories that characterize the teleological concept of justification in terms of properties whose presence makes a belief an effective means for getting to the goal of believing the truth and avoiding error. Subjective theories may also characterize the relationship between justification and truth in terms of a means/ends relationship, but they reject the requirement that something is a means to an end only if it is an effective means to that end, i.e., only if it increases the objective chances of that goal being realized. Subjectivists advert to the deepest and most important goals in life as examples, for such goals are rarely ones for which we have much idea of which means will be effective. Consider, for example, the goal of securing some particular person as a spouse, or the goal of raising psychologically healthy, emotionally responsible children. In each case, there are well-known ways in which achieving these goals can be sabotaged, and so we try not to proceed in that fashion. The problem is that there are too many ways that have worked for other people in securing similar goals, with no good way of assessing which of these ways would be effective in the present case. Doing nothing will certainly not work, but among the various actions available, we can only choose and hope for the best.

Subjectivists say the same for beliefs. They maintain that what is objectively a good ground for a belief is no more transparent to us than is how to maximize happiness over a lifetime. We learn by trial and error what to base our beliefs on, in much the same way as we fumble along in trying for a fulfilling existence. In doing our best in the pursuit of truth, subjectivists hold, we generate justification for our beliefs, even if all we have is hope that our grounds for belief make our beliefs likely to be true.

Whether these arguments on behalf of subjectivism in the theory of knowledge are weighty enough to overcome the strong intuitions on behalf of more objective accounts is not yet settled, though there is something approaching a consensus that subjectivism cannot quite be right in spite of the arguments in its favour. To the extent that the arguments are deemed plausible, a burden is created for relieving the tension that exists between the attractions of objective accounts and the arguments for subjective accounts. One move to reconcile this conflict is to posit different senses of the term ‘justified’ and its cognates. There are costs to such a move, however. One cost is that it makes subjectivists and objectivists out to be confused, thinking they are disagreeing when they are not. In ordinary cases when a term has more than one meaning, competent speakers of the language are not confused in this way. Another cost is that ambiguity must be posited without any linguistic clues to its existence, and ambiguities that linguists would not discover and that only philosophers can detect are suspect for that reason.

Besides these family disputes within the coherentist clan, there are various problems that threaten to undermine every version of Coherentism. The focus here will be on three problems that have been widely discussed: problems related to the non-linear character of Coherentism, the input problem, and the problem of the truth connection.

The non-linear approach adopted by the most popular versions of Coherentism raises concerns that Coherentism is incompatible with a proper account of the basing relation. In brief, an account of the basing relation is needed to explain the difference between a situation where a person has good evidence for a belief, but believes it for other reasons, and a situation where the person holds the belief because of, or on the basis of, the evidence. The idea behind an appeal to the basing relation is that if the explanation of a person's belief does not appeal to the evidence for the belief, then the belief itself is not justified (even if the person has good evidence for the belief and thus the content of the belief is, in some sense, justified for that person). In the latter case, where the belief is based on the evidence for it, we will say that the belief is doxastically justified; in the former, where there is good evidence for the belief but the belief is held on other grounds, we will say that the belief is only propositionally justified.

The difficulty is that this way of drawing the distinction makes it appear that holistic Coherentism can only use the distinction if, somehow, the entire belief system of a person explains the holding of each belief that is a part of the system since, it would seem, a belief needs to be based on that which justifies it if the belief is to be properly based. If Coherentism is at its best in its holistic guises, then Coherentism succumbs because it is unable to distinguish properly based from improperly based beliefs (see Pollock 1985). If one goes so far as to maintain the stronger position that Coherentism can only be a holistic theory, then coherentists may find themselves in the position of having to maintain that all warranted beliefs are properly basic. For if holistic coherentists cannot draw a distinction between properly and improperly based beliefs, every belief will have automatically survived all requisite tests for warrant just by cohering with the relevant system. If a belief is properly based when it has survived all appropriate scrutiny, then all warranted beliefs will be properly basic, according to Coherentism.

Another way to voice this complaint is to find in the belief system a set of beliefs that can be inferentially related in an appropriate way, thereby allowing for the final step of the inference to be justified. It doesn't follow, however, that any inferential path using the same set of beliefs is a justifying one, simply because one such path is. So suppose there are two paths through the same set of five beliefs, one allowing for justification and the other not allowing for it. Let the contents of the beliefs be p, q, r, s, and t. Further, let each belief imply the next in sequence, i.e., p implies q, q implies r, and so forth. Assume as well that p, q, r, and s are all justified for the person in question. If so, a person can come to justifiably believe t by inferring from p to q to r to s and then to t. Suppose, however, that there are no other inferential relationships here besides the ones already assumed. If the order of inference were from p to s to r to q and then to t, believing t would not be justified. If holistic Coherentism can only explain proper basing in terms of whatever justifies the belief, then holistic Coherentism will be in trouble, since there is no difference between the systems of beliefs in the two cases. The only difference is in the order of inference, and this difference need imply no difference in belief.

One resource for a coherentist to use in replying to this concern about the basing relation is to distinguish between that which justifies a belief and that which is epistemically relevant to the epistemic status of belief, using this distinction to challenge the assumption that proper basing must be characterized in terms of that which justifies a belief. Consider a very abstract example. Suppose we have evidence e for p. This evidence can be defeated by further information we have, and this defeater might itself be undermined by even further information, information that would reinstate justification for p. Furthermore, there is no limit to the complexity that might be involved in this sequence of defeaters and reinstaters. Suppose, then, that the sequence of defeaters and reinstaters is significantly complex, e.g., suppose there are 20 levels of defeaters and reinstaters. From the perspective of a linear view, what must the person base a belief that p on in such a case in order for that belief to be justified? It would be unrealistic to assume that all 20 levels play a causal role in the belief, for it is not necessary to consider explicitly the sequence of defeaters and reinstaters in order to be justified in believing p. All that is necessary is that there be a reinstater for every level of defeat. If so, however, even a linear theorist will give an account of the basing relation on which it is acceptable to base a belief on something other than that which justifies the belief, all-things-considered.

Such a theorist may still maintain that one must base the belief on something that imparts prima facie justification (the kind of justification that will be all-things-considered justification if there is a reinstater for every defeater). What matters to the present discussion, however, is that even for non-holists there can be parts of a system of beliefs that are relevant to the justificatory status of a belief and yet which need not play a role in the proper basing of a justified belief. If, on the one hand, everything involved in the all-things-considered justification of a belief has to play a role in the basing relation, then every theory will be susceptible to unrealistic assumptions about the basing relation, for it is implausible to think that known rebutted defeaters enter into any kind of causal or deliberative process of belief formation, and hence they are not suitable candidates for helping to explain the presence of the resulting belief. For example, if I build a room with a blacklight in it, but include a device to block the light from shining on anything less than six feet off the floor, then I can know the colour of my daughter's shirt without this information about room construction entering into the story of belief formation - I need not consciously think of that information or engage in any inference guided by it, and that information need not be part of the cause of my belief. If, on the other hand, a belief can be properly based by being based on only part of the all-things-considered justification for the belief, then holists are free to clarify the basing relation in non-holistic terms as well. They can say that a belief is properly based when its presence is explained by features relevant to the all-things-considered justificatory status of a belief, even if these features themselves do not constitute an all-things-considered justification of the belief.

A simple example of such a feature illustrates how this idea would work in a holistic setting. On a holistic theory, every particular belief is insufficient for warrant on its own. Even so, a given belief might be an essential ingredient of the larger system on which coherence is defined, where that system is one of the systems under which a target belief in question could be justified. In such a case, the belief is relevant to the epistemic status of the target belief, even though it imparts no warrant to the target belief. Beliefs with such special epistemic relevance can be used to clarify what is required for a belief to be properly based without violating the holistic requirement that no such beliefs impart any degree of warrant by themselves.

A second major problem for Coherentism is the isolation objection, also called “the input problem,” which Laurence BonJour formulates as follows: Coherence is purely a matter of the internal relations between the components of the belief system; it depends in no way on any sort of relation between the system of beliefs and anything external to that system. Hence if, as a coherence theory claims, coherence is the sole basis for empirical justification, it follows that a system of empirical beliefs might be adequately justified, indeed might constitute empirical knowledge, in spite of being utterly out of contact with the world that it purports to describe. Nothing about any requirement of coherence dictates that a coherent system of beliefs need receive any sort of input from the world or be in any way causally influenced by the world (BonJour 1985).

The input problem concerns the relationship between a system of beliefs and the external world. It underlies a multitude of counterexamples to Coherentism in which we take a person at a given time with a coherent system of beliefs whose system of beliefs meshes well with their experience of the world at that given time. We then freeze this coherent system of beliefs, and vary the person's experience (so that the person still thinks, e.g., he's climbing a mountain when he's really at an opera house experiencing a performance of La Boheme), thereby isolating the system of beliefs from reality. The result is that Coherentism seems to be a theory that allows coherence to imply justification even when the system of beliefs is completely cut off from individuals' direct experience of the world around them.

The standard response by coherentists is to try to find a way to require some effect of experience in a belief system, perhaps in the form of spontaneous beliefs (BonJour 1985). Such attempts are not very promising, and lead to the impression that the only way to deal with the input problem is to transform Coherentism into a version of foundationalism. That is, the harder coherentists try to find some ineliminable effect of experience on a belief system, the more their theory hinges on finding a role for experience in the story of justification; and when foundationalism is conceived as the kind of theory that allows such a role, then the efforts of coherentists to find such a role for experience look more like acquiescence to the inevitability of affirming foundationalism. For if the only way to avoid the isolation objection is to insist that a belief system must be responsive to experience in order for the beliefs involved to be justified, and if any appeal to experience commits one to foundationalism, then Coherentism succumbs to the isolation objection. Nevertheless, there is nothing in Coherentism proper that requires coherence to be defined solely as a relation on beliefs. It is a mere artifact of the history of the view that coherentists always claim such, and whatever the force of the isolation objection against standard versions of Coherentism, it disappears as a problem unique to coherence theories once experience is allowed to play a role in a coherentist theory.

A longstanding objection to Coherentism can be expressed by noting that a good piece of fiction will display the virtue of coherence, but it is obviously unlikely to be true. The idea is that coherence and likelihood of truth are so far apart that it is implausible to think that coherence should be conceived of as a guide to truth at all, let alone the singular such guide that justification is supposed to constitute.

This concern over the truth connection is sometimes put in the form of the alternative systems objection, according to which there is always some coherent system to fit any belief into, so that if a person were to make sufficient changes elsewhere in the system, any belief could be justified. This particular version of the worry involves too many distractions from the fundamental problem, however. For one thing, it appeals to the idea of making vast changes to one's system of beliefs, but beliefs are not the sort of thing over which we typically can exert control. Furthermore, there is no reason to think that only one system of beliefs can be justified, so rather than constituting an objection to Coherentism, this particular formulation of the problem in question looks more like a pleasantly realistic consequence of any adequate theory of justification.

Hidden behind the explicit language of the alternative systems objection, however, is a deeper concern relying on the idea that justification is somehow supposed to be a guide to truth, and mere coherence is not a likely indicator of truth. The deeper concern will have to be formulated carefully, however, for once we see the proper response to the isolation objection above, it is far from clear how Coherentism suffers from any failure on this score that would not equally undermine foundationalism. For one way of thinking about the isolation objection is in terms of the idea that coherent systems of belief can be completely cut off from reality, in the same way that a good piece of fiction can be, and once such severance occurs, likelihood of truth must go as well. As we have seen, however, nothing about Coherentism proper forces it to succumb to this problem (as long as finding a role for experience in the story of justification blocks the objection, as it must if foundationalism can escape the objection), and if coherentists are able to find a role for experience in their theory, then coherentism cannot be criticized for failure to provide a suitable guide to truth any more than foundationalism can.

Moreover, there are problems with casual formulations of the truth concern. First, such casual formulations can run into difficulty explaining how one can be justified in believing a scientific theory rather than believing merely the conjunction of its empirical consequences. Since the theory implies its empirical consequences, the conjunction will, in ordinary cases, have a higher probability than the theory (since it is a theorem of the probability calculus that if A entails B, then the probability of A is less than or equal to the probability of B). Second, casual formulations of the truth concern ordinarily fall prey to the new evil demon problem discussed earlier. Inhabitants of demon worlds would appear to have roughly the same justified beliefs that we have (since they could be us), but their beliefs have little chance of being true. So any formulation of the truth concern that insists that justification must imply likelihood of truth will have to find an answer to the new evil demon problem. Further, one of the fundamental lessons of the lottery and preface paradoxes has been held to be that justified inconsistent beliefs are possible. (The lottery paradox begins by imagining a fair lottery with a thousand tickets in it. Each ticket is so unlikely to win that we are justified in believing that it will lose. So we can infer that no ticket will win. Yet we know that some ticket will win. In the preface paradox, authors are justified in believing everything in their books. Some preface their book by claiming that, given human frailty, they are sure that errors remain, errors for which they take complete responsibility. But then they justifiably believe both that everything in the book is true, and that something in it is false, from which a contradiction can be easily derived.) 
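The two probabilistic points above can be checked with simple arithmetic. Here is a minimal sketch, assuming a toy sample space of equally likely worlds and the 1,000-ticket lottery described in the text; it verifies both the monotonicity theorem (if A entails B, then P(A) is at most P(B)) and the lottery arithmetic:

```python
from fractions import Fraction

# A toy sample space of six equally likely "worlds" (an illustrative
# assumption), checking the theorem cited above: if A entails B
# (every A-world is a B-world), then P(A) <= P(B).
worlds = set(range(6))
A = {0, 1}            # e.g., "the theory is true"
B = {0, 1, 2, 3}      # e.g., "the conjunction of its empirical consequences is true"
assert A <= B         # entailment: A is a subset of B

def prob(event):
    return Fraction(len(event), len(worlds))

assert prob(A) <= prob(B)   # 1/3 <= 2/3

# Lottery arithmetic: 1,000 tickets, exactly one of which wins.
# Each belief "ticket i will lose" has probability 0.999, yet the
# conjunction "every ticket loses" has probability zero.
n = 1000
p_ticket_i_loses = (n - 1) / n
p_all_lose = 0.0
assert p_ticket_i_loses > 0.99   # each belief individually well supported
assert p_all_lose == 0.0         # the whole system has no chance of truth
```

This makes vivid why the justified-inconsistent-beliefs lesson bites: each member of the set is highly probable while their conjunction is certainly false.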
The paradoxes are paradoxical because contradictory beliefs cannot be justified, but inconsistent beliefs, even when the inconsistency is known, are not the same thing as contradictory beliefs (the challenge, of course, is to find a principled way to stop the inconsistency from turning into a contradiction). If justified inconsistent beliefs are possible, and it surely seems that they are, then a system of beliefs can be justified even if the entire system has no chance whatsoever of being true. . . .

This possibility of justified inconsistent beliefs has been held to constitute a refutation of coherentism (see, e.g., Foley 1986), but some coherentists have demurred (e.g., Lycan 1996). One idea is to partition a system of beliefs and only apply the requirement of consistency within partitions of the system, not to the entire system itself. If consistency applies only within partitions, then, presumably, that is also where coherence does its work, leaving us with a coherence theory that is less than globally holistic. A further issue is how the partitioning is to be accomplished, and in the absence of an account of how to do so, it remains undetermined whether the possibility of justified inconsistent beliefs is compatible with coherentism.

It is fair to say that the issue of the truth connection has not been resolved for coherentism. In a way, this fact should not be surprising since the issue of the truth connection is a fundamental issue in epistemology as a whole, and it affects not only coherentism but its competitors as well.

Unlike the truth condition, condition (ii), the belief condition, has generated at least some discussion. Although initially it might seem obvious that knowing that p requires believing that p, some philosophers have argued that knowledge without belief is indeed possible. Suppose Walter comes home after work to find out that his house has burned down. He utters the words "I don't believe it." Critics of the belief condition might argue that Walter knows that his house has burned down (he sees that it has), but, as his words indicate, he does not believe it. Therefore, there is knowledge without belief. To this objection, there is an effective reply. What Walter wishes to convey by saying "I don't believe it" is not that he really does not believe what he sees with his own eyes, but rather that he finds it hard to come to terms with what he sees.

A more serious counterexample has been suggested by Colin Radford. Suppose Albert is quizzed on English history. One of the questions is: "When did Queen Elizabeth die?" Albert doesn't think he knows, but answers the question correctly. Moreover, he gives correct answers to many other questions to which he didn't think he knew the answer. Let us focus on Albert's answer to the question about Elizabeth: (E) Elizabeth died in 1603. Radford makes the following two claims about this example: (a) Albert does not believe (E), and (b) Albert nevertheless knows (E). Since he takes (a) and (b) to be true, Radford would argue that knowledge without belief is indeed possible. How would an advocate of the JTB account respond to Radford's proposed counterexample? Their response would be, in short, that this is not a case of knowledge without belief because it isn't a case of knowledge to begin with. Albert doesn't know (E) because he has no justification for believing (E). If he were to believe (E), his belief would be unjustified. This reply anticipates what we have not yet discussed: the necessity of the justification condition. Let us first discuss why friends of JTB hold that knowledge requires justification, and then discuss in greater detail why they would not accept Radford's alleged counterexample.

Why is condition (iii) necessary? Why not say that knowledge is true belief? The standard answer is that to identify knowledge with true belief would be implausible because a belief that is true just because of luck does not qualify as knowledge. Beliefs that are lacking justification are false more often than not. However, on occasion, such beliefs happen to be true. Suppose William takes a medication that has the following side effect: it causes him to be overcome with irrational fears. One of his fears is that he has cancer. This fear is so powerful that he starts believing it. Suppose further that, by sheer coincidence, he does have cancer. So his belief is true. Clearly, though, his belief does not amount to knowledge. But why not? Most epistemologists would agree that William does not know because his belief's truth is due to luck (bad luck, in this case). Let us refer to a belief's turning out to be true because of mere luck as epistemic luck. It is uncontroversial that knowledge is incompatible with epistemic luck. What, though, is needed to rule out epistemic luck? Advocates of the JTB account would say that what is needed is justification. A true belief, if an instance of knowledge and thus not true because of epistemic luck, must be justified. But what is it for a belief to be justified?

Among the philosophers who favour the JTB approach, we find bewildering disagreement on how this question is to be answered. According to one prominent view, typically referred to as "evidentialism", a belief is justified if, and only if, it fits the subject's evidence. Evidentialists, then, would say that the reason why knowledge is not the same as true belief is that knowledge requires evidence. Opponents of evidentialism would say that evidentialist justification (i.e., having adequate evidence) is not needed to rule out epistemic luck. They would argue that what is needed instead is a suitable relation between the belief and the mental process that brought it about. What we are looking at here is an important disagreement about the nature of knowledge, which will be our main focus further below. In the meantime, we will continue our examination of the JTB analysis.

Let us return to Radford's counterexample to the belief condition, which we considered above. We are now in a position to discuss the reply to it further. Recall that Albert does not take himself to know the answer to the question about the date of Elizabeth's death. He does not because he does not remember having learned the basic facts of British history. Now, it is of course true that he did learn these facts, and is indeed able to recall them. But is this by itself sufficient for knowing them? Philosophers who think that knowledge requires evidence would say that it is not. Albert needs to have evidence for believing that he learned those facts. Until he is quizzed, he has no such evidence. After the quiz, when he is told that most of his answers were correct, he does have the requisite evidence. For once he comes to know that he is able to produce consistently correct answers to the questions he is asked, he has acquired evidence for believing that he must have learned this subject matter at school. This evidence is also evidence for the answers he has given. So at that point, the justification condition is met, and thus (since the other conditions of knowledge are also met) he knows (again) that Elizabeth died in 1603. However, he did not know this before he found out that he must have learned those facts, for at that point his answer to the question lacked justification, and thus did not add up to knowledge. Evidentialists would deny, therefore, that Radford has supplied us with a counterexample to the belief condition.

In "Is Justified True Belief Knowledge?", Edmund Gettier presented two effective counterexamples to the JTB analysis. The second of these goes as follows. Suppose Smith has good evidence for the false proposition (1) Jones owns a Ford. Suppose further Smith infers from (1) the following three disjunctions: (2) Either Jones owns a Ford or Brown is in Boston. (3) Either Jones owns a Ford or Brown is in Barcelona. (4) Either Jones owns a Ford or Brown is in Brest-Litovsk. Since (1) entails each of the propositions (2) through (4), and since Smith recognizes these entailments, he is justified in believing each of propositions (2)-(4). Now suppose that, by sheer coincidence, Brown is indeed in Barcelona. Given these assumptions, in believing (3), Smith holds a justified true belief. However, is it an instance of knowledge? Since Smith has no evidence whatever as to Brown's whereabouts, and believes what is true only because of luck, the answer would have to be ‘no’. Consequently, the three conditions of the JTB account -- truth, belief, and justification -- are not sufficient for knowledge. How must the analysis of knowledge be modified to make it immune to cases like the one we just considered? This is what is commonly referred to as the "Gettier problem".
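The logical skeleton of the case can be made explicit with a few truth-value assignments. This is only an illustrative sketch (the variable names are hypothetical labels for propositions (1)-(4)):

```python
# Truth values in Gettier's second case. Smith's evidence supports (1),
# but (1) is in fact false; by coincidence, Brown is in Barcelona.
jones_owns_ford = False          # (1): justified for Smith, yet false
brown_in_boston = False
brown_in_barcelona = True        # true by sheer luck
brown_in_brest_litovsk = False

# Disjunction introduction: each of (2)-(4) is entailed by (1), so each
# inherits Smith's justification for (1).
p2 = jones_owns_ford or brown_in_boston         # (2): false
p3 = jones_owns_ford or brown_in_barcelona      # (3): true, justified, yet not knowledge
p4 = jones_owns_ford or brown_in_brest_litovsk  # (4): false

assert p3 and not p2 and not p4
```

The point the sketch displays is that (3) comes out both justified (via entailment from (1)) and true (via the wrong disjunct), which is exactly the combination the JTB analysis wrongly certifies as knowledge.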

Epistemologists who think that the JTB approach is basically on the right track must choose between two different strategies for solving the Gettier problem. The first is to strengthen the justification condition. This was attempted by Roderick Chisholm. The second strategy is to search for a suitable further condition, a condition that would, so to speak, "degettierize" justified true belief. Let us focus on this second strategy. According to one suggestion, the following fourth condition would do the trick: (iv) S's belief that p is not inferred from any falsehood.

Unfortunately, this proposal is unsuccessful. Since Gettier cases need not involve any inference, there are possible cases of justified true belief in which the subject fails to have knowledge although condition (iv) is met. Suppose, for example, that James, who is relaxing on a bench in a park, observes a dog that, about 8 yards away from him, is chewing on a bone. So he believes (5) There is a dog over there. Suppose further that what he takes to be a dog is actually a robot dog so perfect that, by vision alone, it could not be distinguished from an actual dog. James does not know that such robot dogs exist. But in fact a Japanese toy manufacturer has recently developed them, and what James sees is a prototype that is used for testing the public's response. Given these assumptions, (5) is of course false. But suppose further that just a few feet away from the robot dog, there is a real dog. Sitting behind a bush, it is concealed from James's view. Given this further assumption, James's belief is true. So once again, what we have before us is a justified true belief that doesn't qualify as an instance of knowledge. Arguably, this belief is directly justified by a visual experience; it is not inferred from any falsehood. But if (5) is indeed a non-inferential belief, then the JTB account, even if supplemented with (iv), gives us the wrong result that James knows (5).

Another case illustrating that clause (iv) won't do the job is the well-known Barn County case. Suppose there is a county in the Midwest with the following peculiar feature. The landscape next to the road leading through that county is peppered with barn-facades: structures that from the road look exactly like barns. Observation from any other viewpoint would immediately reveal these structures to be fakes: devices erected for the purpose of fooling unsuspecting motorists into believing in the presence of barns. Suppose Henry is driving along the road that leads through Barn County. Naturally, he will on numerous occasions form a false belief in the presence of a barn. Since Henry has no reason to suspect that he is the victim of organized deception, these beliefs are justified. Now suppose further that, on one of those occasions when he believes there is a barn over there, he happens to be looking at the one and only real barn in the county. This time, his belief is justified and true. But its truth is the result of luck, and thus his belief is not an instance of knowledge. Yet condition (iv) is met in this case. His belief is clearly not the result of any inference from a falsehood. Once again, we see that (iv) does not succeed as a solution to the Gettier problem.

Above, we noted that the role of the justification condition is to ensure that the analysis does not mistakenly identify as knowledge a belief that is true because of epistemic luck. The lesson to be learned from the Gettier problem is that the justification condition by itself cannot ensure this. Even a justified belief, understood as a belief based on good evidence, can be true because of luck. Thus if a JTB analysis of knowledge is to rule out the full range of cases of epistemic luck, it must be amended with a suitable fourth condition, a condition that succeeds in preventing justified true belief from being "gettiered." We will refer to an analysis of this type as a "JTB+" conception of knowledge. The analysis of knowledge may be approached by asking the following question: What turns a true belief into knowledge? An uncontroversial answer to this question would be: the sort of thing that effectively prevents a belief from being true as a result of epistemic luck. Controversy begins as soon as this formula is turned into a substantive proposal. According to evidentialism, which endorses the JTB+ conception of knowledge, the combination of two things accomplishes this goal: evidentialist justification plus degettierization (a condition that prevents a true and justified belief from being "gettiered"). However, according to an alternative approach that has in the last three decades become increasingly popular, what stands in the way of epistemic luck - what turns a true belief into knowledge - is the reliability of the cognitive process that produced the belief. Consider how we acquire knowledge of our physical environment: we do so through sense experience. Sense experiential processes are, at least under normal conditions, highly reliable. There is nothing accidental about the truth of the beliefs these processes produce. Thus beliefs produced by sense experience, if true, should qualify as instances of knowledge. 
An analogous point could be made for other reliable cognitive processes, such as introspection, memory, and rational intuition. We might, therefore, say that what turns true belief into knowledge is the reliability of our cognitive processes.

This approach -- reliabilism, as it is usually called -- can be carried out in two different ways. First, there is reliabilism as a theory of justification (J-reliabilism). Here the idea is that while justification is indeed necessary for knowledge, its nature is not evidentialist but reliabilist. The most basic version of this view - let's call it "simple" J-reliabilism - goes as follows: S is justified in believing that p if, and only if, S's belief that p was produced by a reliable cognitive process. Second, there is reliabilism as a theory of knowledge (K-reliabilism). According to this approach, knowledge does not require justification. Rather, what it requires (in addition to truth) is reliable belief formation. Fred Dretske defends this view as follows: Those who think knowledge required something other than, or at least more than, reliably produced true belief, something (usually) in the way of justification for the belief that one's reliably produced beliefs are being reliably produced, have, it seems to me, an obligation to say what benefits this justification is supposed to confer . . . Who needs it, and why? If an animal inherits a perfectly reliable belief-generating mechanism, and it also inherits a disposition, everything being equal, to act on the basis of the belief so generated, what additional benefits are conferred by a justification that the beliefs are being produced in some reliable way? If there are no additional benefits, what good is this justification? Why should we insist that no one can have knowledge without it?

Further below we will discuss how advocates of the JTB approach might answer Dretske's question. In the meantime, let us focus a bit more on Dretske's account of knowledge. According to Dretske, reliable cognitive processes convey information, and thus endow not only humans, but (nonhuman) animals as well, with knowledge. He writes: I wanted a characterization that would at least allow for the possibility that animals (a frog, rat, ape, or my dog) could know things without my having to suppose them capable of the more sophisticated intellectual operations involved in traditional analyses of knowledge.

Attributing knowledge to animals is certainly in accord with our ordinary practice of using the word ‘knowledge’. Dretske seems right, therefore, when he views the result that animals have knowledge as a desideratum. A second advantage of his theory is, so Dretske claims, that it avoids Gettier problems. He says: Gettier difficulties . . . arise for any account of knowledge that makes knowledge a product of some justificatory relationship (having good evidence, excellent reasons, etc.) that could relate one to something false . . . This is [a] problem for justificational accounts. The problem is evaded in the information-theoretic model, because one can get into an appropriate justificational relationship to something false, but one cannot get into an appropriate informational relationship to something false.

Solving the Gettier problem is, however, a bit more complex than this passage suggests. Consider again the case of Henry in Barn County. He sees a real barn in front of him, yet does not know that there is a barn nearby. Exactly how can Dretske's theory explain Henry's failure to know? After all, he perceives an actual barn, and so does not stand in any informational relationship to something false. So if perception, on account of its reliability, normally conveys information, it should do so in this case as well. Alas, it doesn't. Clearly, if a theory like Dretske's is to handle this case and others like it, it must be supplemented with a clause that makes it immune to the case of the fake barns, and other examples like it.

Evidentialists reject both J-reliabilism and K-reliabilism. They reject J-reliabilism because they advocate internalism: they take justification to be something that is "internal" to the subject. J-reliabilists disagree; they take justification to be something that is "external" to the subject. In order to pin down what the "internality" of justification is supposed to be, let us turn to Roderick Chisholm, one of the chief advocates of internalism. In the third edition of The Theory of Knowledge, Chisholm says the following: "If a person S is internally justified in believing a certain thing, then this may be something he can know just by reflecting upon his own state of mind." In the second edition of this book, he characterizes internalism in a somewhat different way: "We presuppose . . . that the things we know are justified for us in the following sense: we can know what it is, on any occasion, that constitutes our grounds, or reasons, or evidence for thinking that we know."

These passages differ in the following respect: in the first Chisholm is concerned with the property of justification (a belief's being justified); in the second, with justifiers: the things that make justified beliefs justified. What is common to both passages is the constraint Chisholm imposes. In the first passage, Chisholm characterizes justification as something that is recognizable on reflection, and in the second as the sort of thing that can be known on any occasion. Arguably, this is just a terminological difference. It would not be implausible to claim that what can be recognized through reflection is something that can be recognized on any occasion, and what can be recognized on any occasion is something that can be recognized through reflection. Although this point deserves further examination, let us here simply assume that recognizability on reflection and recognizability on any occasion amount to the same thing. In what follows, we will refer to it as direct recognizability. As noted, in the first passage Chisholm imposes the direct recognizability constraint on justification, in the second on justifiers. Does this amount to a substantive difference? If the direct recognizability of justifiers implies the direct recognizability of justification, and vice versa, then the two passages we considered would indeed just be alternative ways of stating the same point. Whether they really are is debatable, but here we will simply assume that it makes no difference whether internalism is characterized in terms of the direct recognizability of justification, or that of justifiers.

Chisholm, then, defines internalism in terms of how justification (justifiers) is (are) knowable, that is, in terms of direct recognizability, or epistemic accessibility. This type of internalism may therefore be called accessibility internalism. Alternatively, internalism can be defined in terms of limiting justifiers to mental states. According to this second way of defining internalism, justifiers must be internal to the mind, i.e., must be mental events or states. Internalism thus defined could be referred to as mental state internalism. Whether accessibility internalism and mental state internalism are genuine alternatives depends on whether mental states (and events) are directly recognizable. If they are, what appear to be genuine alternatives might in fact not be. Since here we cannot go into the details of this issue, we will cut this matter short and simply define internalism, as suggested by Chisholm, in terms of direct recognizability, while acknowledging that it might be preferable to define it by restricting justifiers to mental states. We will refer to internalism as defined here as "J-internalism," since it imposes the direct recognizability constraint not on knowledge, but on justification.

J-Internalism: Justification is directly recognizable. At any time t at which S holds a justified belief B, S is in a position to know at t that B is justified.

J-internalism is to be contrasted with J-externalism, which is simply its negation.

J-Externalism: Justification is not directly recognizable. It is not the case that at any time t at which S holds a justified belief B, S is in a position to know at t that B is justified. (There are times at which S holds a justified belief B but is not in a position to know that B is justified.)
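The contrast can also be put schematically. The notation below is ours, not Chisholm's: J_t(S, B) abbreviates "S's belief B is justified at t," and P_t(S, φ) abbreviates "S is in a position at t to know that φ."

```latex
% J-Internalism: justification is directly recognizable.
\forall t\, \forall B \,\bigl( J_t(S,B) \;\rightarrow\; P_t(S,\ \text{``}B\text{ is justified''}) \bigr)

% J-Externalism: the negation of J-Internalism.
\exists t\, \exists B \,\bigl( J_t(S,B) \;\wedge\; \neg P_t(S,\ \text{``}B\text{ is justified''}) \bigr)
```

Since J-externalism is the bare negation of J-internalism, a single case, one time and one justified belief whose justification S cannot recognize, suffices to establish it.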

Next, we will discuss what consequences we can derive from J-internalism. To begin with, we can derive the result that simple J-reliabilism is an externalist theory. According to simple J-reliabilism, reliability by itself - without the subject's having any evidence indicating its presence - is sufficient for justification. So simple J-reliabilism allows for possible cases of the following kind. To illustrate this point, let us consider a familiar example due to Laurence BonJour. Suppose Norman is a perfectly reliable clairvoyant. At time t, his clairvoyance causes Norman to form the belief that the president is presently in New York. However, Norman has no evidence whatever indicating that he is clairvoyant. Nor has he at t any way of recognizing that his belief was caused by his clairvoyance. Norman, then, cannot at t recognize that his belief is justified. So simple J-reliabilism implies that Norman's belief is justified at t although Norman cannot recognize at t that his belief is justified. Simple J-reliabilism, therefore, is a version of J-externalism.

Second, J-internalism allows us to derive the consequence - as it should - that evidentialism is an internalist theory. The question of what a person's evidence consists of is of course not uncontroversial. Nor is it uncontroversial what kind of cognitive access a subject has to her evidence. However, it would certainly not be without a good deal of initial plausibility, at least if one looks at the matter from the point of view of the evidentialist, to make the following two assumptions. First, a subject's evidence consists of both her beliefs and experiential states (such as sensory, introspective, memorial, and intuitional states). Second, a subject's beliefs and experiential states are directly recognizable to her. And if we now add the further assumption (mentioned above) that the direct recognizability of justifiers implies the direct recognizability of justification, then we get the result that evidentialism is a form of J-internalism. Let us display the argument in detail:

(1) S's evidence consists of S's beliefs and experiential states.
(2) S's beliefs and experiential states are directly recognizable to S.
(3) Therefore, S's justifiers are directly recognizable to S. (from 1 and 2)
(4) If S's justifiers are directly recognizable to S, then S's justification is directly recognizable to S.
(5) Therefore, S's justification is directly recognizable to S. (from 3 and 4)

The crucial premises in this argument are (2) and (4). Surely, evidentialists would be reluctant to call "evidence" something that is not directly recognizable to a subject. So (2) would appear to be a premise that evidentialists are likely to endorse. And (4) expresses no more than one part of what we already assumed: that the direct recognizability of justifiers implies the direct recognizability of justification, and vice versa. Of course, this assumption might be challenged. What seems safe to say, therefore, is the conditional point that, if (2) and (4) capture what is essential to evidentialism, then evidentialism implies internalism about justification. As mentioned, the evidentialists also reject K-reliabilism. They do so because, pace Dretske, they think that internal justification -- justification in the form of having adequate evidence -- is necessary for knowledge. In other words, they deny that a belief's origin in a reliable cognitive process is sufficient for the belief's being an instance of knowledge. Let us refer to this position as internalism about knowledge, or K-internalism, and let us define it using the concept of internal justification: the kind of justification that meets the direct recognizability constraint.

K-Internalism: Internal justification is a necessary condition of knowledge. A belief's origin in a reliable cognitive process is not sufficient for its being an instance of knowledge.

K-externalism is simply the negation of K-internalism:

K-Externalism: Internal justification is not a necessary condition of knowledge. A belief's origin in a reliable cognitive process is sufficient for its being an instance of knowledge. Consequently, there are cases of knowledge without internal justification.

So far, we have merely concerned ourselves with what internalists and externalists disagree about with regard to both justification and knowledge. In the next two sections, we will examine what reasons internalists and externalists can cite in support of their respective views.

To begin with, one straightforward argument for J-internalism proceeds from evidentialism as a premise. For as we have seen above, there is a plausible construal of evidentialism that proceeds from the direct recognizability of a person's evidence to the direct recognizability of justification. So philosophers who are attracted to evidentialism are likely to be attracted to J-internalism as well. Furthermore, as was already mentioned at the end of the previous section, evidentialism is not only a view about the nature of justification, but also a view about the nature of knowledge. And what evidentialists would say about the nature of knowledge is this: having justification -- in the form of having adequate evidence -- is a necessary condition of knowledge. But such justification is plausibly construed as internal justification, as satisfying the direct recognizability constraint that J-internalism imposes. A second argument for J-internalism appeals to the deontological conception of justification: (1) S is justified in believing that p iff in believing that p, S does not violate his epistemic duty. The concept of duty employed here must not be confused with ethical or prudential duty. The type of duty in question is specifically epistemic. Exactly what epistemic duties are, however, is a matter of controversy. The basic idea is that epistemic duties are those that arise in the pursuit of truth. Thus we might express (1) alternatively as follows: S is justified in believing that p iff in believing that p, S does not fail to do what he ought to do in the pursuit of truth. Of course, this way of putting things leads us directly to a further question: in the pursuit of truth, exactly what is it that one ought to do? Evidentialists would say: it is to believe what, and only what, one has evidence for. 
Now if that is one's epistemic duty, then those who take justification to be deontological can employ the argument considered above (which proceeds from evidentialism to J-internalism) to derive the conclusion that deontological justification is internal justification. So the combination of deontology about justification with evidentialism allows for a pretty straightforward derivation of J-internalism. It has also been suggested that there is a more direct argument from deontology to J-internalism, an argument that does not depend on evidentialism as a premise. It derives the direct recognizability of justification from the premise that what determines epistemic duty is directly recognizable. Therefore:

(1) S's belief B is justified iff, in holding B, S does not violate his epistemic duty.
(2) What makes S's beliefs justified is what determines S's epistemic duty.
(3) What determines S's epistemic duty is directly recognizable to S.
(4) Therefore, S's justifiers are directly recognizable to S. (from 2 and 3)
(5) If S's justifiers are directly recognizable to S, then S's justification is directly recognizable to S.
(6) Therefore, S's justification is directly recognizable to S. (from 4 and 5)

(2) follows directly from the deontological conception of justification. (5) is nothing new; we have already assumed it above. The argument's main premise is of course (3). Certainly (3) is not obviously implausible. Nevertheless, it is open to criticism, as is (5), which we merely assumed. Obviously, then, the argument is not uncontroversial. Nevertheless, it seems fair to say that it represents a straightforward and defensible derivation of internalism from deontology.

Third, internalism (J or K) can be defended indirectly on the basis of objections to particular externalist accounts of justification or knowledge. Since reliabilism is the dominant externalist approach, let us briefly consider a couple of internalist objections to reliabilism. First, recall BonJour's example of Norman: a subject who unwittingly possesses a reliable faculty of clairvoyance. This faculty produces the belief that the president is in New York, a belief that is reliably produced, and thus according to simple J-reliabilism justified. But is that belief really justified? Internalists would say that Norman's belief is actually unjustified, and thus not an instance of knowledge. They would say, therefore, that a belief's being reliably produced is not sufficient for making it justified, and that a true belief's being reliably produced is not sufficient for making it an instance of knowledge.

Second, internalists would say that reliable belief production is not even necessary for knowledge. Suppose you are a victim of Descartes's evil demon. You believe that you have a body and that there is a world of physical things, but in fact neither of these beliefs is true. There is no physical world at all. Since your perceptual beliefs are not reliably produced under these circumstances, simple J-reliabilism implies that they are unjustified. To internalists, this is an intuitively implausible result. They would take your beliefs to be (by and large) justified because they are (by and large) based on adequate evidence or good reasons. Hence they would reject the claim that being produced by reliable faculties is a necessary condition of epistemic justification.

One reason for externalism lies in the attraction of "philosophical naturalism." According to Gilbert Harman, this view, when applied to ethics, "is the doctrine that moral facts are facts of nature. Naturalism as a general view is the sensible thesis that all facts are facts of nature." What naturalists in ethics want, according to Harman, is to be able to locate value, justice, right, wrong, and so forth in the world in the way that tables, colours, genes, temperatures, and so on can be located in the world. According to this conception of naturalism, a naturalist in epistemology wants to be able to locate such things as knowledge, certainty, epistemic justification, and probability "in the world in the way that tables, colours, genes, temperatures, and so on can be located in the world." How, though, are naturalists to accomplish this? According to one answer to this question, they can accomplish this by identifying the non-epistemic grounds on which epistemic phenomena supervene. Alvin Goldman describes this desideratum as follows: "The term 'justified,' I presume, is an evaluative term, a term of appraisal. Any correct definition or synonym of it would also feature evaluative terms. I assume that such definitions or synonyms might be given, but I am not interested in them. I want a set of substantive conditions that specify when a belief is justified . . . I want a theory of justified belief to specify in non-epistemic terms when a belief is justified."

However, internalists need not deny that epistemic phenomena supervene on non-epistemic grounds, and that it is the task of epistemology to reveal these grounds. That is, internalists might as well agree that what a theory of justification ought to accomplish is an account of the substantive conditions of justification that is carried out in non-epistemic terms. It is doubtful, therefore, that the goal of locating epistemic value in the natural world establishes a link between philosophical naturalism and externalism.

According to a second answer to the question of how epistemic value can be located in the natural world, the way to do that is to employ the methods of the natural sciences. Appealing to this methodological constraint, externalists might argue that, because the study of justification and knowledge is an empirical study, justification and knowledge cannot be what internalists take them to be, but rather must be identified with reliable belief production: a phenomenon that can be studied empirically. It is far from clear, however, that the fundamental questions of epistemology can be answered by employing the methods of natural science. If they cannot be answered that way, then epistemology cannot be done without employing, at least to some extent, the a priori methods of the armchair philosopher. But then the universal scope of the methodological constraint in question remains unmotivated, and no compelling reason remains to think that justification and knowledge are the sort of thing that can only be studied empirically, and thus cannot be what internalists take them to be.

A second reason for externalism (more specifically, J-externalism) has to do with the connection between justification and truth. Internalists conceive of a justified belief as a belief that, relative to the subject's evidence or reasons, is likely to be true. However, such likelihood of truth is compatible with the belief's actual falsity. Indeed, such likelihood of truth is compatible with the evil demon scenario in which the vast majority of your empirical beliefs, although justified, are in fact false. Externalists consider this connection between justification and truth too thin, and thus demand a stronger kind of likelihood of truth. Reliability is usually taken to fill the bill. William Alston, for example, would concur that, without a reliability constraint, the connection between justification and truth becomes too tenuous. He argues that only reliably formed beliefs can be justified, and defines a reliable belief-producing mechanism as one that "would yield mostly true beliefs in a sufficiently large and varied run of employments in situations of the sorts we typically encounter." Suppose we endorse this conception of justification. Let's suppose further that most of our beliefs are justified. It then follows that most of the beliefs we form in ordinary circumstances would have to be true most of the time. Such a belief system could still be brought about by an evil demon. However, it would not be a belief system consisting of mostly false beliefs, and thus the evil demon responsible for it wouldn't be quite as evil as he could be. So what Alston-type justification rules out is this: a belief system of mostly justified beliefs that is generated by an evil demon who sees to it that most of our beliefs are false. This, then, is the benefit we can secure when, as externalists suggest, we make reliability a necessary element of justification.

Internalists would object that a strong link between justification and truth runs afoul of the rather forceful intuition that the beliefs of an evil demon victim are justified although they are mostly false. In response, externalists might concede that the sort of justification internalists have in mind and attribute to evil demon victims is a legitimate concept, but question the epistemological relevance of that concept. Of what epistemic value (of what value to the acquisition of knowledge), they might ask, is internal justification if it is the sort of thing an evil demon victim can enjoy, a person whose belief system is massively marred by falsehood? Internalists would reply that internal justification should not be expected to supply us with a guarantee of truth, and that its value derives from the fact that internal justification is necessary for knowledge.

A third reason for externalism has to do with Dretske's question about justification: "Who needs it, and why?" Dretske would say, of course, that nobody needs it (for the acquisition of knowledge, that is) because reliable belief production is sufficient for turning true belief into knowledge. With this, internalists disagree. They take the existence of examples like BonJour's clairvoyant Norman as a decisive reason to reject this sufficiency claim. According to them, Norman's belief about the whereabouts of the president, although reliably formed, is clearly unjustified, and thus not an instance of knowledge. Internalists, therefore, would answer Dretske's question thus: Those who wish to enjoy knowledge need justification, and they need it because one does not know that p unless one has adequate evidence or undefeated reasons for believing that p.

In reply to this, Dretske might repeat a point - a point that amounts to a fourth reason for externalism - from the passage we considered above: he takes animals such as frogs, rats, apes, and dogs to have knowledge. This is surely in line with the way we ordinarily use the concept of knowledge. The owner of a pet who does not attribute knowledge to it would be hard to find. But are animals capable of the sophisticated mental operations required of beings who enjoy the sort of justification internalists have in mind? It would seem not. At this point, the disagreement between internalists and externalists appears unresolvable. On the one hand, there are examples like BonJour's clairvoyant Norman, examples that strongly suggest that internal justification is necessary for knowledge. On the other hand, there is Dretske's point that knowledge is enjoyed by not only humans but animals as well. And this strongly suggests that internal justification is not necessary for knowledge.

K-internalism and K-externalism, then, are supported by conflicting intuitions. On the one hand, there is the thought that in order to know, one must have justification in the form of having adequate evidence or reasons. On the other hand, there is the thought that knowledge, resulting from reliable cognitive faculties, is not reserved to humans only. Both of these thoughts are inherently plausible. However, if it is indeed true that animals are not the sort of beings that can have internally justified or unjustified beliefs, these intuitions cannot be reconciled. If they cannot, then we get as a result of this irreconcilability two alternative and competing analyses of knowledge: one internalist, the other externalist. Let us state a gloss of the respective analyses. In these analyses, the term "internal justification" stands for the kind of concept internalists have in mind, and the term "external justification" for the kind of concept externalists employ.

IK: S knows that p iff p is true, S believes that p, and S's belief is both externally and internally justified.

EK: S knows that p iff p is true, S believes that p, and S's belief is externally justified.

If the internalism/externalism controversy is seen as essentially a controversy over the nature of knowledge, the debate over J-internalism vs. J-externalism would appear to be a case of talking past each other. J-internalists and J-externalists simply intend justification to achieve different things. They each operate with a different concept of justification. J-externalists take justification to be the sort of thing that turns true belief into knowledge, and view the Gettier problem merely as the problem of adding the right sort of bells and whistles to the justification condition. J-internalists, on the other hand, cannot view degettierization as something that can, in the form of a suitable clause, be tacked on to the justification condition, for degettierization is an external matter. Rather, internalists take justification to be the sort of thing that turns true and degettiered belief into knowledge. 
Since J-internalists and J-externalists assign different roles to justification, what they ultimately disagree about is not the nature of justification, but the sort of thing in relation to which the theoretical role of epistemic justification is fixed: knowledge. Internalists assign justification the role of turning true and degettiered belief into knowledge because they take internal justification to be necessary for knowledge. In contrast, externalists assign a different role - that of turning true belief into knowledge - to justification because they think that internal justification is not necessary for knowledge. It is this difference in their respective views on the nature of knowledge that leads to different views on the nature of justification.

Thus we are confronted with a fundamental disagreement about the nature of knowledge. Externalists such as Dretske would say that the desideratum of making knowledge a natural phenomenon that is instantiated equally by humans and animals must trump the demand that knowledge require the possession of justification in the form of adequate evidence. They would have to say, therefore, that Norman, the unwitting clairvoyant, has knowledge just as much as a mouse that knows where to look for the cheese. Internalists would argue the other way around. To them, Norman-type cases establish the necessity of adequate evidence or undefeated reasons. And so they would say that, just as Norman's reliable clairvoyance (by itself, in the absence of evidence) does not give him knowledge, a mouse's reliable cognitive mechanisms do not give it knowledge of where to look for the cheese. Externalists would say that it merely seems to us that Norman lacks knowledge when in fact he has it. Internalists would say that it merely seems to us that animals know when in fact they do not.

Who is right about the nature of knowledge: internalists or externalists? It might be a mistake to expect that there is a decisive argument that settles the dispute one way or the other. Most likely, one reason why the nature of knowledge is a subject matter of philosophy is that in the end its nature remains enigmatic. Nevertheless, the common ground shared by IK and EK should not be overlooked. Both require true belief and external justification. What is contentious is merely the further question of whether knowledge requires internal justification as well.

The traditional formulation of propositional knowledge (in Western philosophy) involves three key components: justification, truth, and belief (JTB). Propositional knowledge is, in this tradition, a justified belief held about a truth. To elaborate, the formulation holds that three conditions are necessary, and jointly sufficient for "knowledge". First, belief: you do not know something unless you also hold it as true in your mind; if you do not believe it, then you do not know it. Second, truth: there can be no knowledge of false propositions; belief in a falsehood is delusion or misapprehension, not knowledge. Third, justification: the belief must be appropriately supported; there must be sufficient evidence for the belief.
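Put as a schema (the predicate letters are our shorthand, not part of the traditional formulation):

```latex
% S knows that p iff p is true, S believes that p,
% and S is justified in believing that p.
K(S,p) \;\leftrightarrow\; p \,\wedge\, B(S,p) \,\wedge\, J(S,p)
```

Each conjunct is claimed to be individually necessary, and the three together jointly sufficient.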

Thus, knowledge is like a three-legged stool which cannot stand when any one leg is removed. Consider lack of belief: it may be true that Alice's twin sister has just been killed in a car accident, and the police officer's report of the fact may be sufficient evidence to warrant belief, but Alice may find herself unable to accept it, and will thus fail to know it. Lack of truth also disqualifies knowledge: the pre-Copernican belief (amply justified at the time) that heavenly bodies moved around a stationary Earth is false, and is thus not knowledge, even if educated persons of the day operated under the misapprehension that it was. Lastly, lack of justification precludes knowledge: if a charlatan fortune-teller informs Alice that she will meet the man of her dreams within a month, then this proposition isn't knowledge for Alice even if she believes it and it actually happens. Knowledge must be properly grounded, and the charlatan's claim had no grounds whatsoever.

This traditional formulation is not without its problems. One could argue, for example, that "knowledge", so defined, is not a very interesting concept: the individual questions of whether a proposition is true, whether a subject believes it, and whether the subject is justified in doing so do not become more interesting when the answers happen to be uniformly affirmative. Or one could argue the pragmatic case that "knowledge" is not a useful concept: it's all very well to ponder whether subject S knows proposition P given a hypothetical situation with specified truths, but what of knowledge in the real world, where determining the truth of P is part of the problem?

More significantly, perhaps, one could argue that JTB is not actually an entirely sufficient account of knowledge; that situations arise in which a justified true belief is not knowledge. Edmund Gettier makes a famously disruptive case for this view in a short paper entitled, "Is Justified True Belief Knowledge?" (originally published in Analysis, 1963, pp. 121-3). Consider the following scenario from that paper. Smith and Jones are candidates for a job, and Smith believes that (a) Jones will get the job, and (b) Jones has ten coins in his pocket. Smith's belief in both these propositions is justified: a company executive has informed him that Jones will be hired, and he's seen the coins in question. Based on these justified beliefs, Smith also believes (quite justifiably) their logical implication: (c) the person who will get the job has ten coins in his pocket.

Events transpire in such a way that Jones does not get the job, despite assurances to the contrary, and the job is offered to Smith instead. As chance would have it, proposition (c) turns out to be true anyway, because Smith also had ten coins in his pocket, although he didn't realise it at the time. Thus, Smith justifiably believed proposition (c), and it turned out to be true, but did he know it? The traditional account says he did, but does this still match our intuitive grasp of what knowledge entails? It seems not.
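The logical structure of the case can be made explicit. Writing a for "Jones will get the job," b for "Jones has ten coins in his pocket," and c for "the person who will get the job has ten coins in his pocket" (the labels are ours), Smith's situation is:

```latex
% Smith's evidence justifies a and b, and c follows from them:
J(S,a), \qquad J(S,b), \qquad (a \wedge b) \vDash c
% Assuming justification is transmitted by recognized deduction:
\therefore\; J(S,c)
% Yet a is false, while c is true for an unrelated reason
% (Smith himself has the ten coins), so Smith has a justified
% true belief in c that intuitively falls short of knowledge.
```

The crucial background assumption, which Gettier states explicitly in the paper, is that deducing a conclusion from justified premises preserves justification even when one of those premises happens to be false.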

One possible way of saving the JTB account from Gettier is to argue that Smith's justification for (c) was undermined, and thus he did not know (c) because his belief was not appropriately justified. Proposition (c) follows logically from (a) and (b) only if they are both true, and it turns out that (a) is false. Proposition (c) can still be true independently of both (a) and (b), as actually transpired, but Smith's grounds for belief in (c) were the truth of (a) and (b). If there is a shortfall in JTB, it is merely that we ought to have mentioned that justification must not be undermined by subsequent events.

This embellishment of JTB salvages it from the given counterexample by denying the presence of justification, but other Gettier-style counterexamples may still prove problematic. More than anything else, this saving measure serves to demonstrate how much wriggle-room exists in the "justification" component, and that makes it a more intrinsically interesting concept (to my mind) than its possible by-product, "knowledge".

These days it would appear that the Special Theory of Relativity is beyond any form of doubt; however, I have a theoretical proof that strongly suggests the theory is fundamentally flawed. Indeed, the proof is so straightforward that it is a wonder so many supposedly acute minds have previously overlooked it. The proof runs as follows: if an observer with velocity v heads towards a beam of light, one would expect the measurable velocity of the light beam to be c + v. However, according to the Special Theory of Relativity, because time slows down and length decreases with velocity, the measured velocity of the beam would still be c. In other words, a change in space and time for the observer slows the new velocity of c + v back down to c again. However, if the observer now heads in the opposite direction with the same speed, one would expect the measurable velocity of the beam, without any relativistic effects, to be c - v. But on this occasion a change in space and time for the observer would have to increase the measured velocity of light: the exact opposite of the c + v case. How could this be if time slows and length decreases with velocity? For the opposite to occur, one would expect time to speed up and length to increase. Both cannot be the case; therefore the speed of light could not remain constant when an observer's velocity changes in either magnitude or direction.

The origin of this scientific red herring lies with the famous (though some may perhaps argue infamous) Michelson-Morley experiment. It was conducted in 1887 by the two Americans after whom it was named, in order to prove or disprove the existence of 'aether', the enigmatic substance thought to be contained in a vacuum, upon which a light wave was able to move. The apparatus consisted of two beams of light meeting at right angles at an interferometer. If the Earth's speed affected either of the velocities of the light beams, then the interference pattern obtained would change. However, it was found that the speed of the Earth about the Sun did not appear to affect the interference pattern in any way, and it was upon this observation that Einstein based his Special Theory of Relativity.

However, even the briefest look at the exact set-up of the apparatus used by Michelson and Morley reveals that the experiment could never have worked anyway. Indeed, the logic supporting it is so flawed that it is a wonder no one appears ever to have noticed. The two light beams that meet at the interferometer first travel away from it and, at equal distances, are reflected back to the same half-silvered glass from which they started. Because each light beam exactly doubles back on itself, whatever the beam gained as a result of the Earth's velocity in one direction it would exactly lose on the way back in the opposite direction, and vice versa. Indeed, the experiment could never have proved or disproved the existence of the aether either.
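The textbook (pre-relativistic) analysis of the two arms can be sketched in a few lines, for reference; the arm length and orbital speed below are approximate illustrative figures. In that standard account the gain and loss on the out-and-back leg do not cancel exactly, which is precisely the point the paragraph above disputes:

```python
import math

# Classical round-trip times for the two Michelson-Morley arms, as usually
# quoted in textbooks. L is the effective arm length, v the apparatus speed
# through the supposed aether, c the speed of light. Figures are approximate.

c = 299_792_458.0
v = 29_780.0   # Earth's orbital speed in m/s (approximate)
L = 11.0       # effective arm length of the 1887 apparatus in m (approximate)

t_parallel = L / (c - v) + L / (c + v)           # = 2*L*c / (c**2 - v**2)
t_perpendicular = 2 * L / math.sqrt(c**2 - v**2)

# In this account the parallel-arm round trip is slightly the longer:
print(t_parallel - t_perpendicular)
```

The difference is tiny (of order L*v²/c³, well under a femtosecond) but nonzero in the classical account, which is why a fringe shift was expected.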

Since the proof stated above clearly shows that the Special Theory of Relativity could never work, it must also be the case that a large part of the General Theory of Relativity is equally unsound, since it is entirely based upon the Special Theory. As a consequence, it would therefore appear that a significant part of twentieth-century physics needs to be rethought, since the Theory of Relativity is intimately interwoven into it. Indeed, Einstein's theory is so well established these days that it is even included in many physics textbooks.

"Proof that E could Never Equal mc²" which questions both the theoretical and mathematical basis of the famous equation of mass-energy equivalence, E = mc².

First it is impossible to picture empty space. All our efforts to imagine pure space from which the changing images of material objects are excluded can only result in a representation in which highly-coloured surfaces, for instance, are replaced by lines of slight colouration, and if we continued in this direction to the end, everything would disappear and end in nothing. Hence arises the irreducible relativity of space.

Whoever speaks of absolute space uses a word devoid of meaning. This is a truth that has been long proclaimed by all who have reflected on the question, but one which we are too often inclined to forget.

If I am at a definite point in Paris, at the Place du Panthéon, for instance, and I say, "I will come back here tomorrow;" if I am asked, "Do you mean that you will come back to the same point in space?" I should be tempted to answer yes. Yet I should be wrong, since between now and tomorrow the earth will have moved, carrying with it the Place du Panthéon, which will have travelled more than a million miles. And if I wished to speak more accurately, I should gain nothing, since this million of miles has been covered by our globe in its motion in relation to the sun, and the sun in its turn moves in relation to the Milky Way, and the Milky Way itself is no doubt in motion without our being able to recognise its velocity. So that we are, and shall always be, completely ignorant how far the Place du Panthéon moves in a day. In fact, what I meant to say was, "Tomorrow I shall see once more the dome and pediment of the Panthéon," and if there was no Panthéon my sentence would have no meaning and space would disappear.

This is one of the most commonplace forms of the principle of the relativity of space, but there is another on which Delbeuf has laid particular stress. Suppose that in one night all the dimensions of the universe became a thousand times larger. The world will remain similar to itself, if we give the word similitude the meaning it has in the third book of Euclid. Only, what was formerly a metre long will now measure a kilometre, and what was a millimetre long will become a metre. The bed in which I went to sleep and my body itself will have grown in the same proportion. When I awake in the morning what will be my feeling in face of such an astonishing transformation? Well, I shall not notice anything at all. The most exact measures will be incapable of revealing anything of this tremendous change, since the yard-measures I shall use will have varied in exactly the same proportions as the objects I shall attempt to measure. In reality the change only exists for those who argue as if space were absolute. If I have argued for a moment as they do, it was only in order to make it clearer that their view implies a contradiction. In reality it would be better to say that as space is relative, nothing at all has happened, and that it is for that reason that we have noticed nothing.
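Delbeuf's overnight magnification can be put in miniature: a measurement is a ratio of object to measuring rod, and scaling both by the same factor leaves every ratio untouched. A toy sketch (the lengths and the factor below are arbitrary illustrations):

```python
# Scale every length in a "universe" -- objects and measuring rods alike --
# by the same factor k. Every measurement, being a ratio of object to rod,
# comes out exactly as before, so no experiment can reveal the change.

def measure(length: float, rod: float) -> float:
    """Length expressed in units of the measuring rod."""
    return length / rod

bed, body, rod = 2.0, 1.8, 1.0   # arbitrary lengths, in metres
k = 1000.0                        # the overnight magnification

before = (measure(bed, rod), measure(body, rod))
after = (measure(bed * k, rod * k), measure(body * k, rod * k))

print(before == after)  # True: the most exact measures reveal nothing
```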

Have we any right, therefore, to say that we know the distance between two points? No, since that distance could undergo enormous variations without our being able to perceive it, provided other distances varied in the same proportions. We saw just now that when I say I shall be here tomorrow, that does not mean that tomorrow I shall be at the point in space where I am today, but that tomorrow I shall be at the same distance from the Panthéon as I am today. And already this statement is not sufficient, and I ought to say that tomorrow and today my distance from the Panthéon will be equal to the same number of times the length of my body.

But that is not all. I imagined the dimensions of the world changing, but at least the world remaining always similar to itself. We can go much further than that, and one of the most surprising theories of modern physicists will furnish the occasion. According to a hypothesis of Lorentz and Fitzgerald, all bodies carried forward in the earth's motion undergo a deformation. This deformation is, in truth, very slight, since all dimensions parallel with the earth's motion are diminished by a hundred-millionth, while dimensions perpendicular to this motion are not altered. But it matters little that it is slight; it is enough that it should exist for the conclusion I am soon going to draw from it. Besides, though I said that it is slight, I really know nothing about it. I have myself fallen a victim to the tenacious illusion that makes us believe that we think of an absolute space. I was thinking of the earth's motion on its elliptical orbit round the sun, and I allowed 18 miles a second for its velocity. But its true velocity (I mean this time, not its absolute velocity, which has no sense, but its velocity in relation to the ether), this I do not know and have no means of knowing. It is, perhaps, 10 or 100 times as high, and then the deformation will be 100 or 10,000 times as great.
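The "hundred-millionth" figure can be checked in a few lines. Assuming the usual contraction factor sqrt(1 - v²/c²) and the 18-miles-a-second (about 30 km/s) orbital speed quoted above:

```python
import math

# Order-of-magnitude check on the contraction quoted in the passage. The
# parallel dimension shrinks by the factor sqrt(1 - v**2/c**2), which for
# small v is roughly 1 - v**2/(2*c**2).

c = 299_792_458.0
v = 29_780.0  # about 18 miles a second, Earth's orbital speed, in m/s

beta2 = (v / c) ** 2                # about 1e-8: the "hundred-millionth"
shrink = 1 - math.sqrt(1 - beta2)   # fractional shortening, about beta2 / 2

print(beta2, shrink)
```

As the passage notes, a tenfold higher velocity relative to the ether would make beta2, and hence the deformation, a hundred times greater.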

It is evident that we cannot demonstrate this deformation. Take a cube with sides a yard long. It is deformed on account of the earth's velocity; one of its sides, that parallel with the motion, becomes smaller, the others do not vary. If I wish to assure myself of this with the help of a yard-measure, I shall measure first one of the sides perpendicular to the motion, and satisfy myself that my measure fits this side exactly; and indeed neither one nor other of these lengths is altered, since they are both perpendicular to the motion. I then wish to measure the other side, that parallel with the motion; for this purpose I change the position of my measure, and turn it so as to apply it to this side. But the yard-measure, having changed its direction and having become parallel with the motion, has in its turn undergone the deformation, so that, though the side is no longer a yard long, it will still fit it exactly, and I shall be aware of nothing.

What, then, I shall be asked, is the use of the hypothesis of Lorentz and Fitzgerald if no experiment can enable us to verify it? The fact is that my statement has been incomplete. I have only spoken of measurements that can be made with a yard-measure, but we can also measure a distance by the time that light takes to traverse it, on condition that we admit that the velocity of light is constant, and independent of its direction. Lorentz could have accounted for the facts by supposing that the velocity of light is greater in the direction of the earth's motion than in the perpendicular direction. He preferred to admit that the velocity is the same in the two directions, but that bodies are smaller in the former than in the latter. If the surfaces of the waves of light had undergone the same deformations as material bodies, we should never have perceived the Lorentz-Fitzgerald deformation.

In the one case as in the other, there can be no question of absolute magnitude, but of the measurement of that magnitude by means of some instrument. This instrument may be a yard-measure or the path traversed by light. It is only the relation of the magnitude to the instrument that we measure, and if this relation is altered, we have no means of knowing whether it is the magnitude or the instrument that has changed.

But what I wish to make clear is, that in this deformation the world has not remained similar to itself. Squares have become rectangles or parallelograms, circles ellipses, and spheres ellipsoids. And yet we have no means of knowing whether this deformation is real.

It is clear that we might go much further. Instead of the Lorentz-Fitzgerald deformation, with its extremely simple laws, we might imagine a deformation of any kind whatever; bodies might be deformed in accordance with any laws, as complicated as we liked, and we should not perceive it, provided all bodies without exception were deformed in accordance with the same laws. When I say all bodies without exception, I include, of course, our own bodies and the rays of light emanating from the different objects.

If we look at the world in one of those mirrors of complicated form which deform objects in an odd way, the mutual relations of the different parts of the world are not altered; if, in fact, two real objects touch, their images likewise appear to touch. In truth, when we look in such a mirror we readily perceive the deformation but it is because the real world exists beside its deformed image. And even if this real world were hidden from us, there is something which cannot be hidden, and that is ourselves. We cannot help seeing, or at least feeling, our body and our members which have not been deformed, and continue to act as measuring instruments. But if we imagine our body itself deformed, and in the same way as if it were seen in the mirror, these measuring instruments will fail us in their turn, and the deformation will no longer be able to be ascertained.

Imagine, in the same way, two universes which are the image one of the other. With each object P in the universe A, there corresponds, in the universe B, an object P1 which is its image. The co-ordinates of this image P1 are determinate functions of those of the object P; moreover, these functions may be of any kind whatever - I assume only that they are chosen once for all. Between the position of P and that of P1 there is a constant relation; it matters little what that relation may be, it is enough that it should be constant. Well, these two universes will be indistinguishable. I mean to say that the former will be for its inhabitants what the second is for its own. This would be true so long as the two universes remained foreign to one another. Suppose we are inhabitants of the universe A; we have constructed our science and particularly our geometry. During this time the inhabitants of the universe B have constructed a science, and as their world is the image of ours, their geometry will also be the image of ours, or, more accurately, it will be the same. But if one day a window were to open for us upon the universe B, we should feel contempt for them, and we should say, "These wretched people imagine that they have made a geometry, but what they so name is only a grotesque image of ours; their straight lines are all twisted, their circles are hunchbacked, and their spheres have capricious inequalities." We should have no suspicion that they were saying the same of us, and that no one will ever know which is right.
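The two-universe argument can be put in miniature: pick any fixed mapping f, apply it to every point of universe A to obtain universe B, and internal relations such as "these two objects touch" survive. The mapping and the objects below are arbitrary illustrations, not anything specified in the text:

```python
# Apply a fixed deformation f to every point of universe A. Whatever f is,
# two objects that share a point in A have images that share a point in B,
# so contact relations -- and with them the internal geometry -- carry over.

def f(p):
    """An arbitrary but fixed deformation (assumed one-to-one on its domain)."""
    x, y = p
    return (x**3 + y, y**3)

# Two objects, each a set of boundary points; they "touch" if they share one.
object1 = {(0.0, 0.0), (1.0, 1.0)}
object2 = {(1.0, 1.0), (2.0, 0.5)}

touch_in_A = len(object1 & object2) > 0
image1 = {f(p) for p in object1}
image2 = {f(p) for p in object2}
touch_in_B = len(image1 & image2) > 0

print(touch_in_A, touch_in_B)  # True True: contact survives the deformation
```

From inside universe B there is no experiment on mutual relations alone that reveals the deformation, which is the point of the passage.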

We see in how large a sense we must understand the relativity of space. Space is in reality amorphous, and it is only the things that are in it that give it a form. What are we to think, then, of that direct intuition we have of a straight line or of distance? We have so little the intuition of distance in itself that, in a single night, as we have said, a distance could become a thousand times greater without our being able to perceive it, if all other distances had undergone the same alteration. And in a night the universe B might even be substituted for the universe A without our having any means of knowing it, and then the straight lines of yesterday would have ceased to be straight, and we should not be aware of anything.

One part of space is not by itself and in the absolute sense of the word equal to another part of space, for if it is so for us, it will not be so for the inhabitants of the universe B, and they have precisely as much right to reject our opinion as we have to condemn theirs.

If this intuition of distance, of direction, of the straight line, if, in a word, this direct intuition of space does not exist, whence comes it that we imagine we have it? If this is only an illusion, whence comes it that the illusion is so tenacious? This is what we must examine. There is no direct intuition of magnitude, as we have said, and we can only arrive at the relation of the magnitude to our measuring instruments. Accordingly we could not have constructed space if we had not had an instrument for measuring it. Well, that instrument to which we refer everything, which we use instinctively, is our own body. It is in reference to our own body that we locate exterior objects, and the only spatial relations of these objects that we can picture to ourselves are their relations with our body. It is our body that serves us, so to speak, as a system of axes of co-ordinates.

For instance, at a moment a the presence of an object A is revealed to me by the sense of sight; at another moment b the presence of another object B is revealed by another sense, that, for instance, of hearing or of touch. I judge that this object B occupies the same place as the object A. What does this mean? To begin with, it does not imply that these two objects occupy, at two different moments, the same point in an absolute space, which, even if it existed, would escape our knowledge, since between the moments a and b the solar system has been displaced and we cannot know what this displacement is. It means that these two objects occupy the same relative position in reference to our body.

But what is meant even by this? The impressions that have come to us from these objects have followed absolutely different paths - the optic nerve for the object A, and the acoustic nerve for the object B - they have nothing in common from the qualitative point of view. The representations we can form of these two objects are absolutely heterogeneous and irreducible one to the other. Only I know that, in order to reach the object A, I have only to extend my right arm in a certain way; even though I refrain from doing it, I represent to myself the muscular and other analogous sensations which accompany that extension, and that representation is associated with that of the object A.

Now I know equally that I can reach the object B by extending my right arm in the same way, an extension accompanied by the same train of muscular sensations. And I mean nothing else but this when I say that these two objects occupy the same position.

I know also that I could have reached the object A by another appropriate movement of the left arm, and I represent to myself the muscular sensations that would have accompanied the movement. And by the same movement of the left arm, accompanied by the same sensations, I could equally have reached the object B.

And this is very important, since it is in this way that I could defend myself against the dangers with which the object A or the object B might threaten me. With each of the blows that may strike us, nature has associated one or several parries which enable us to protect ourselves against them. The same parry may answer to several blows. It is thus, for instance, that the same movement of the right arm would have enabled us to defend ourselves at the moment a against the object A, and at the moment b against the object B. Similarly, the same blow may be parried in several ways, and we have said, for instance, that we could reach the object A equally well either by a certain movement of the right arm, or by a certain movement of the left.

All these parries have nothing in common with one another, except that they enable us to avoid the same blow, and it is that, and nothing but that, we mean when we say that they are movements ending in the same point in space. Similarly, these objects, of which we say that they occupy the same point in space, have nothing in common, except that the same parry can enable us to defend ourselves against them.

Or, if we prefer it, let us imagine innumerable telegraph wires, some centripetal and others centrifugal. The centripetal wires warn us of accidents that occur outside, the centrifugal wires have to provide the remedy. Connections are established in such a way that when one of the centripetal wires is traversed by a current, this current acts on a central exchange, and so excites a current in one of the centrifugal wires, and matters are so arranged that several centripetal wires can act on the same centrifugal wire, if the same remedy is applicable to several evils, and that one centripetal wire can disturb several centrifugal wires, either simultaneously or one in default of the other, every time that the same evil can be cured by several remedies.

It is this complex system of associations, it is this distribution board, so to speak, that is our whole geometry, or, if you will, all that is distinctive in our geometry. What we call our intuition of a straight line or of distance is the consciousness we have of these associations and of their imperious character.

Whence this imperious character itself comes, it is easy to understand. The older an association is, the more indestructible it will appear to us. But these associations are not, for the most part, conquests made by the individual, since we see traces of them in the newly-born infant; they are conquests made by the race. The more necessary these conquests were, the more quickly they must have been brought about by natural selection.

On this account those we have been speaking of must have been among the earliest, since without them the defence of the organism would have been impossible. As soon as the cells were no longer merely in juxtaposition, as soon as they were called upon to give mutual assistance to each other, some such mechanism as we have been describing must necessarily have been organised in order that the assistance should meet the danger without miscarrying.

When a frog's head has been cut off, and a drop of acid is placed at some point on its skin, it tries to rub off the acid with the nearest foot; and if that foot is cut off, it removes it with the other foot. Here we have, clearly, that double parry I spoke of just now, making it possible to oppose an evil by a second remedy if the first fails. It is this multiplicity of parries, and the resulting co-ordination, that is space.

We see to what depths of unconsciousness we have to descend to find the first traces of these spatial associations, since the lowest parts of the nervous system alone come into play. Once we have realised this, how can we be astonished at the resistance we oppose to any attempt to dissociate what has been so long associated? Now, it is this very resistance that we call the evidence of the truths of geometry. This evidence is nothing else than the repugnance we feel at breaking with very old habits with which we have always got on very well.

The space thus created is only a small space that does not extend beyond what my arm can reach, and the intervention of memory is necessary to set back its limits. There are points that will always remain out of my reach, whatever effort I may make to stretch out my hand to them. If I were attached to the ground, like a sea-polyp, for instance, which can only extend its tentacles, all these points would be outside space, since the sensations we might experience from the action of bodies placed there would not be associated with the idea of any movement enabling us to reach them, or with any appropriate parry. These sensations would not seem to us to have any spatial character, and we should not attempt to locate them.

But we are not fixed to the ground like the inferior animals. If the enemy is too far off, we can advance upon him first and extend our hand when we are near enough. This is still a parry, but a long-distance parry. Moreover, it is a complex parry, and into the representation we make of it there enter the representation of the muscular sensations caused by the movement of the legs, that of the muscular sensations caused by the final movement of the arm, that of the sensations of the semi-circular canals, etc. Besides, we have to make a representation, not of a complex of simultaneous sensations, but of a complex of successive sensations, following one another in a determined order, and it is for this reason that I said just now that the intervention of memory is necessary.

We must further observe that, to reach the same point, I can approach nearer the object to be attained, in order not to have to extend my hand so far. And how much more might be said? It is not one only, but a thousand parries I can oppose to the same danger. All these parries are formed of sensations that may have nothing in common, and yet we regard them as defining the same point in space, because they can answer to the same danger and are one and all of them associated with the notion of that danger. It is the possibility of parrying the same blow which makes the unity of these different parries, just as it is the possibility of being parried in the same way which makes the unity of the blows of such different kinds that can threaten us from the same point in space. It is this double unity that makes the individuality of each point in space, and in the notion of such a point there is nothing else but this.

The space I pictured in the preceding section, which I might call restricted space, was referred to axes of co-ordinates attached to my body. These axes were fixed, since my body did not move, and it was only my limbs that changed their position. What are the axes to which the extended space is naturally referred - that is to say, the new space I have just defined? We define a point by the succession of movements we require to make to reach it, starting from a certain initial position of the body. The axes are accordingly attached to this initial position of the body.

But the position I call initial may be arbitrarily chosen from among all the positions my body has successively occupied. If a more or less unconscious memory of these successive positions is necessary for the genesis of the notion of space, this memory can go back more or less into the past. Hence results a certain indeterminateness in the very definition of space, and it is precisely this indeterminateness which constitutes its relativity.

Absolute space exists no longer; there is only space relative to a certain initial position of the body. For a conscious being, fixed to the ground like the inferior animals, who would consequently only know restricted space, space would still be relative, since it would be referred to his body, but this being would not be conscious of the relativity, because the axes to which he referred this restricted space would not change. No doubt the rock to which he was chained would not be motionless, since it would be involved in the motion of our planet; for us, consequently, these axes would change every moment, but for him they would not change. We have the faculty of referring our extended space at one time to the position A of our body considered as initial, at another to the position B which it occupied some moments later, which we are free to consider in its turn as initial, and, accordingly, we make unconscious changes in the co-ordinates every moment. This faculty would fail our imaginary being, and, through not having travelled, he would think space absolute. Every moment his system of axes would be imposed on him; this system might change to any extent in reality, but for him it would be always the same, since it would always be the unique system. It is not the same for us who possess, each moment, several systems between which we can choose at will, and on condition of going back by memory more or less into the past.

That is not all, for the restricted space would not be homogeneous. The different points of this space could not be regarded as equivalent, since some could only be reached at the cost of the greatest efforts, while others could be reached with ease. On the contrary, our extended space appears to us homogeneous, and we say that all its points are equivalent. What does this mean?

If we start from a certain position A, we can, starting from that position, effect certain movements M, characterised by a certain complex of muscular sensations. But, starting from another position B, we can execute movements M1 which will be characterised by the same muscular sensations. Then let a be the situation of a certain point in the body, the tip of the forefinger of the right hand, for instance, in the initial position A, and let b be the position of this same forefinger when, starting from that position A, we have executed the movements M. Then let a1 be the situation of the forefinger in the position B, and b1 its situation when, starting from the position B, we have executed the movements M1.

Well, I am in the habit of saying that the points a and b are, in relation to each other, as the points a1 and b1, and that means simply that the two series of movements M and M1 are accompanied by the same muscular sensations. And as I am conscious that, in passing from the position A to the position B, my body has remained capable of the same movements, I know that there is a point in space which is to the point a1 what the point b is to the point a, so that the two points a and a1 are equivalent. It is this that is called the homogeneity of space, and at the same time it is for this reason that space is relative, since its properties remain the same whether they are referred to the axes A or to the axes B. So that the relativity of space and its homogeneity are one and the same thing.

Now, if I wish to pass to the great space, which is no longer to serve for my individual use only, but in which I can lodge the universe, I shall arrive at it by an act of imagination. I shall imagine what a giant would experience who could reach the planets in a few steps, or, if we prefer, what I should feel myself in presence of a world in miniature, in which these planets would be replaced by little balls, while on one of these little balls there would move a Lilliputian that I should call myself. But this act of imagination would be impossible for me if I had not previously constructed my restricted space and my extended space for my personal use.

Now we come to the question why all these spaces have three dimensions. Let us refer to the "distribution board" spoken of above. We have, on the one side, a list of the different possible dangers - let us designate them as A1, A2, etc. - and, on the other side, the list of the different remedies, which I will call in the same way B1, B2, etc. Then we have connections between the contact studs of the first list and those of the second in such a way that when, for instance, the alarm for danger A3 works, it sets in motion or may set in motion the relay corresponding to the parry B4.

Speaking, as above, of centripetal or centrifugal wires, I am afraid that all I have said may be taken, not as a simple comparison, but as a description of the nervous system. Such is not my thought, and that for several reasons. Firstly, I should not presume to pronounce an opinion on the structure of the nervous system which I do not know, while those who have studied it only do so with circumspection. Secondly, because, in spite of my incompetence, I fully realise that this scheme would be far too simple. And lastly, because, on my list of parries, there appear some that are very complex, which may even, in the case of extended space, as we have seen above, consist of several steps followed by a movement of the arm. It is not a question, then, of physical connection between two real conductors, but of psychological association between two series of sensations.

If A1 and A2, for instance, are both of them associated with the parry B1, and if A1 is similarly associated with B2, it will generally be the case that A2 and B2 will also be associated. If this fundamental law were not generally true, there would only be an immense confusion, and there would be nothing that could bear any resemblance to a conception of space or to a geometry. How, indeed, have we defined a point in space? We defined it in two ways: on the one hand, it is the whole of the alarms A which are in connection with the same parry B; on the other, it is the whole of the parries B which are in connection with the same alarm A. If our law were not true, we should be obliged to say that A1 and A2 correspond with the same point, since they are both in connection with B1; but we should be equally obliged to say that they do not correspond with the same point, since A1 would be in connection with B2, and this would not be true of A2 - which would be a contradiction.

But from another aspect, if the law were rigorously and invariably true, space would be quite different from what it is. We should have well-defined categories, among which would be apportioned the alarms A on the one side and the parries B on the other. These categories would be exceedingly numerous, but they would be entirely separated one from the other. Space would be formed of points, very numerous but discrete; it would be discontinuous. There would be no reason for arranging these points in one order rather than another, nor, consequently, for attributing three dimensions to space.

But this is not the case. May I be permitted for a moment to use the language of those who know geometry already? It is necessary that I should do so, since it is the language best understood by those to whom I wish to make myself clear. When I wish to parry the blow, I try to reach the point whence the blow comes, but it is enough if I come fairly near it. Then the parry B1 may answer to A1, and to A2 if the point which corresponds with B1 is sufficiently close both to that which corresponds with A1 and to that which corresponds with A2. But it may happen that the point which corresponds with another parry B2 is near enough to the point corresponding with A1, and not near enough to the point corresponding with A2. And so the parry B2 may answer to A1 and not be able to answer to A2.

For those who do not yet know geometry, this may be translated simply by a modification of the law enunciated above. Then what happens is as follows. Two parries, B1 and B2, are associated with one alarm A1, and with a very great number of alarms that we will place in the same category as A1, and make to correspond with the same point in space. But we may find alarms A2 which are associated with B2 and not with B1, but which on the other hand are associated with B3, which is not associated with A1, and so on in succession, so that we may write the sequence B1, A1, B2, A2, B3, A3, B4, A4, in which each term is associated with the succeeding and preceding terms, but not with those that are several places removed.

It is unnecessary to add that each of the terms of these sequences is not isolated, but forms part of a very numerous category of other alarms or other parries which has the same connections as it, and may be regarded as belonging to the same point in space. Thus the fundamental law, though admitting of exceptions, remains almost always true. Only, in consequence of these exceptions, these categories, instead of being entirely separate, partially encroach upon each other and mutually overlap to a certain extent, so that space becomes continuous.

Furthermore, the order in which these categories must be arranged is no longer arbitrary, and a reference to the preceding sequence will make it clear that B2 must be placed between Am and A2, and, consequently, between B1 and B3, and that it could not be placed, for instance, between B3 and B4.
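Poincaré's argument here is structural: because each parry or alarm in the sequence is associated only with its immediate neighbours, the associations themselves force an ordering on the categories. A minimal sketch (the chain, the `assoc` set, and the `associated` helper are illustrative constructions of mine, not anything given in the text) makes the adjacency claims explicit:

```python
# Model Poincaré's sequence B1, Am, B2, A2, B3, A3, B4, A4 as a chain
# in which each term "answers to" only its immediate neighbours.
chain = ["B1", "Am", "B2", "A2", "B3", "A3", "B4", "A4"]

# Association holds between adjacent terms of the chain ...
assoc = {(chain[i], chain[i + 1]) for i in range(len(chain) - 1)}
# ... and is symmetric.
assoc |= {(y, x) for (x, y) in assoc}

def associated(x, y):
    """True when x and y are directly associated (adjacent in the chain)."""
    return (x, y) in assoc

# B2 answers both to Am and to A2 ...
assert associated("B2", "Am") and associated("B2", "A2")
# ... but B1 does not answer to A2, and B2 does not answer to A3:
assert not associated("B1", "A2")
assert not associated("B2", "A3")
```

Since B2 shares an association with both Am and A2 while B1 and B3 each share one with only one of them, the only consistent linear arrangement places B2 between B1 and B3, which is the ordering claimed above.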

Accordingly there is an order in which our categories range themselves naturally which corresponds with the points in space, and experience teaches us that this order presents itself in the form of a three-circuit distribution board, and it is for this reason that space has three dimensions.

Thus the characteristic property of space, that of having three dimensions, is only a property of our distribution board, a property residing, so to speak, in the human intelligence. The destruction of some of these connections, that is to say, of these associations of ideas, would be sufficient to give us a different distribution board, and that might be enough to endow space with a fourth dimension.

Some people will be astonished at such a result. The exterior world, they think, must surely count for something. If the number of dimensions comes from the way in which we are made, there might be thinking beings living in our world, but made differently from us, who would think that space has more or less than three dimensions. Has not M. de Cyon said that Japanese mice, having only two pairs of semicircular canals, think that space has two dimensions? Then will not this thinking being, if he is capable of constructing a physical system, make a system of two or four dimensions, which yet, in a sense, will be the same as ours, since it will be the description of the same world in another language?

It quite seems, indeed, that it would be possible to translate our physics into the language of geometry of four dimensions. Attempting such a translation would be giving oneself a great deal of trouble for little profit, and I will content myself with mentioning Hertz's mechanics, in which something of the kind may be seen. Yet it seems that the translation would always be less simple than the text, and that it would never lose the appearance of a translation, for the language of three dimensions seems the best suited to the description of our world, even though that description may be made, in case of necessity, in another idiom.

Besides, it is not by chance that our distribution board has been formed. There is a connection between the alarm Am and the parry B1, that is, a property residing in our intelligence. But why is there this connection? It is because the parry B1 enables us effectively to defend ourselves against the danger Am, and that is a fact exterior to us, a property of the exterior world. Our distribution board, then, is only the translation of an assemblage of exterior facts; if it has three dimensions, it is because it has adapted itself to a world having certain properties, and the most important of these properties is that there exist natural solids which are clearly displaced in accordance with the laws we call laws of motion of unvarying solids. If, then, the language of three dimensions is that which enables us most easily to describe our world, we must not be surprised. This language is founded on our distribution board, and it is in order to enable us to live in this world that this board has been established.

I have said that we could conceive of thinking beings, living in our world, whose distribution board would have four dimensions, and who would, consequently, think in hyperspace. It is not certain, however, that such beings, admitting that they were born, would be able to live and defend themselves against the thousand dangers by which they would be assailed.

There is a striking contrast between the roughness of this primitive geometry which is reduced to what I call a distribution board, and the infinite precision of the geometry of geometricians. And yet the latter is the child of the former, but not of it alone; it required to be fertilised by the faculty we have of constructing mathematical concepts, such, for instance, as that of the group. It was necessary to find among these pure concepts the one that was best adapted to this rough space, whose genesis I have tried to explain in the preceding pages, the space which is common to us and the higher animals.

The evidence of certain geometrical postulates is only, as I have said, our unwillingness to give up very old habits. But these postulates are infinitely precise, while the habits have about them something essentially fluid. As soon as we wish to think, we are bound to have infinitely precise postulates, since this is the only means of avoiding contradiction. But among all the possible systems of postulates, there are some that we shall be unwilling to choose, because they do not accord sufficiently with our habits. However fluid and elastic these may be, they have a limit of elasticity.

It will be seen that though geometry is not an experimental science, it is a science born in connection with experience; that we have created the space it studies, but adapting it to the world in which we live. We have chosen the most convenient space, but experience guided our choice. As the choice was unconscious, it appears to be imposed upon us. Some say that it is imposed by experience, and others that we are born with our space ready-made. After the preceding considerations, it will be seen what proportion of truth and of error there is in these two opinions.

In this progressive education which has resulted in the construction of space, it is very difficult to determine what is the share of the individual and what of the race. To what extent could one of us, transported from his birth into an entirely different world, where, for instance, there existed bodies displaced in accordance with the laws of motion of non-Euclidean solids - to what extent, I say, would he be able to give up the ancestral space in order to build up an entirely new space?

The share of the race seems to preponderate largely, and yet if it is to it that we owe the rough space, the fluid space of which I spoke just now, the space of the higher animals, is it not to the unconscious experience of the individual that we owe the infinitely precise space of the geometrician? This is a question that is not easy of solution. Here, however, is a fact which shows that the space bequeathed to us by our ancestors still preserves a certain plasticity. Certain hunters learn to shoot fish under the water, although the image of these fish is raised by refraction; and, moreover, they do it instinctively. Accordingly they have learnt to modify their ancient instinct of direction, or, if you will, to substitute for the association Am, B1, another association Am, B2, because experience has shown them that the former does not succeed.

Coherence is a major player in the arena of knowledge. There are coherence theories of belief, truth and justification which, combined, yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than some other belief?

One answer is that the belief has a coherent place or role in a system of related beliefs. Perception has an influence on the belief. You respond to sensory stimuli by believing that you are reading a page in a book rather than believing that you are in the garden. Belief has an influence on action: you will act differently if you believe that you are reading a page than if you believe something else. Perception and action underdetermine the content of belief, however. The same stimuli may produce various beliefs, and various beliefs may produce the same action. The role that gives the belief the content it has is the role it plays in a network of relations to other beliefs, the role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than from any other belief, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of beliefs. That is how coherence comes in. A belief has the content that it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of beliefs from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief. Strong coherence theories of the content of belief affirm that coherence is the sole determinant of the content of belief.

When turning from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? The answer is the way it coheres with the background system of beliefs. Again, there is a distinction between weak and strong theories of coherence. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory and intuition. Strong theories, by contrast, tell us that justification is solely a matter of how a belief coheres with a system of beliefs. There is, however, another distinction that cuts across the distinction between weak and strong coherence theories of justification. It is the distinction between positive and negative coherence theories (Pollock, 1986). A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.

A strong coherence theory of justification is a combination of a positive and a negative theory which tells us that a belief is justified if and only if it coheres with a background system of beliefs.

Coherence theories of justification and knowledge have most often been rejected as being unable to deal with perceptual knowledge, and, therefore, it will be most appropriate to consider a perceptual example which will serve as a kind of crucial test. Suppose that a person, call her Julie, works with a scientific instrument that has a gauge for measuring the temperature of liquid in a container. The gauge is marked in degrees. She looks at the gauge and sees that the reading is 105 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is 105 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that, though her belief that she sees the shape 105 is immediately justified as direct sensory evidence without appeal to a background system, the belief that the liquid in the container is 105 degrees results from coherence with a background system of beliefs affirming that the shape 105 is a reading of 105 degrees on a gauge that measures the temperature of the liquid in the container. This sort of weak coherence theory combines coherence with direct perceptual evidence, the foundation of justification, to account for the justification of our beliefs.

A strong coherence theory would go beyond the claim of the weak coherence theory to affirm that the justification of all beliefs, including the belief that one sees the shape 105, or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in a number of different ways. One line of argument would be to appeal to the coherence theory of the content of belief. If the content of the perceptual belief results from the relations of the belief to other beliefs in a system of beliefs, then one may argue that the justification of the perceptual belief also results from the relations of the belief to other beliefs in the system. What is more, however, one may argue for the strong coherence theory without assuming the coherence theory of the content of beliefs. It may be that some beliefs have the content that they do atomistically, but that our justification for believing them is the result of coherence. Consider the very cautious belief that I see a shape. How could the justification for that belief be the result of coherence with a background system of beliefs? What might the background system tell us that would justify that belief? Our background system contains a simple and primitive theory about our relationship to the world. To come to the specific point at issue, we believe that we can tell a shape when we see one, that we are trustworthy about such simple matters as whether we see a shape before us or not. We may, with experience, come to believe that sometimes we think we see a shape before us when there is nothing there at all, when we see an after-image, for example. And so we are not perfect, not beyond deception. Yet we are trustworthy for the most part. Moreover, when Julie sees the shape 105, she believes that the circumstances are not those that are deceptive about whether she sees that shape. The light is good, the numeral shapes are large, readily discernible, and so forth.
These are beliefs that Julie has that tell her that her belief that she sees a shape is justified. Her belief that she sees a shape is justified because of the way it is supported by her other beliefs. It coheres with those beliefs, and so she is justified.

There are various ways of understanding the nature of this support or coherence. One way is to view Julie as inferring that her belief is true from the other beliefs. The inference might be construed as an inference to the best explanation (Harman, 1973; Goldman, 1988; Lycan, 1988). Given her background beliefs, the best explanation Julie has for the existence of her belief that she sees a shape is that she does see a shape. Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, the inferences must be interpreted as unconscious inferences, as information processing, based on or accessing the background system. One might object to such an account on the grounds that not all justifying inference is explanatory and, consequently, be led to a more general account of coherence as successful competition based on a background system. The belief that one sees a shape competes with the claim that one does not, with the claim that one is deceived, and with other sceptical objections. The background system of beliefs informs one that one is trustworthy and enables one to meet the objections. A belief coheres with a background system just in case it enables one to meet the sceptical objections and in that way justifies one in the belief. This is a standard strong coherence theory of justification (Lehrer, 1990).

It is easy to show the relationship between positive and negative coherence theories in terms of the standard coherence theory. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in that belief. So, to return to Julie, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is not to be trusted. Imagine that, as she sees the reading of 105, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on, and, after years of working with the gauge, Julie, who has always placed her trust in the gauge, believes what the gauge tells her, that the liquid in the container is at 105 degrees. Though she believes what she reads, her belief that the liquid in the container is at 105 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the contents of the container. By contrast, when the red light is not illuminated and the background system of Julie tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because her belief coheres with her background system.

The foregoing illustrations of coherence theories of justification have a common feature, namely, that they are what are called internalistic theories of justification. The most general account of this distinction is that a theory of justification is 'internalist' if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective, and 'externalist' if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

Perhaps the clearest example of an internalist position would be a 'foundationalist' view, according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.

That is to say, when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that is externalist in relation to a strong version of internalism, by not requiring that the believer actually be aware of all the justifying factors, could still be internalist in relation to a weak version, by requiring that he be capable of becoming aware of them.

The most prominent recent externalist views have been versions of reliabilism, whose main requirement for justification is roughly that the belief be produced in a way or through a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified true belief account of knowledge, holding instead that knowledge is true belief which satisfies further conditions as well. This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined: according to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment, while according to an externalist view, content is significantly affected by such external factors.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc., that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem, at least, to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or elsewhere, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. - not just on what is going on internally in his mind or brain.

Putnam (1926- ) has made major contributions to philosophy; among the most recent are his contention that truth is ultimately an epistemic concept, e.g., that truth and 'ideal rational acceptability' are interdependent concepts, and his criticism of radical or evidence-transcendent scepticism. The two themes are brought together in Putnam's defence of what he calls 'internal realism'. His argument purports to show that metaphysical realism has no content. Nonetheless, if we abandon metaphysical realism, we should still hold to an internal or pragmatic realism, as suggested by Peirce, according to Putnam. Internal realism is realism about science and language, but only as an empirical theory internal to science. It is stronger than verificationism, because true beliefs are not merely justified beliefs but ideally justified beliefs, and it still maintains the priority of reference over meaning, and in this sense is realist. On the other hand, reference is seen as dependent on use and on what can be ideally verified, and since truth is tied to reference, truth too is an epistemic concept. Crudely, the only criterion for what is a fact is what it is [ideally] rational to accept, and so bivalence might not be preserved, since, for certain 'p', it might not be ideally rational either to accept 'p' or to reject it. Thus, truth and justification are two separate, but interdependent, notions.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts 'from the inside', simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, and that only internally accessible content can either be justified or justify anything else. But such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Meanwhile, the coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. The sensory experiences she has are mute until they are represented in the form of some perceptual belief. Beliefs are the engines that pull the train of justification. But what assurance do we have that our justification is based on true beliefs? What justification do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifact of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification. That would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is ideally justified for some person.

For such a person there would be no gap between justification and truth or between truth and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems or some convergence toward consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to profound objection. One is that there is a consensus that we can all be wrong about at least some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system itself rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.

Coherence theories of the content of our beliefs and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherence must accept the logical gap between justified belief and truth, but she may believe that her capacities suffice to close the gap to yield knowledge. That view is, at any rate, a coherent one.

The notion of truth occurs with remarkable frequency in our reflections on language, thought and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to a conclusion is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. In order to assess the plausibility of such theses, and in order to refine them and to explain why they hold (if they do), we require some view of what truth is - a theory that would account for its properties and its relations to other matters. Thus there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.

Such a theory, however, has been notoriously elusive. The ancient idea that truth is some sort of 'correspondence' with an alleged 'reality' remains objectionably obscure, yet the familiar alternatives - that true beliefs are those that are 'mutually coherent', or 'pragmatically useful', or 'verifiable in suitable conditions' - have each been confronted with persuasive counterexamples. A twentieth-century departure from these traditional analyses is the view that truth is not a property at all - that the syntactic form of the predicate 'is true' distorts its real semantic character, which is not to describe propositions but to endorse them. But this radical approach is also faced with difficulties and suggests, somewhat counterintuitively, that truth cannot have the vital theoretical role in semantics, epistemology and elsewhere that we are naturally inclined to give it. Thus truth threatens to remain one of the most enigmatic of notions: an explicit account of it can appear to be essential yet beyond our reach. However, recent work provides some grounds for optimism.

This distinction is associated with Leibniz, who declares that there are only two kinds of truths - truths of reason and truths of fact. The former are all either explicit identities, e.g., of the form 'A is A', 'AB is B', etc., or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them 'truths of reason' because the explicit identities are self-evident a priori truths, whereas the rest can be converted to such by purely rational operations. Because their denial involves a demonstrable contradiction, Leibniz also says that truths of reason 'rest on the principle of contradiction, or identity' and that they are necessary propositions, which are true of all possible worlds. Some examples are 'All equilateral rectangles are rectangles' and 'All bachelors are unmarried': the first is already of the form 'AB is B' and the latter can be reduced to that form by substituting 'unmarried man' for 'bachelor'. Other examples, or so Leibniz believes, are 'God exists' and the truths of logic, arithmetic and geometry.

Truths of fact, on the other hand, cannot be reduced to an identity and our only way of knowing them is a posteriori, or by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise and hold of the actual world, but not of every possible one. Some examples are 'Caesar crossed the Rubicon' and 'Leibniz was born in Leipzig', as well as propositions expressing correct scientific generalizations. In Leibniz's view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.

In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every true proposition, the concept of the predicate is contained in that of the subject (this holds even for propositions like 'Caesar crossed the Rubicon': Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar). And this containment relationship - which is eternal and unalterable even by God - guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and hence necessary, and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite analysis. But while this may entail that we cannot prove such propositions a priori, it does not appear to show that they could have been false. Intuitively, it seems a better ground for supposing that they are necessary truths of a special sort. A related question arises from the idea that truths of fact depend on God's decision to create the best world: if it is part of the concept of this world that it is best, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God's decision to create this world; but God is necessarily good, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.

Necessary truths are ones which must be true, or whose opposite is impossible. Contingent truths are those that are not necessary and whose opposite is therefore possible. In what follows, 1-3 are necessary, and 4-6 contingent.

1. It is not the case that it is raining and not raining.

2. 2 + 2 = 4.

3. All bachelors are unmarried.

4. It seldom rains in the Sahara.

5. There are more than four states in the USA.

6. Some bachelors drive Maseratis.

Plantinga (1974) characterizes the sense of necessity attributed in 1-3 as ‘broadly logical’. For it includes not only truths of logic, but those of mathematics, set theory, and other quasi-logical truths. Yet it is not so broad as to include matters of causal or natural necessity, such as

7. Nothing travels faster than the speed of light.

Some suppose that necessary truths are those we know a priori. But we lack a criterion for a priori truths, and there are necessary truths we don’t know at all, e.g., undiscovered mathematical ones. It would not help to say that necessary truths are ones it is possible, in the broadly logical sense, to know a priori, for this is circular. Finally, Kripke (1972) and Plantinga (1974) argue that some contingent truths are knowable a priori, even though knowledge may depend on experience in at least two ways: (1) experience is necessary to acquire the concepts involved in a proposition; and (2) experience is necessary to entertain the proposition. For this allows that experience can provide knowledge that a thing is so and so. Hence, Kant’s observation fails to support his key claim that knowledge of mathematical propositions, such as that 7 + 5 = 12, is a priori. We can, therefore, say that without mathematical knowledge there is no scientific knowledge - yet the epistemology (‘naturalism’) suggested by scientific knowledge seems to make mathematical knowledge impossible. Similar problems face the suggestion that necessary truths are the ones we know with certainty. We lack a criterion for certainty; there are necessary truths we don’t know; and (barring dubious arguments for scepticism) it is reasonable to suppose that we know some contingent truths with certainty.

Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity, e.g., of the form ‘A is A’, ‘AB is B’, etc., or is reducible to an identity by successively substituting equivalent terms. As for certainty, the issues surrounding it are inextricably connected with those concerning scepticism. For many sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. In part in order to avoid scepticism, the anti-sceptics have generally held that knowledge does not require certainty.

According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this ‘entailment thesis’ can be rendered more accurately if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological ‘certainty’ (Prichard, 1950; Ayer, 1956), or conviction (Lehrer, 1974), or acceptance. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or a facsimile, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, nor vice versa, so that each may exist without the other (the separability thesis).

A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be ‘odd’ for those who lack confidence to claim knowledge. It would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct’. Woozley explains this tension using a distinction between conditions under which we are justified in making a claim, such as a claim to know something, and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, knowledge is compatible not only with the lack of certainty, but also with a complete lack of belief. He argues by example. Radek has forgotten that he once learned some English history, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he forgot that he studied history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066, he would deny having the ‘belief’ that the Battle of Hastings took place in 1066. A fortiori he would deny being sure (or having the right to be sure) that 1066 was the correct date. Radford would, nonetheless, insist that Radek knows when the Battle occurred, since clearly he remembered the correct date. Radford admits that it would be inappropriate for Radek to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to facts about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought, at least, to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.

D.M. Armstrong (1973) takes a different tack against Radford. Armstrong will grant Radford that Radek does know that the Battle of Hastings took place in 1066. However, Armstrong suggests that Radek also believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. What is more, had Radek been mistaught that the Battle occurred in 1066, and had he forgotten being ‘taught’ this and subsequently ‘guessed’ that it took place in 1066, we would surely describe the situation as one in which Radek’s false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must likewise describe Radek’s true belief as having become unconscious but persisting long enough to cause his guess. Thus, while Radek consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

All in all, this view is nonetheless problematic. Leibniz’s examples of reduction are too sparse to prove a claim about all necessary truths. Some of his reductions, moreover, are deficient: Frege pointed out, for example, that his proof of ‘2 + 2 = 4’ presupposes the principle of association and so does not depend only on the principle of identity. More generally, it has been shown that arithmetic cannot be reduced to logic, but requires the resources of set theory as well. Finally, there are other necessary propositions, e.g., ‘Nothing can be red and green all over’, which do not seem to be reducible to identities and which Leibniz does not show how to reduce.

Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things which may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth is convention. Thus A.J. Ayer, for example, argued that the only necessary truths are analytic statements and that the latter rest entirely on our commitment to use words in certain ways. His view was ‘positivism’ in its adherence to the doctrine that science is the only form of knowledge and that there is nothing in the universe beyond what can in principle be scientifically known. It was ‘logical’ in its dependence on developments in logic and mathematics in the early years of the twentieth century which were taken to reveal how a priori knowledge of necessary truths is compatible with a thorough-going empiricism.

The logical positivist conception of knowledge, in its original and purest form, sees human knowledge as a complex intellectual structure employed for the successful anticipation of future experience. It requires, on the one hand, a linguistic or conceptual framework in which to express what is to be categorized and predicted and, on the other hand, a factual element which provides that abstract form with content. This content comes, ultimately, from sense experience. No matter of fact that anyone can understand or intelligibly think to be so could go beyond the possibility of human experience, and the only reasons anyone could ever have for believing anything must come, ultimately, from actual experience.

Internalists hold that the reason by which a belief is justified must be accessible, at least in principle, to the subject holding that belief. Externalists deny this requirement, proposing that it makes knowing too difficult to achieve in most normal contexts. The internalist-externalist debate is sometimes viewed as a debate between those who think that knowledge can be naturalized (externalists) and those who do not (internalists). Naturalists hold that the evaluative notions used in epistemology can be explained in terms of non-evaluative concepts - for example, that justification can be explained in terms of something like reliability. They deny a special normative realm of language that is theoretically different from the kinds of concepts used in factual scientific discourse. Non-naturalists deny this and hold to an essential difference between the normative and the factual: the former can never be derived from or constituted by the latter. So internalists tend to think of reason and rationality as non-explicable in natural, descriptive terms, whereas externalists think such an explanation is possible.

Most of the epistemological tradition has been internalist, with externalism emerging as a genuine option only in the twentieth century. The best way to clarify the distinction is to state it explicitly: a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective. (There is thus some scope for limited relativism in externalist accounts of knowledge and justification.) However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The internalist requirement of accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, without the need for any change of position, new information, etc. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism - viz., the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true - seems to require the strong interpretation.

Perhaps the clearest example of an internalist position would be a ‘foundationalist’ view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a ‘coherence’ view could also be internalist, if both the beliefs or other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible.

The most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is roughly that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc., that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content would seem to require knowledge of these factors - knowledge which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist might insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else: but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

According to the infinite regress argument for foundationalism, if every justified belief could be justified only by inferring it from some further justified belief, there would have to be an infinite regress of justification. Because there can be no such regress, there must be justified beliefs that are not justified by appeal to some further justified belief. Instead, they are non-inferentially or immediately justified; they are basic or foundational, the ground on which all our other justified beliefs are to rest.

Variants of this ancient argument have persuaded and continue to persuade many philosophers that the structure of epistemic justification must be foundational. Aristotle recognized that if we are to have knowledge of the conclusion of an argument on the basis of its premisses, we must know the premisses. But if knowledge of a premiss always required knowledge of some further proposition, he argued, then in order to know the premiss we would have to know each proposition in an infinite regress of propositions. Since this is impossible, there must be some propositions that are known, but not by demonstration from further propositions: there must be basic, non-demonstrable knowledge, which grounds the rest of our knowledge.

Foundationalist enthusiasm for regress arguments often overlooks the fact that they have also been advanced on behalf of scepticism, relativism, fideism, contextualism (Annis, 1978) and coherentism. Sceptics agree with foundationalists both that there can be no infinite regress of justifications and that, nevertheless, there must be one if every justified belief can be justified only inferentially, by appeal to some further justified belief. But sceptics think all genuine justification must be inferential in this way - the foundationalist’s talk of immediate justification merely obscures the lack of any rational justification properly so-called. Sceptics conclude that none of our beliefs is justified. Relativists follow essentially the same pattern of sceptical argument, concluding that our beliefs can only be justified relative to the arbitrary starting assumptions or presuppositions either of an individual or of a form of life.

Fideists also agree with foundationalists that there can be no infinite regress and that, nevertheless, there must be one if every justified belief can be justified only inferentially. And, again like sceptics and relativists, fideists reject foundationalist talk of rational but immediate justification. Instead, there are beliefs (the fideist’s core religious beliefs) that are certified - hence justified - not rationally but by faith, where faith is usually construed as some divinely inspired act, state or faculty that yields warranted trust in the otherwise unjustified beliefs. What stops the fatal regress of justification is not belief justified by some immediate foundationalist intuition, but belief certified by a non-inferential affair beyond the pale of rationality.

Sceptics and relativists see little to choose between such fideism and foundationalism. They are not alone in doing so. Contextualists and coherentists are likely to agree that whether one appeals to faith or to immediacy, the effect is the same: arbitrariness in one’s starting point, which would lie beyond responsible canons of justification and criticism (Annis, 1978; BonJour, 1978).

Regress arguments are not limited to epistemology. In ethics there is Aristotle’s regress argument (in the Nicomachean Ethics) for the existence of a single final end of rational action. In metaphysics there is Aquinas’s regress argument for an unmoved mover: if everything in motion were moved only by a mover that itself is in motion, there would have to be an infinite sequence of movers each moved by a further mover; since there can be no such sequence, there is an unmoved mover. A related argument has recently been given to show that not every state of affairs can have an explanation or cause of the sort posited by principles of sufficient reason, and hence that such principles are false, for a priori reasons having to do with their concepts of explanation.

How can the same argument serve so many masters, from epistemology to ethics to metaphysics, from foundationalism to coherentism to scepticism? One reason is that the argument has the form of a reduction to absurdity of conjoined assumptions. Like all such arguments, it cannot tell us, by itself, which assumption we should reject in order to escape the absurdity. Foundationalists reject one, coherentists another, sceptics a third, and so on. Furthermore, the same argument form can be instantiated by different subject matters, of which epistemology is but one.

What exactly is the form of the argument? Black (1988) suggests that the first assumption or premiss has the form:

(1) (∀x)(Ax → (∃y)(Ay & xRy)).

That is, for every x that has property A, there is a y such that y has A and x bears relation R to y. Compare: for every belief x that is justified, there is a belief y such that y is justified and x is justified by y (or x is based on y, or x is inferable from y, or y is a reason for x). Compare also: for everything x that is in motion, there is a y in motion that moves x. Next follows the assumption that

(2) (∃x)Ax.

That is, there are A’s - there are justified beliefs, there are things in motion. Additionally, one must assume that

(3) R is irreflexive, and

(4) R is transitive.

That is, nothing bears R to itself; and if x bears R to y and y bears R to z, then x bears R to z. For instance, if x justifies y and y justifies z, then x justifies z; if x moves y and y moves z, then x moves z. Finally, the argument assumes that

(5) there is no infinite sequence each of whose elements has A and bears R to its predecessor.

These assumptions, however, entail a contradiction. In particular, it follows from (1)-(4) that, contrary to (5), there is an infinite sequence each of whose elements both has A and bears R to its predecessor.
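To make the entailment explicit, the derivation behind the regress can be sketched as follows (a reconstruction; the text itself leaves the construction of the sequence implicit):

```latex
% Sketch: how the first four assumptions yield the forbidden sequence.
\begin{align*}
&\text{Since } (\exists x)Ax, \text{ pick some } x_0 \text{ with } Ax_0.\\
&\text{Given } x_n \text{ with } Ax_n, \text{ the universal premiss yields an } x_{n+1}
  \text{ with } Ax_{n+1} \text{ and } x_n R\, x_{n+1}.\\
&\text{Transitivity gives } x_m R\, x_n \text{ whenever } m < n;
  \text{ irreflexivity then gives } x_m \neq x_n.\\
&\text{So } x_0, x_1, x_2, \ldots \text{ is an infinite sequence of distinct elements,}\\
&\text{each having } A \text{ and linked by } R \text{ to the next - contradicting the final assumption.}
\end{align*}
```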

Not only do these assumptions jointly entail an infinite sequence each of whose elements has A and bears R to its predecessor; each assumption is necessary for the entailment (Black, 1988). For example, the sequence is entailed only on the assumption that R is transitive; thus the regress argument for foundationalism works only if all inferential justification is transitive.

Since these assumptions entail a contradiction, one or more of them must be rejected. Foundationalists reject the relevant instantiations of (1): there are beliefs that are justified, but not by appeal to some further justified belief. (A few foundationalists may also reject the assumption that R is irreflexive, thereby allowing some beliefs to be self-justifying.) Fideists likewise reject the relevant instantiations of (1), but disagree with foundationalists about the nature of the justification of the otherwise unjustified beliefs (faith versus intuition). Sceptics and relativists, on the other hand, hold to (1) but reject (2): there are no justified beliefs. Coherentists accept (1)-(3) but reject (4), that R is transitive, on the ground that inferential justification is often a holistic affair that is non-transitive. Contextualists may likewise reject (4), or reject (1) in favour of contextually justified beliefs - those which are unchallenged by the relevant objectors in a given context of justification.

Few philosophers, if any, seem to have rejected (5) - the assumption that there is no infinite sequence each of whose elements has A and bears R to its predecessor - thereby opting for what we might call ‘justificational infinitism’. Nonetheless, foundationalists and others have often argued against the infinitist option. The usual attempts to do so prove to beg the question against infinitists, typically in favour of foundationalism. For example, it is often said that a regress of conditional justification would, at best, provide only conditional justification for its elements, and that we must appeal to something outside the regress - hence, so far as the resources of the regress are concerned, to something non-inferentially justified. This is to assume just what the infinitist denies, though it now appears that a non-question-begging argument can be given, in the form of a reduction to absurdity of infinitism. In metaphysics, by contrast, the corresponding instantiations of (5) have often been rejected, as when philosophers argue that there can be an infinite sequence of movers or causes, each moved or caused by its predecessor.

Regress arguments evidently are not the knock-down affairs their advocates have so often supposed them to be. Only if one’s favoured way out of the contradiction is the only way, or at least the best way, need such arguments persuade. But showing this has proved surprisingly difficult, requiring kinds of argument and evidence that go well beyond the resources of the regress argument itself.

Consider, for example, the regress argument for foundationalism. Suppose we grant the foundationalist that there are justified beliefs and that justification is irreflexive: this is to grant the relevant instantiations of (2) and (3). What about (4) - is justification transitive? Some varieties clearly are, including deductive inferential justification, according to which x justifies y if x is justified and y is deductively inferable from x. Suppose further that y justifies z in the same sense. It follows that z is justified and deducible from x, hence that x justifies z: deductive inferential justification is transitive. The model or ideal of deductive justification, from Aristotle’s theory of demonstration through Euclid nearly to the present, helps explain why so many have supposed that inferential justification must be transitive.

But not all justification is deductive. For example, the justified belief ‘b’, that Sam is a bartender, inductively justifies belief ‘c’, that Sam can make a whiskey sour. Now consider the justified belief ‘a’, that Sam is a bartender who has forgotten how to make a whiskey sour. Belief ‘a’ justifies ‘b’, which inductively justifies ‘c’; yet obviously ‘a’ does not justify ‘c’ but rather defeats it. Transitivity apparently fails. Related problems affect varieties of justification according to which x justifies y only if x confers a sufficiently high degree of probability on y.
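The probabilistic version of the failure can be made concrete with a toy model. Reading ‘x justifies y’ as ‘P(y | x) meets some threshold t’, a small probability space modelled on the bartender example shows the relation is non-transitive. (The outcome names and weights are illustrative assumptions, not anything fixed by the text.)

```python
# Toy model: "x justifies y" read as "the conditional probability
# P(y | x) is at least t". Outcomes carry probability weights;
# the three beliefs a, b, c are events (sets of outcomes).

outcomes = {
    "bartender_who_forgot": 0.1,  # tends bar but forgot the recipe
    "ordinary_bartender":   0.8,  # tends bar and can mix the drink
    "not_a_bartender":      0.1,
}

# a: Sam is a bartender who has forgotten how to make a whiskey sour
a = {"bartender_who_forgot"}
# b: Sam is a bartender
b = {"bartender_who_forgot", "ordinary_bartender"}
# c: Sam can make a whiskey sour
c = {"ordinary_bartender"}

def prob(event):
    """Probability of an event: sum of its outcomes' weights."""
    return sum(outcomes[w] for w in event)

def justifies(x, y, t=0.8):
    """x justifies y iff the conditional probability P(y | x) meets t."""
    return prob(y & x) / prob(x) >= t

print(justifies(a, b))  # True:  P(b | a) = 1.0
print(justifies(b, c))  # True:  P(c | b) = 8/9
print(justifies(a, c))  # False: P(c | a) = 0.0
```

So ‘a’ justifies ‘b’ and ‘b’ justifies ‘c’, yet ‘a’ fails to justify ‘c’, exactly as in the informal example: the high-probability relation does not compose.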

Another variety of inferential justification is ‘inference to the best explanation’, roughly what Peirce called ‘abduction’. Here x justifies y if y is the best explanation of the phenomena described by x: if evolutionary theory best explains the fossil record, the record justifies the theory. But explanation relations may not all be transitive. Suppose y is the best explanation of x, so that x justifies y, and z is the best explanation of y, so that y justifies z. If transitivity held, z would be the best explanation of x. Yet this contradicts the supposition that y is the best explanation of x, since presumably there can be only one best explanation of x.

Foundationalists are not the only ones affected by these troubles with transitivity. So are those fideists, sceptics and relativists who advance regress arguments for their distinctive views. Like foundationalists, they must assume that justification is transitive; otherwise we are not forced, in order to escape the vicious regress, to reject any of the assumptions listed above. It therefore seems that coherentists, who reject transitivity, are in the best position of all to advance a regress argument for their view - a situation of some irony, in light of the long tradition to the contrary, from Aristotle on. But the regress argument is slippery footing even for coherentists. If all beliefs are to be justified by inferring them from other beliefs, as (1) requires, how do we break out of the circle of beliefs to make contact with the world beyond? There are good coherentist answers to this question, some having the possibly welcome effect of denying (1), but they all require support from kinds of argument and evidence that exceed anything to be found in the regress argument itself.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals.

Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right sort of causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations. This seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization; and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.

For example, Armstrong (1973) proposed that a belief of the form ‘This [perceived] object is F’ is [non-inferential] knowledge if and only if the belief is a completely reliable sign that the perceived object is F - that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject x and perceived object y, if x has those properties and believes that y is F, then y is F. Dretske (1981) offers a rather similar account in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is F.

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise - to think, say, that magenta things look chartreuse to you and chartreuse things look magenta. If you fail to heed these reasons you have for thinking that your colour perception is awry, and believe of a thing that looks magenta to you that it is magenta, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being magenta in such a way as to be a completely reliable sign (or to carry the information) that the thing is magenta.

One can fend off this sort of counterexample by simply adding to the causal condition the requirement that the belief be justified. But this enriched condition would still be insufficient. Suppose, for example, that in an experiment you are given a drug that in nearly all people, but not in you, as it happens, causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says: ‘No, wait a minute - the drug you took was just a placebo.’ Suppose that this last thing the experimenter tells you is false. Her telling you it gives you justification for believing, of a thing that looks magenta to you, that it is magenta; but a fact about this justification that is unknown to you - that the experimenter’s last statement was false - makes it the case that your true belief is not knowledge, even though it satisfies Armstrong’s causal condition.

For inferential knowledge Armstrong appeals to the framework of classical deductive logic and scientific induction. The use of the latter he justifies by inference to the best explanation. A sighting of many black ravens and no non-black ones serves to justify the generalization ‘All ravens are black’, in the sense that this hypothesis is more probable, given the evidence, than any alternative. By the equivalence principle, whatever confirms ‘All non-black things are non-ravens’ also confirms the logically equivalent ‘All ravens are black’; hence instances of white shoes, green leaves and red apples count as evidence for the hypothesis, which seems absurd. This is the characteristic feature of Hempel’s paradox. Still, the best explanation of the sighting of only black ravens is that all ravens are, in fact, black. Armstrong is thus opposed to a Humean scepticism concerning induction, without feeling the need to align himself with any formal inductive logic in the manner of Carnap. The traditional, or Humean, problem of induction, often referred to simply as the problem of induction, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, e.g., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premises are true, or even that their chances of truth are significantly enhanced? An alternative version of the problem may be obtained by formulating it with reference to the so-called principle of induction: that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases.
An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume’s argument is then that no such justification is possible: the principle cannot be justified a priori, because it is not contradictory to deny it, and it cannot be justified by appeal to its having been true in previous experience without obviously begging the question.
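The equivalence principle at work in Hempel’s paradox, that logically equivalent hypotheses are confirmed by the same evidence, can be illustrated mechanically. The following is a minimal sketch; the tiny ‘worlds’ of objects and the function names are hypothetical, introduced purely for illustration.

```python
# A toy check that 'All ravens are black' and its contrapositive
# 'All non-black things are non-ravens' are true in exactly the same
# situations. Each object in a 'world' is a pair (is_raven, is_black).
from itertools import product

def all_ravens_black(world):
    """'All ravens are black': every raven in the world is black."""
    return all(is_black for is_raven, is_black in world if is_raven)

def all_nonblack_nonravens(world):
    """Contrapositive: every non-black object is a non-raven."""
    return all(not is_raven for is_raven, is_black in world if not is_black)

# Enumerate every world of up to three objects and confirm the two
# hypotheses always agree, as logical equivalence requires.
object_kinds = list(product([True, False], repeat=2))
for size in range(4):
    for world in product(object_kinds, repeat=size):
        assert all_ravens_black(world) == all_nonblack_nonravens(world)

# A world containing only a white shoe (not a raven, not black)
# vacuously satisfies both hypotheses, which is why, paradoxically,
# the shoe counts as a confirming instance.
print(all_ravens_black([(False, False)]))
```

A world with a single non-black raven falsifies both formulations at once, which is the other half of the equivalence.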

Armstrong argues further that this response to inductive scepticism follows from a belief in strong laws of nature. He conceives of laws as contingent relations between universals, and calls these strong laws. Since laws are so conceived as more than mere regularities, induction has something to rest on: the property of blackness is tied to the property of ravenhood, and that is why it is reasonable to assert the generalization that all ravens are black, given a sample.

The second major strand in Armstrong’s epistemology is a belief in the Moorean certainties. Like Moore, and unlike Russell, Armstrong believes that some of our beliefs are so fundamental that philosophical doubt about them cannot be rationally entertained. He believes, for example, that one cannot seriously entertain a rational doubt that one has a body. Any philosophical speculation designed to produce such a doubt would require an argument with some contingent premiss that is more assertable than the doubted proposition, and in this case no such premiss can be found.

The belief in Moorean certainties is intimately related to Armstrong’s ‘realism’. The existence of the external world is a Moorean certainty, its character the object of scientific discovery, and the only entities a metaphysics should postulate are those required by good scientific explanations.

Direct realism is a view about what the objects of perception are. It is a type of realism, since it holds that these objects exist independently of any mind that might perceive them, and it thereby rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a ‘direct’ realism rules out those views defended under the rubric of ‘critical realism’ or ‘representative realism’, on which there is some non-physical intermediary, usually called a ‘sense-datum’ or a ‘sense impression’, that must first be perceived or experienced in order to perceive the object that exists independently of this perception. According to critical realism, such an intermediary need not be perceived ‘first’ in a temporal sense, but it is a necessary ingredient which suggests to the perceiver an external reality, or which offers the occasion on which to infer the existence of such a reality. Direct realism, on the other hand, denies the need for any such mental go-between in order to explain our perception of the physical world.

Often the distinction between direct realism and other theories of perception is explained in terms of what is ‘immediately’ perceived as opposed to what is ‘mediately’ perceived. The terms are Berkeley’s, who claims that one might be said to hear a coach rattling down the street, but that this is mediate perception, as opposed to what is ‘in truth and strictness’ the immediate perception of a sound. Since the senses ‘make no inferences’, the perceiver is then said to infer the existence of the coach, or to have it suggested to him by means of hearing the sound. Thus, for Berkeley, the distinction between mediate and immediate perception is explained in terms of whether or not inference or suggestion is present in the perception itself.

Berkeley went on to claim that the objects of immediate perception, sounds, colours, tastes, smells, sizes and shapes, were all ‘ideas in the mind’. Yet he held that there was no further reality to be inferred from them, so that the objects of mediate perception are reduced to being simply collections of ideas. Berkeley thus uses the immediate-mediate distinction to defend ‘idealism’. A direct realist, however, can also make use of Berkeley’s distinction to define his own position. D.M. Armstrong does this by claiming that the objects of immediate perception are all occurrences of sensible qualities, such as colours, shapes and sounds, and that these are all physical existents, not ideas or mental intermediaries of any sort. Physical objects, all mediately perceived, are the bearers of these immediately perceived properties.

Berkeley’s and Armstrong’s way of drawing the distinction between mediate and immediate perception, by reference to inference or the lack of it, faces major difficulties. We are asked to believe that some psychological element of inference or suggestion enters into our mediate perception of physical objects such as coaches and camels. But this is implausible. First, there are cases in which it is plausible to assert that someone perceived a physical object, a tree, say, even when that person was unaware of perceiving it. (We can infer from his behaviour in carefully walking around it that he did see it, even though he does not remember seeing it.) Armstrong would have to say that in such cases inference was present, because seeing a tree would be a case of mediate perception, although here it would have to be an unconscious inference. But this seems baseless, for there is no empirical evidence that any sort of inference was made at all.

Second, it seems that whether a person infers the existence of something from what he perceives is more a question of talent and training than of what the nature of the objects inferred really is. For instance, given three different colour samples, a trained artist might see their differences immediately. Someone with less colour sense, however, might see patches ‘A’ and ‘B’ as being the same in colour, and likewise patches ‘B’ and ‘C’, and so might have to infer that ‘A’ and ‘C’ differ; thus inference might be present in determining differences in colour, yet colour was supposed to be an object of immediate perception. Conversely, a park ranger might not have to infer that the animal he sees is a Florida panther; he sees it to be such straightaway. Someone unfamiliar with the Everglades, however, might have to infer this from the creature’s markings. Hence, inference need not be present in cases of perceiving physical objects, yet perception of physical objects was supposed to be mediate perception.

A more straightforward way to distinguish between different objects of perception was advanced by Aristotle in De Anima, where he spoke of objects directly or essentially perceived as opposed to objects incidentally perceived. The former comprise perceptual properties: either those discerned by only one sense (the ‘proper sensibles’), such as colour, sound, taste, smell and tactile qualities, or those discerned by more than one sense (the ‘common sensibles’), such as size, shape and motion. The objects incidentally perceived are the concrete individuals which possess the perceptual properties, that is, particular physical objects.

According to Aristotle’s direct realism, we perceive physical objects incidentally, that is, only by means of the direct or essential perception of certain properties that belong to such objects. In other words, by perceiving the real properties of things, and only in this way, can we be said to perceive the things themselves. These perceptual properties, though not existing independently of the objects that have them, are nevertheless held to exist independently of the perceiving subject; and the perception of them is direct in that no mental intermediaries have to be perceived or sensed in order to perceive these real properties.

Aristotle’s way of defining his position seems superior to the psychological account offered by Armstrong, since it is unencumbered with the extra baggage of inference or suggestion. Yet a common interpretation of the Aristotelean view leads to grave difficulties. This interpretation identifies the property of the perceived object with a property of the perceiving sense organ. It is based on Aristotle’s saying that in perception the soul takes on the form of the object perceived without its matter. On this interpretation it is easy to think of direct realism as being committed to the view that ‘colour as seen’ or ‘sound as heard’ are independently existing properties of physical objects. But such a view has been rightly disparaged by its critics and labelled ‘naïve realism’, for it holds that the way things look or seem is exactly the way things are, even in the absence of perceivers to whom they appear that way.

The chief difficulty of naïve realism is well presented by an argument of Bertrand Russell (1962). Russell claims that an ordinary table appears to be of different colours from different points of view and under different lighting conditions. Since each of the colours that appear has just as much right to be considered real, we should avoid favouritism and deny that the table has any one particular colour. Russell then says the same sort of thing about its texture, shape and hardness. All of these qualities are what we might call ‘appearance-determined’ qualities; that is, they are not real independently of how they appear to perceivers. So the real table, for Russell, was something apart from the directly perceived colours, sounds, smells and tactual qualities, all of which Russell termed ‘sense-data’. It is from these sense-data that Russell believed we infer the existence of physical objects.

Russell’s argument, however, works only against the ‘naïve’ version of direct realism. It should first be noted that the argument does not show that the table has no real colour, shape or texture, but only that we might not know which of the apparent properties are real properties of the table. So the most that Russell can prove with his argument is that we must remain sceptical about the real properties of the table; but this might be enough to show that we have no right to talk about its real properties at all. If we did have some way of determining which were the real properties, however, then Russell’s argument would lose its sting. A step towards making this determination can be taken by questioning Russell’s initial supposition that some perceiver-dependent properties might turn out to be real properties. To agree with this supposition is to fall into the error of naïve realism. Instead, the clear-headed direct realist is on safer ground in denying that the directly apprehended real properties are ‘colours as seen’, ‘sounds as heard’ or ‘textures as felt’, for this is to confuse the real properties of things with the appearances they present to perceivers.

The direct realist should instead begin by insisting that real properties are not perceiver-dependent. This would mean that if colour is to be a real property, it must be specified in terms that do not require essential reference to the visual experience of perceivers. One way to do this would be to identify the colour of a surface with the character of the light waves emitted or reflected from that surface. This would be an empirical identification; that is, the predicate ‘is coloured’ and the predicate ‘reflects or emits light of a certain wavelength’ would refer to one and the same property.

To say, then, that fire engines are red even at night would be to say that their surfaces, under normal conditions of illumination, would reflect light at the red end of the colour spectrum. This is still compatible with saying that they are not red in the dark, in that they are not now reflecting any such light. This gets around Russell’s problem about choosing the ‘real colour’ of an object. Another way to make the point is to say that the ‘standing colour’ of fire engines remains red no matter what the conditions of illumination, whereas their ‘transient colour’ changes according to changes in such lighting conditions.

Similar reductions could be made with regard to the other sensible properties that seemed to be perceiver-dependent: sound could be reduced to sound waves, tastes and smells to the particular shapes of the molecules that lie on the tongue or enter the nose, and tactual qualities such as roughness and smoothness to structural properties of the objects felt. All of these properties would be taken to be distinct from the perceptual experiences that they typically give rise to when they cause changes in the perceiver’s sense organs. When critics complain that such a reduction would ‘leave out the greenness of greens and the yellowness of yellows’, the direct realist can answer that it is by identifying different colours with distinct light waves that we can best explain how perceivers in the same environment, with similar physical constitutions, can have similar colour experiences of green or of yellow.

If such a general reductive programme could be made plausible, it would show that Locke’s ‘secondary qualities’, colour, sound, taste and smell, were really ‘primary qualities’ after all, in that they could be specified apart from their typical effects on perceivers. Democritus had said that these finer qualities exist only ‘by convention’: not as something that holds everywhere by nature, but as something produced in, or contributed by, human beings in their interaction with a world which really contains only atoms of certain kinds in a void. On the reductive programme, by contrast, it is only some of the qualities imputed to objects, e.g., colour as seen, sweetness as tasted, and the like, that are not possessed by those objects. A direct realist could then claim that one directly perceives what is real only when there is no difference between the property proximately impinging on the sense organ and the property of the object which gives rise to the sense organ’s being affected. For colour, this would mean that the light waves entering the eye must match those reflected from the surface of the object; for sound, that the sound waves entering the ear must match those emitted at their source. A difference between the property at the object and that at the sense organ would result in illusion, not veridical perception. Perhaps this is simply a modern version of Aristotle’s idea that in genuine perception the soul (now the sense organ) takes on the form of the perceived object.

If it is protested that illusion might also result from an abnormal condition of the perceiver, this too can be accepted. If one’s colour experience deviated too far from normal, even when the physical properties at the object and at the sense organ were the same, then misperception or illusion would result. But such illusion could only be noted against a backdrop of veridical perception of real properties. Thus, the chance of illusion due to subjective factors need not lead to Democritus’s view of colours, sounds, tastes and smells as existing merely ‘by convention’. The direct realist can insist that there must be a real basis in veridical perception for any such agreement to take place at all, and veridical perception is best explained in terms of the direct perception of the properties of physical objects. It is explained, in other words, when our perceptual experience is caused in the appropriate way.

This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any real properties at all, we have no knowledge of any such properties. But no view of perception alone is sufficient to refute such global scepticism. For such a refutation we must go beyond our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.

Nonetheless, questions about truth itself remain. To be true is to be fully consistent with fact or reality: not false or incorrect, but conforming to an original or to a standard. Etymologically, ‘true’ derives from the same root as ‘trust’ and ‘troth’, and that connection survives in the word. Truth, then, is the conformity of a statement to fact or actuality; on some theories it is agreement with an accepted standard, on others it is the supreme reality and the ultimate meaning and value of existence. A compound proposition, such as a conjunction or a negation, is truth-functional when its truth-value is always determined by the truth-values of its component propositions.
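The point about compounds such as conjunction and negation, that their truth-values are always determined by the truth-values of their components, can be illustrated with a brief sketch. The function names below are hypothetical, chosen only for illustration.

```python
# Truth-functionality: the value of a compound is fixed, row by row,
# solely by the values assigned to its components.
from itertools import product

def negation(p):
    """Truth-function for 'not p'."""
    return not p

def conjunction(p, q):
    """Truth-function for 'p and q'."""
    return p and q

# Tabulate the compound not-(p and q) over every assignment to p and q;
# each row's output depends on nothing but that row's inputs.
for p, q in product([True, False], repeat=2):
    print(p, q, negation(conjunction(p, q)))
```

A non-truth-functional compound, such as ‘necessarily p’, could not be tabulated this way: two components with the same truth-value can yield different truth-values for the whole.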

Moreover, ‘reality’ names the quality or state of being actual or true, whether of a person, an entity or an event, and, by extension, the totality of all things possessing actuality, existence or essence. To be realistic is to attend to what objectively is the case: to the satisfaction of instinctual needs through awareness of, and adjustment to, environmental demands. Realization, finally, is the act of making real, or the condition of having been made real.

Nonetheless, a reason is a declaration made to explain or justify an action, or a belief or desire upon which one acts: the underlying fact or cause that provides logical grounds for a premise or occurrence. Reason is also the faculty by which humans seek or attain knowledge or truth, exercised in spoken exchange and open discussion, that is, dialectically. To reason is to determine or conclude by logical thinking, to work out a solution to a problem, or to persuade or dissuade someone with reasons. Yet mere reason is sometimes insufficient to convince us of a claim’s veracity. An intuitively given certainty is apprehended directly, as truth or fact, without the use of the rational process, as when one assesses someone’s character, or sizes up a situation and draws sound conclusions, in the exercise of judgement.

To be reasonable is to be governed by, or in accord with, reason or sound thinking, within the bounds of common sense: a reasonable solution to a problem is one arrived at by a fair use of reason, especially in forming conclusions, inferences or judgements from the evidential alternatives an argument presents. To be real, by contrast, is to occur in fact, to have verifiable existence: real objects, a real illness. ‘Really’ means true and actual, not imaginary, alleged or ideal: people and not ghosts, the practical matters and concerns of the experienced world. To call something real is thus to credit it with objectivity: with existing, and having the character it has, despite our subjectivity and our conventions of thought and language, as a thing or whole having actual existence.

An idea, in turn, is a concept of reason that is transcendent but nonempirical: a thought or conception that potentially or actually exists in the mind as a product of mental activity. In the philosophy of Plato, an idea is an archetype of which a corresponding being in phenomenal reality is an imperfect replica; in Hegel, it is absolute truth, the conception and ultimate product of reason; in everyday usage, it may be no more than a mental image of something remembered.

Imagination, conceivably, is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed can confront and deal with reality by the creative powers of the mind. Fantasy is characteristically further removed from reality; the dominance of fantasy over reason is a degree of insanity. Yet the artist gives the products of imagination free rein while remaining in command of the fantasy, and it is exactly the mark of the neurotic that his fantasy possesses him.

A fact is something that exists objectively and in actuality: a real occurrence or event, known to have existed, as when one must prove the facts of a case; or something believed to be true or real and determined by evidence to be so. Usages such as ‘allegation of fact’ and ‘the true facts’ may occasion qualms among critics who insist that facts can only be true, but they are often useful for emphasis. Facts are discovered or determined; evidence establishes the events that comprise them. Standing in contrast are ‘faction’, literature that treats real people or events as if they were fictional, or uses real people or events as essential elements in an otherwise fictional rendition, and ‘factitious’, that which is produced artificially rather than by a natural process, and so lacks authenticity or genuineness.

A theory, used cautiously, is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena. It comprises explanatory statements, accepted principles and methods of analysis; in mathematics, a theory is a set of theorems forming a systematic view of a branch of the subject. The word can also mean a belief or principle that guides action or assists comprehension or judgement, an ascription based on limited information or knowledge, or mere conjecture and speculative assumption. ‘Theoretical’ accordingly means restricted to theory rather than practice, as in theoretical physics, or given to speculative theorizing. A theorem, by contrast, is a proposition that has been or is to be proved from explicit assumptions; its worth is measured by theoretical assessment rather than practical consideration.

Looking back a century, one can see a striking degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. No less striking is the apparent obscurity and abstruseness of those concerns, which seem at first glance far removed from the great debates of previous centuries between ‘realism’ and ‘idealism’, say, or ‘rationalism’ and ‘empiricism’.

Thus, no matter what the current debate or discussion, the central issue often concerns conceptual and contentual representation, for one who is without concepts is without ideas. What is it that makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding? The philosophical task is to demystify this power, and to relate it to what we know of ourselves and the world.

Contributions to this study include the theory of ‘speech acts’ and the investigation of communication, especially the relationships between words and ‘ideas’ and between words and the ‘world’. Content is what an utterance or sentence expresses: the proposition or claim made about the world. By extension, the content of a predicate, any expression that combines with one or more singular terms to make a sentence, is the condition that the entities referred to may satisfy, in which case the resulting sentence will be true. Consequently we may think of a predicate as a function from things to sentences or even to truth-values, and similarly for other sub-sentential components that contribute to the content of sentences containing them. The nature of content is the central concern of the philosophy of language.

What a person expresses by a sentence often depends on the environment in which he or she is placed. Consider, for example, the disease that may be referred to by a term like ‘arthritis’, or the kind of tree referred to by a term like ‘maple’, of which, horticulturally, I may know next to nothing. This raises the possibility of imagining two persons in quite different environments, to each of whom everything nonetheless appears the same. The wide content of their thoughts and sayings will differ if their surrounding situations are appropriately different: ‘situation’ may here include the actual objects they perceive, the chemical or physical kinds of object in the world they inhabit, the history of their words, or the decisions of authorities on what counts as an example of the terms they use. The narrow content is that part of their thought that remains identical, through the identity of the way things appear, whatever these differences of surroundings. Partisans of wide (or ‘broad’) content may doubt whether any content is, in this sense, narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being narrow content plus context.

That rationality characterizes people is a common assumption, and the most evident display of our rationality is our capacity to think: the rehearsal in the mind of what to say or what to do. Not all thinking is verbal, since chess players, composers and painters all think, and there is no deductive reason that their deliberations should take any more verbal a form than their actions. It is perennially tempting to conceive of this activity as the presence in the mind of elements of some language, or other medium that represents aspects of the world. Nevertheless, this model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or of sensations, intentions or beliefs, actually play in our social lives, in order to undermine the Cartesian picture of them as descriptions of the goings-on in an inner theatre of which the subject is the lone spectator. The passages that have become known as the ‘rule-following’ considerations and the ‘private language argument’ are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.

A related hypothesis, especially associated with Jerry Fodor (1935-), known for his resolute realism about the nature of mental functioning, is that thinking occurs in a language different from one’s ordinary native language, but underlying and explaining our competence with it. The idea is a development of the notion of an innate universal grammar (Chomsky): just as a computer’s surface behaviour is explained by the linguistically complex sets of instructions it executes, so our linguistic competence might be explained by an underlying language of thought.

As an explanation of ordinary language-learning and competence, the hypothesis has not found universal favour, since it explains our representational powers only by invoking an innate language whose own powers are left as a mysterious biological given. An alternative is the view that everyday attributions of intentionality, belief and meaning to other persons proceed by means of the tacit use of a theory that enables one to construct these interpretations as explanations of their doings. This view is commonly held along with ‘functionalism’, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending upon which feature of theories is being stressed. We may think of theories as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.

The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which we can couch this theory, as the child learns simultaneously the minds of others and the meaning of terms in its native language, is not gained by the tactic use of a ‘theory', enabling ‘us' to imply what thoughts or intentions explain their actions, but by realizing the situation ‘in their shoes' or from their point of view, and by that understanding what they experienced and theory, and therefore expressed. We achieve understanding others when we can ourselves deliberate as they did, and hear their words as if they are our own. The suggestion is a modern development usually associated in the ‘Verstehen' traditions of Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).

We may call any process of drawing a conclusion from a set of premises a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning, otherwise pure or theoretical reasoning. Evidently, such processes may be good or bad, if they are good, the premises support or even entail the conclusion drawn, and if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overly of the forms logicians identify. Partly, we are concerned to draw conclusions that ‘go beyond' our premises, in the way that conclusions of logically valid arguments do not for the process of using evidence to reach a wider conclusion. However, such anticipatory pessimism about the prospects of conformation theory, denying that we can assess the results of abduction as to probability. A process of reasoning in which a conclusion is drawn from a set of premises usually confined to cases in which the conclusions are supposed in following from the premises, i.e., the inference is logically valid, in that of deductibility in a logically defined syntactic premise but without there being to any reference to the intended interpretation of its theory. Moreover, as we reason we use an indefinite mode or commonsense set of presuppositions about what it is likely or not a task of an automated reasoning project, which is to mimic this causal use of knowledge of the way of the world in computer programs.

Some ‘theories' usually emerge as an indirect design of [supposed] truths that are not organized, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing a theory, one in which tries to select from among the supposed truths a small number from which they can see all others to be deductively inferable. This makes the theory moderately tractable since, in a sense, we have contained all truths in those few. In a theory so organized, we have called the few truths from which we have deductively inferred all others ‘axioms'. David Hilbert (1862-1943) had argued that, just as algebraic and differential equations, which we were used to study mathematical and physical processes, could they be made mathematical objects, so axiomatic theories, like algebraic and differential equations, which are means to representing physical processes and mathematical structures could be investigation.

According to theory, the philosophy of science, is a generalization or set referring to unobservable entities, e.g., atoms, genes, quarks, unconscious wishes. The ideal gas law, for example, refers only to such observables as pressure, temperature, and volume, the ‘molecular-kinetic theory' refers to molecules and their properties, . . . although an older usage suggests the lack of adequate evidence in support of it (merely a theory), current philosophical usage does indeed follow in the tradition (as in Leibniz, 1704), as many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few in that there are many for being aptly controlling of disciplinary principles. These principles were taken to be either metaphysically prior or

or epistemologically prior or both. In the first sense, they we took to be entities of such a nature that what exists s ‘caused' by them. When we took the principles as epistemologically prior, that is, as ‘axioms', we took them to be either epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or again, included ‘or', to such that all truths so indeed follow from them (by deductive inferences). Gödel (1984) showed in the spirit of Hilbert, treating axiomatic theories as themselves mathematical objects that mathematics, and even a small part of mathematics, elementary number theory, could not be axiomatized, that more precisely, any class of axioms that is such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture in of the truths.

The notion of truth occurs with remarkable frequency in our reflections on language, thought and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. To assess the plausibility of such theses, and to refine them and to explain why they hold (if they do), we require some view of what truth be a theory that would account for its properties and its relations to other matters. Thus, there can be little prospect of understanding our most important faculties in the sentence of a good theory of truth

Ideally, in theory, imagination is a concept of reason that is transcendent but nonempirical: the conception of an ideal thought that exists potentially or actually in the mind as a product exclusively of the mental act. In the philosophy of Plato, an Idea is an archetype of which each corresponding being in phenomenal reality is an imperfect replica; in Hegel, the Idea is absolute truth, the conception and ultimate product of reason (an ‘idea' in the looser sense being merely a mental image of something remembered).

Conceivably, imagination is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed can confront and deal with reality by using the creative powers of the mind. Fantasy, by contrast, is characteristically well removed from reality, and the ascendancy of fantasy over reason is a degree of insanity; fancy, still a product of the imagination given free rein, differs in that one remains in command of one's fancy, while it is exactly the mark of the neurotic that his very own fantasy possesses him.

A fact belongs to the totality of things possessing actuality, existence or essence: something that exists objectively, a real occurrence or event, as when one must prove the facts of the case; something believed to be true or real, determinable by evidence. The usage in the sense ‘allegation of fact', as in ‘the facts are in dispute' or ‘we may never know the facts of the case', may occasion qualms among critics who insist that facts can only be true, but such usages are often useful for emphasis. Related but contrasting terms deserve notice: the literature of ‘faction' treats real people or events as if they were fictional, or uses real people or events as essential elements in an otherwise fictional rendition; the ‘factious' is that which is given to or promotes internal dissension; and the ‘factitious' is that which is produced artificially rather than by a natural process, lacking authenticity or genuineness.

A theory, then, is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena. The term covers a body of explanatory statements, accepted principles, and methods of analysis; a set of theorems that form a systematic view of a branch of mathematics; or, beyond the paradigms of science, a belief or principle that guides action or assists comprehension or judgement. In looser usage a theory is an ascription based on limited information or knowledge, a conjecture, a speculative assumption. ‘Theoretical' accordingly means relating to or based on conjecture, restricted to theory rather than practice (as in ‘theoretical physics'), or given to speculative theorizing. In mathematics, a theorem is a proposition that has been or is to be proved from explicit assumptions, and the value of a theory is measured primarily by theoretical assessment rather than practical considerations.

Looking back a century, one can see a striking degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. More striking still is the apparent obscurity and abstruseness of those concerns, which seem at first glance far removed from the great debates of previous centuries, between ‘realism' and ‘idealism', say, or between ‘rationalism' and ‘empiricism'.

Thus, no matter what the current debate or discussion, the central issue often concerns conceptual and contentual representation: to be without a concept is to be without an idea, and with that one confronts the underlying paradox of why there is something instead of nothing. Something makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding; the philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world.

Contributions to this study include the theory of ‘speech acts' and the investigation of communication, especially the relationship between words and ‘ideas', and between words and the ‘world'. The content is what an utterance or sentence expresses: the proposition or claim made about the world. By extension, the content of a predicate, that is, of any expression apt to combine with one or more singular terms to make a sentence, is the condition that the entities referred to may satisfy, in which case the resulting sentence will be true. Consequently we may think of a predicate as a function from things to sentences or even to truth-values, and similarly for other sub-sentential components that contribute to the content of sentences containing them. The nature of content is the central concern of the philosophy of language.

What a person expresses by a sentence often depends on the environment in which he or she is placed. For example, the disease referred to by a term like ‘arthritis', or the kind of tree referred to by ‘maple', is fixed by facts of which the speaker may know next to nothing. This raises the possibility of imagining two persons in different environments, to each of whom everything nonetheless appears the same. The wide content of their thoughts and sayings will be different if the surrounding situation is appropriately different: ‘situation' may here include the actual objects they perceive, the chemical or physical kinds of objects in the world they inhabit, the history of their words, or the decisions of authorities on what counts as an example of the terms they use. The narrow content is that part of their thought which remains identical, through the identity of the way things appear, regardless of these differences of surroundings. Partisans of wide (or broad) content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, wide content being narrow content plus context.

All in all, it is common to characterize people by assuming their rationality, and the most evident display of our rationality is our capacity to think: the rehearsal in the mind of what to say or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is perennially tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world. Nevertheless, the model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or of sensations, intentions, or beliefs, actually play in our social lives, in order to undermine the Cartesian picture of the mind as an inner theatre of which the subject is the lone spectator. Passages that have subsequently become known as the ‘rule-following' considerations and the ‘private language argument' are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.

Effectively, the hypothesis especially associated with Jerry Fodor (1935-), who is known for his resolute realism about the nature of mental functioning, is that thinking occurs in a ‘language of thought' different from one's ordinary native language, but underlying and explaining our competence with it. The idea is a development of Chomsky's notion of an innate universal grammar: just as a computer program is a linguistically complex set of instructions whose execution explains the surface behaviour of the machine, so, on this hypothesis, an inner code underlies and explains our intuitive thinking and our surface linguistic competence.

As an explanation of ordinary language-learning and competence, the hypothesis has not found universal favour, since it apparently explains the learning person's representational capabilities only by invoking an innate language whose own powers are mysteriously a biological given. An alternative is the view that everyday attributions of intentionality, belief, and meaning to other persons proceed by means of a tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with ‘functionalism', according to which psychological states are theoretical entities, identified by the network of their causes and effects. This ‘theory-theory' has different implications, depending upon which feature of theories is being stressed. We may think of theories as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.

The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which we can couch the theory, since the child learns simultaneously about the minds of others and the meaning of terms in its native language. On the rival suggestion, understanding others is not gained by the tacit use of a ‘theory' enabling us to infer what thoughts or intentions explain their actions, but by putting ourselves ‘in their shoes', or seeing things from their point of view, and thereby understanding what they experienced and therefore expressed. We achieve understanding of others when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the ‘Verstehen' tradition associated with Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).

We may call any process of drawing a conclusion from a set of premises a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise, pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, this is because we are often concerned to draw conclusions that ‘go beyond' our premises in the way that the conclusions of logically valid arguments do not: induction is the process of using evidence to reach such a wider conclusion, and abduction is inference to the best explanation of the evidence. Pessimism about the prospects of confirmation theory would deny that we can assess the results of such inferences in terms of probability. An inference, strictly, is a process of reasoning in which a conclusion is drawn from a set of premises, usually confined to cases in which the conclusion is supposed to follow from the premises, i.e., in which the inference is logically valid; deducibility can be defined in purely syntactic terms, without any reference to the intended interpretation of the theory. Moreover, as we reason we use an indefinite, commonsense set of presuppositions about what is likely or not; a task of automated-reasoning projects is to mimic this casual use of knowledge of the way of the world in computer programs.
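The contrast drawn above between valid and invalid inference can be made concrete with a small truth-table checker. The sketch below is purely illustrative (the function name and example arguments are my own, not the text's): an argument is semantically valid just in case every valuation making all the premises true also makes the conclusion true.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force semantic entailment over propositional atoms.

    `premises` and `conclusion` are functions from a valuation
    (a dict mapping atom name to bool) to bool.  The argument is
    valid iff no valuation makes all premises true and the
    conclusion false.
    """
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample valuation found
    return True

# Modus ponens: from p and (p -> q), infer q.  Valid.
valid = entails(
    [lambda v: v["p"], lambda v: (not v["p"]) or v["q"]],
    lambda v: v["q"],
    ["p", "q"],
)

# Affirming the consequent: from q and (p -> q), infer p.  Invalid.
invalid = entails(
    [lambda v: v["q"], lambda v: (not v["p"]) or v["q"]],
    lambda v: v["p"],
    ["p", "q"],
)
```

Note that this exhaustive check is exactly what little human reasoning overtly does; it captures validity, not the ampliative inferences (induction, abduction) discussed above.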

A theory, in the philosophy of science, is a generalization or set of generalizations purportedly referring to unobservable entities, e.g., atoms, genes, quarks, unconscious wishes. The ideal gas law, for example, refers only to such observables as pressure, temperature, and volume, whereas the ‘molecular-kinetic theory' refers to molecules and their properties. Although an older usage suggests a lack of adequate evidence in support of what is so called (‘merely a theory'), current scientific and philosophical usage does not carry that connotation; Einstein's special theory of relativity, for example, is considered extremely well founded.
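The ideal gas law mentioned above can be written entirely in terms of observables, which is the contrast being drawn with the molecular-kinetic theory. A minimal sketch, assuming SI units and the standard form pV = nRT (the function name is my own):

```python
# Ideal gas law: p * V = n * R * T, relating only observable
# quantities (pressure, volume, amount, temperature); no mention
# of molecules is needed to state or apply it.
R = 8.314  # molar gas constant, J/(mol*K)

def pressure(n_moles, temperature_k, volume_m3):
    """Pressure in pascals of an ideal gas, from observables only."""
    return n_moles * R * temperature_k / volume_m3

# One mole at 273.15 K in 0.0224 m^3 comes out near one atmosphere.
p = pressure(1.0, 273.15, 0.0224)
```

The molecular-kinetic theory then explains why this observational regularity holds, by positing unobservable molecules whose mean kinetic energy corresponds to temperature.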

There are two main views on the nature of theories. According to the ‘received view', theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). Theories usually emerge as bodies of [supposed] truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an ideal for organizing a theory (Hilbert, 1970): one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called ‘axioms'. David Hilbert (1862-1943) argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made objects of mathematical investigation, so could axiomatic theories, which are means of representing physical processes and mathematical structures.
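The organizing idea of the axiomatic method, that a few axioms ‘contain' all the theorems in the sense that everything else is deductively inferable from them, can be sketched mechanically. The toy forward-chaining closure below is an illustration under my own assumptions (atomic statements and Horn-style rules standing in for a real proof system), not a rendering of any particular formal theory:

```python
def deductive_closure(axioms, rules):
    """Forward-chain over propositional Horn-style rules.

    `axioms` is a set of atomic statements taken as given;
    `rules` is a list of (premises, conclusion) pairs.  Returns
    every statement deducible from the axioms, the sense in
    which the few axioms contain all the theorems.
    """
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in theorems and all(
                p in theorems for p in premises
            ):
                theorems.add(conclusion)
                changed = True
    return theorems

# Toy theory: two axioms and two rules of inference.
axioms = {"A", "B"}
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
theorems = deductive_closure(axioms, rules)
```

Making the axiomatized theory itself an object of study, as Hilbert proposed, means treating sets like `theorems` and the process generating them as mathematical objects in their own right.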

In the tradition (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused' by them. When the principles were taken as epistemologically prior, that is, as ‘axioms', they were taken to be either epistemologically privileged, i.e., self-evident, not needing to be demonstrated, or (inclusively) to be such that all truths do indeed follow from them by deductive inferences. Gödel (1931), in the spirit of Hilbert's treatment of axiomatic theories as themselves mathematical objects, showed that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.
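The claim about effectively decidable axiom classes is Gödel's first incompleteness theorem. In a standard modern schematic form (my formulation, not the text's own notation):

```latex
% Goedel's first incompleteness theorem, schematically:
% if T is a consistent, effectively axiomatized theory extending
% elementary arithmetic, then there is a sentence G_T with
\exists\, G_T \;:\; T \nvdash G_T
  \quad\text{and}\quad
  T \nvdash \neg G_T .
```

Since the undecided sentence G_T (or its negation) is a truth of the intended domain, the decidable axiom class is indeed ‘too small to capture all of the truths'.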

The notion of truth occurs with remarkable frequency in our reflections on language, thought, and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to a conclusion is the mark of valid reasoning, that we should not regard moral pronouncements as objectively true, and so on. To assess the plausibility of such theses, and to refine them and to explain why they hold (if they do), we require some view of what truth is: a theory that would account for its properties and its relations to other matters. Thus, there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.

Such a theory, however, has been notoriously elusive. The ancient idea that truth is some sort of ‘correspondence with reality' has still never been articulated satisfactorily: the nature of the alleged ‘correspondence' and the alleged ‘reality' remains objectionably obscure. Yet the familiar alternative suggestions, that true beliefs are those that are ‘mutually coherent', or ‘pragmatically useful', or ‘verifiable in suitable conditions', have each been confronted with persuasive counterexamples. A twentieth-century departure from these traditional analyses is the view that truth is not a property at all: that the syntactic form of the predicate ‘is true' distorts its real semantic character, which is not to describe propositions but to endorse them. Nevertheless, this radical approach also faces difficulties and suggests, counterintuitively, that truth cannot have the vital theoretical role in semantics, epistemology and elsewhere that we are naturally inclined to give it. Thus, truth threatens to remain one of the most enigmatic of notions: an explicit account of it can seem essential yet beyond our reach. However, recent work provides some grounds for optimism.

The belief that snow is white owes its truth to a certain feature of the external world, namely, to the fact that snow is white. Similarly, the belief that dogs bark is true because of the fact that dogs bark. This trivial observation leads to what is perhaps the most natural and popular account of truth, the ‘correspondence theory', according to which a belief (statement, sentence, proposition, etc.) is true just in case there exists a fact corresponding to it (Wittgenstein, 1922). This thesis is unexceptionable just as it stands. However, if it is to provide a rigorous, substantial and complete theory of truth, if it is to be more than merely a picturesque way of asserting all equivalences of the form:

The belief that p is true if and only if p, then it must be supplemented with accounts of what facts are, and of what it is for a belief to correspond to a fact; and these are the problems on which the correspondence theory of truth has foundered. For one thing, it is far from clear that reducing ‘the belief that snow is white is true' to ‘the fact that snow is white exists' achieves any significant gain in understanding: these expressions seem equally resistant to analysis and too close in meaning for one to provide an illuminating account of the other. In addition, the general relationship that holds in particular between the belief that snow is white and the fact that snow is white, between the belief that dogs bark and the fact that dogs bark, and so on, is very hard to identify. The best attempt to date is Wittgenstein's (1922) so-called ‘picture theory', under which an elementary proposition is a configuration of terms, and an atomic fact a configuration of simple objects; an atomic fact corresponds to an elementary proposition (and makes it true) when their configurations are identical and the terms in the proposition refer to the similarly placed objects in the fact; and the truth value of each complex proposition is entailed by the truth values of the elementary ones. However, even if this account is correct as far as it goes, it would need to be completed with plausible theories of ‘logical configuration', ‘elementary proposition', ‘reference' and ‘entailment', none of which is easy to come by. A central characteristic of truth that any adequate theory must explain is that when a proposition satisfies its ‘conditions of proof or verification', then it is regarded as true. To the extent that the property of corresponding with reality is mysterious, we are going to find it impossible to see why what we take to verify a proposition should indicate the possession of that property.
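The correspondence idea can be given a toy formalization: a sentence is true just in case a matching fact obtains in the world. The model below is an invented illustration (the subject-predicate encoding and the sample ‘world' are my own assumptions), useful only for showing the shape of the proposal, not for resolving what facts or correspondence really are:

```python
# A toy 'correspondence' model: the world is a set of atomic
# facts, and a sentence is true iff its corresponding fact is
# among them.  Both the encoding and the facts are illustrative.
world_facts = {("snow", "white"), ("dogs", "bark")}

def is_true(subject, predicate):
    """A sentence corresponds to reality iff its fact obtains."""
    return (subject, predicate) in world_facts

snow_white = is_true("snow", "white")   # the fact obtains
snow_green = is_true("snow", "green")   # no such fact
```

The philosophical problems noted above reappear immediately: the model simply stipulates what a ‘fact' is and what ‘correspondence' amounts to, which is exactly what the correspondence theory owes us.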
Therefore, a tempting alternative to the correspondence theory, an alternative that eschews obscure metaphysical concepts and explains quite straightforwardly why verifiability implies truth, is simply to identify truth with verifiability (Peirce, 1932). This idea can take various forms. One version involves the further assumption that verification is ‘holistic', i.e., that a belief is justified (i.e., supported by evidence of its truth) when it is part of an entire system of beliefs that is consistent and ‘harmonious' (Bradley, 1914, and Hempel, 1935). This is known as the ‘coherence theory of truth'. Another version involves the assumption that there is, associated with each proposition, some specific procedure for finding out whether one should believe it or not. On this account, to say that a proposition is true is to say that the appropriate procedure would verify it (Dummett, 1979, and Putnam, 1981). In mathematics this amounts to the identification of truth with provability.

The attractions of the verificationist account of truth are that it is refreshingly clear compared with the correspondence theory, and that it succeeds in connecting truth with verification. The trouble is that the bond it postulates between these notions is implausibly strong. We do indeed take verification to indicate truth, but we also recognize the possibility that a proposition may be false in spite of there being impeccable reasons to believe it, and that a proposition may be true although we are not able to discover that it is. Verifiability and truth are no doubt highly correlated, but surely not the same thing.

Another well-known account of truth is ‘pragmatism' (James, 1909; Papineau, 1987). As we have just seen, the verificationist selects a prominent property of truth and takes it to be the essence of truth. Similarly, the pragmatist focuses on another important characteristic, namely that true beliefs are a good basis for action, and takes this to be the very nature of truth. True assumptions are said to be, by definition, those that provoke actions with desirable results. Again, we have an account with a single attractive explanatory feature; but again, the bond it postulates between truth and its alleged analysans, in this case utility, is implausibly close. Granted, true beliefs tend to foster success, but it happens regularly that actions based on true beliefs lead to disaster, while false assumptions, by pure chance, produce wonderful results.

One of the few uncontroversial facts about truth is that the proposition that snow is white is true if and only if snow is white, the proposition that lying is wrong is true if and only if lying is wrong, and so on. Traditional theories acknowledge this fact but regard it as insufficient and, as we have seen, inflate it with some further principle of the form ‘X is true if and only if X has property P' (such as corresponding to reality, verifiability, or being suitable as a basis for action), which is supposed to specify what truth is. Some radical alternatives to the traditional theories result from denying the need for any such further specification (Ramsey, 1927; Strawson, 1950; Quine, 1990). For example, one might suppose that the basic theory of truth contains nothing more than equivalences of the form ‘The proposition that p is true if and only if p' (Horwich, 1990).
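The equivalence schema can be made vivid with a toy model: represent a ‘world' as a set of atomic facts, and define a truth predicate over quoted sentences by disquotation. A minimal sketch, in which the sample facts and the helper names are illustrative assumptions, not part of any cited theory:

```python
# Illustrative sketch only: a toy "world" as a set of atomic facts,
# with a disquotational truth predicate defined over quoted sentences.
facts = {"snow is white", "dogs bark"}

def holds(p):
    """Whether the state of affairs p obtains in the toy world."""
    return p in facts

def is_true(p):
    """Deflationary truth predicate: the proposition that p is true iff p."""
    return holds(p)

# Every instance of the schema "the proposition that p is true iff p"
# comes out right, for truths and falsehoods alike:
for p in ("snow is white", "dogs bark", "lying is wrong"):
    assert is_true(p) == holds(p)
```

The point of the sketch is only that the truth predicate adds no substance beyond the facts themselves: `is_true` simply hands the question back to the world.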

This sort of proposal is best presented together with an account of the ‘raison d'ĂȘtre' of our notion of truth, namely that it enables us to express attitudes toward propositions we can designate but not explicitly formulate. Suppose, for example, you are told that Einstein's last words expressed a claim about physics, an area in which you think he was very reliable. Suppose that, unknown to you, his claim was the proposition that quantum mechanics is wrong. What conclusion can you draw? Exactly which proposition becomes the appropriate object of your belief? Surely not that quantum mechanics is wrong, because you are not aware that that is what he said. What is needed is something equivalent to the infinite conjunction:

If what Einstein said was that E = mc2, then E = mc2; and if what he said was that quantum mechanics is wrong, then quantum mechanics is wrong; and so on.

That is, a proposition K with the following property: from K and any further premise of the form ‘Einstein's claim was the proposition that p', you can infer ‘p', whatever it is. Now suppose, as the deflationist says, that our understanding of the truth predicate consists in the stipulative decision to accept any instance of the schema ‘The proposition that p is true if and only if p'. Then your problem is solved. For if K is the proposition ‘Einstein's claim is true', it will have precisely the inferential power needed. From it and ‘Einstein's claim is the proposition that quantum mechanics is wrong', you can use Leibniz's law to infer ‘The proposition that quantum mechanics is wrong is true', which, given the relevant axiom of the deflationary theory, allows you to derive ‘Quantum mechanics is wrong'. Thus, one point in favour of the deflationary theory is that it squares with a plausible story about the function of our notion of truth: its axioms explain that function without the need for any further analysis of what truth is.
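The derivation just sketched can be written out step by step; the numbering and parenthetical labels are ours, added for exposition:

```latex
\begin{enumerate}
  \item Einstein's claim is true. \hfill (the premise $K$)
  \item Einstein's claim $=$ the proposition that quantum mechanics is wrong.
        \hfill (what you learn)
  \item The proposition that quantum mechanics is wrong is true.
        \hfill (1, 2, Leibniz's law)
  \item The proposition that $p$ is true if and only if $p$.
        \hfill (equivalence schema)
  \item Quantum mechanics is wrong.
        \hfill (3, 4, with $p$ = ``quantum mechanics is wrong'')
\end{enumerate}
```

Step 3 is the one that requires truth to behave as a genuine property (so that Leibniz's law applies), which is exactly the point pressed against the redundancy theory below.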

Not all variants of deflationism have this virtue. According to the redundancy theory of truth (also known as the performative, or minimalist, view), the pair of sentences ‘The proposition that p is true' and plain ‘p' have the same meaning and express the same statement as one another, so it is a syntactic illusion to think that ‘p is true' attributes any sort of property to a proposition (Ramsey, 1927; Strawson, 1950). On this view, however, it becomes hard to explain why we are entitled to infer ‘The proposition that quantum mechanics is wrong is true' from ‘Einstein's claim is the proposition that quantum mechanics is wrong' and ‘Einstein's claim is true'. For if truth is not a property, then we can no longer account for the inference by invoking the law that if X is identical with Y then any property of X is a property of Y, and vice versa. Thus the redundancy/performative theory, by identifying rather than merely correlating the contents of ‘The proposition that p is true' and ‘p', precludes the prospect of a good explanation of one of truth's most significant and useful characteristics. It is better, therefore, to restrict ourselves to the weaker claim embodied in the equivalence schema: the proposition that p is true if and only if p.

Support for deflationism depends upon the possibility of showing that its axioms, instances of the equivalence schema unsupplemented by any further analysis, will suffice to explain all the central facts about truth: for example, that the verification of a proposition indicates its truth, and that true beliefs have a practical value. The first of these facts follows trivially from the deflationary axioms, for given our knowledge of the equivalence of ‘p' and ‘The proposition that p is true', any reason to believe that p becomes an equally good reason to believe that the proposition that p is true. The second fact can also be explained in terms of the deflationary axioms, though not quite so easily. Consider, to begin with, beliefs of the form: (B) If I perform the act A, then my desires will be fulfilled. Notice that the psychological role of such a belief is, roughly, to cause the performance of A. In other words, given that I do have belief (B), typically I will perform the act A. Notice also that when the belief is true then, given the deflationary axioms, the performance of A will in fact lead to the fulfilment of one's desires; that is, if (B) is true, then if I perform A, my desires will be fulfilled. Therefore, if (B) is true, then my desires will be fulfilled. So it is quite reasonable to value the truth of beliefs of that form. Moreover, such beliefs are derived by inference from other beliefs and can be expected to be true if those other beliefs are true. So it is reasonable to value the truth of any belief that might be used in such an inference.
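The practical-value argument of this paragraph can be compressed into a short derivation (the numbering is ours):

```latex
\begin{enumerate}
  \item $(B)$: If I perform act $A$, then my desires will be fulfilled.
  \item Suppose $(B)$ is true.
  \item Then, by the equivalence schema: if I perform $A$,
        my desires will be fulfilled. \hfill (from 2)
  \item Having belief $(B)$ typically causes me to perform $A$.
        \hfill (psychological role of $(B)$)
  \item Therefore, if $(B)$ is true and believed, my desires
        will be fulfilled. \hfill (3, 4)
\end{enumerate}
```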

To the extent that such deflationary accounts can be given of all the facts involving truth, the collection of statements like ‘The proposition that snow is white is true if and only if snow is white' will meet the explanatory demands on a theory of truth, and the sense that we need some deep analysis of truth will be undermined.

Nonetheless, there are several strongly felt objections to deflationism. One reason for dissatisfaction is that the theory has infinitely many axioms, and therefore cannot be completely written down. It can be described (as the theory whose axioms are the propositions of the form ‘p if and only if it is true that p'), but not explicitly formulated. This alleged defect has led some philosophers to develop theories that show, first, how the truth of any proposition derives from the referential properties of its constituents, and second, how the referential properties of primitive constituents are determined (Tarski, 1943; Davidson, 1969). However, it remains controversial that all propositions, including belief attributions, laws of nature and counterfactual conditionals, depend for their truth values on what their constituents refer to. Moreover, there is no immediate prospect of a decent, finite theory of reference, so it is far from clear that the infinite, list-like character of deflationism can be avoided.

Another source of dissatisfaction with this theory is that certain instances of the equivalence schema are clearly false. Consider:

(a) THE PROPOSITION EXPRESSED BY THE SENTENCE

IN CAPITAL LETTERS IS NOT TRUE.

Substituting this into the schema, one gets a version of the ‘liar' paradox. Specifically:

(b) The proposition that the proposition expressed by the sentence in capital letters is not true is true if and only if the proposition expressed by the sentence in capital letters is not true, from which a contradiction is easily derivable. (Given (b), the supposition that (a) is true implies that (a) is not true, and the supposition that it is not true implies that it is.) Consequently, not every instance of the equivalence schema can be included in the theory of truth; but it is no simple matter to specify the ones to be excluded. In "Naming and Necessity" (1980), Kripke gave the classic modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of an original episode of attaching a name to its subject. Of course, deflationism is far from alone in having to confront this problem.
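Writing a for the proposition expressed by the capitalized sentence, the contradiction can be derived in a few lines (the layout is ours):

```latex
\begin{enumerate}
  \item $a$ is the proposition that $a$ is not true.
        \hfill (what sentence (a) says)
  \item $a$ is true if and only if $a$ is not true.
        \hfill (instance of the equivalence schema)
  \item Suppose $a$ is true; then by (2), $a$ is not true.
  \item Suppose $a$ is not true; then by (2), $a$ is true.
  \item Either way, $a$ is both true and not true: contradiction.
\end{enumerate}
```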

A third objection to the version of the deflationary theory presented here concerns its reliance on ‘propositions' as the basic vehicles of truth. It is widely felt that the notion of the proposition is defective and should not be employed in semantics. If this point of view is accepted, then the natural deflationary reaction is to attempt a reformulation that would appeal only to sentences, for example: ‘p' is true if and only if p.

Nevertheless, this so-called ‘disquotational theory of truth' (Quine, 1990) has trouble with indexicals, demonstratives and other terms whose referents vary with the context of use. It is not the case, for example, that every instance of ‘I am hungry' is true if and only if I am hungry. There is no simple way of modifying the disquotational schema to accommodate this problem. A possible way out of these difficulties is to resist the critique of propositions. Such entities may exhibit an unwelcome degree of indeterminacy, and may defy reduction to familiar items; however, they do offer a plausible account of belief (as a relation to a proposition), and, in ordinary language at least, we do indeed take them to be the primary bearers of truth. To believe a proposition is to hold it to be true. The philosophical problems include discovering whether belief differs from other varieties of assent, such as ‘acceptance'; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals can properly be said to have beliefs.

Additionally, it is commonly supposed that problems about the nature of truth are intimately bound up with questions about the accessibility and autonomy of facts in various domains: questions about whether the facts can be known, and whether they can exist independently of our capacity to discover them (Dummett, 1978; Putnam, 1981). One might reason, for example, that if ‘T is true' means nothing more than ‘T will be verified', then certain forms of scepticism (specifically, those that doubt the correctness of our methods of verification) will be precluded, and the facts will have been revealed as dependent on human practices. Alternatively, one might say that if truth were an inexplicable, primitive, non-epistemic property, then the fact that T is true would be completely independent of us. Moreover, we could, in that case, have no reason to assume that the propositions we believe actually have this property, so scepticism would be unavoidable. In a similar vein, one might think it a special, and perhaps undesirable, feature of the deflationary approach that it deprives truth of such metaphysical or epistemological implications.

On closer scrutiny, however, it is far from clear that there exists any account of truth with consequences regarding the accessibility or autonomy of non-semantic matters. For although we may expect an account of truth to have such implications for facts of the form ‘T is true', we cannot assume without further argument that the same conclusions will apply to the fact T. For it cannot be assumed that ‘T' and ‘T is true' are equivalent to one another, given the account of ‘true' that is being employed. Of course, if truth is defined in the way the deflationist proposes, then the equivalence holds by definition. However, if truth is defined by reference to some metaphysical or epistemological characteristic, then the equivalence schema is thrown into doubt, pending some demonstration that the truth predicate, in the sense assumed, will satisfy it. In so far as there are thought to be epistemological problems hanging over T that do not threaten ‘T is true', giving the needed demonstration will be difficult. Similarly, if ‘truth' is so defined that the fact T is felt to be more, or less, independent of human practices than the fact that T is true, then again it is unclear that the equivalence schema will hold. It seems, therefore, that the attempt to base epistemological or metaphysical conclusions on a theory of truth must fail, because any such attempt will simultaneously rely on and undermine the equivalence schema.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein (1889-1951), and is a leading idea of Davidson (1917-). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.

The conception of meaning as truth-conditions need not, and should not, be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is accounted for by a difference in their truth-conditions. Most basically, the truth-condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth-condition can only be defined by repeating the very same statement: the truth-condition of ‘snow is white' is that snow is white; the truth-condition of ‘Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth-conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
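One way to make the truth-conditional picture concrete is a miniature compositional semantics: each sentence is paired with the condition under which it is true, and its truth-condition is modeled as the set of worlds in which it holds. Everything below (the tuple representation, the connectives chosen, the sample world) is an illustrative assumption, not a reconstruction of Frege, Wittgenstein, or Davidson:

```python
# A toy truth-conditional semantics: sentences are nested tuples,
# worlds are dicts of atomic facts, and a sentence's truth-condition
# is modeled as the set of worlds in which it evaluates to True.

def evaluate(sentence, world):
    """Return the truth value of `sentence` in `world`."""
    op = sentence[0]
    if op == "atom":                      # e.g. ("atom", "snow is white")
        return world[sentence[1]]
    if op == "not":
        return not evaluate(sentence[1], world)
    if op == "and":
        return evaluate(sentence[1], world) and evaluate(sentence[2], world)
    if op == "or":
        return evaluate(sentence[1], world) or evaluate(sentence[2], world)
    raise ValueError("unknown connective: " + op)

def truth_condition(sentence, worlds):
    """The truth-condition, modeled as the worlds where the sentence is true."""
    return [w for w in worlds if evaluate(sentence, w)]

world = {"snow is white": True, "dogs bark": True, "lying is right": False}
s = ("and", ("atom", "snow is white"), ("not", ("atom", "lying is right")))
assert evaluate(s, world) is True
```

The ‘running-on-the-spot' worry shows up here too: the evaluator fixes the contribution of the connectives, but for the atoms it can only hand the question back to the world, just as the truth-condition of ‘snow is white' can only be given as: that snow is white.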

Language is whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world. Contributions to the study include the theory of ‘speech acts' and the investigation of communication and of the relationship between words, ideas and the world. What a person expresses by a sentence often depends on the environment in which he or she is placed. For example, the disease I refer to by a term like ‘arthritis', or the kind of tree I call a ‘birch', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, to each of whom everything nevertheless appears the same; between them they define a space of philosophical problems. Content is the essential component of understanding, and any intelligible proposition that is true can be understood. The content of an utterance or sentence is the proposition or claim it makes about the world; by extension, the content of a predicate or other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is the central concern of the philosophy of language.

The problems include, in particular, the indeterminacy of translation, the inscrutability of reference, predication, rule-following, semantics, and the topics treated under subordinate headings associated with ‘logic'. The loss of confidence in determinate meaning (‘each decoding is another encoding') is an element common both to postmodern uncertainties in the theory of criticism and to the analytic tradition that follows writers such as Quine (1908-). Still, it may be asked: why should we suppose that fundamental epistemic notions should be accounted for in behavioural terms? What grounds are there for assuming that ‘S knows that p' is a matter of the standing of a statement between some subject and some object, between nature and its mirror? The answer is that the only alternative may be to take knowledge of inner states as premises from which our knowledge of other things is normally inferred, and without which that knowledge would be ungrounded. However, it is not really coherent, and does not in the last analysis make sense, to suggest that human knowledge has foundations or grounds. To say that truth and knowledge ‘can only be judged by the standards of our own day' is not to say that they are less important, or more ‘cut off from the world', than we had supposed. It is just to say that nothing counts as justification unless by reference to what we already accept, and that there is no way to get outside our beliefs and our language so as to find some test other than coherence. Nevertheless, professional philosophers have thought it might be otherwise, for only they have been haunted by the spectre of epistemological scepticism.

What Quine opposes as ‘residual Platonism' is not so much the hypostasising of nonphysical entities as the notion of ‘correspondence' with things as the final court of appeal for evaluating present practices. Unfortunately, Quine, in a way that is incompatible with his basic insights, substitutes for this a correspondence to physical entities, and especially to the basic entities, whatever they turn out to be, of physical science. Nevertheless, when the doctrines are purified, they converge on a single claim: that no account of knowledge can depend on the assumption of some privileged relation to reality. Their work brings out why an account of knowledge can amount only to a description of human behaviour.

What, then, is to be said of these ‘inner states', and of the direct reports of them that have played so important a role in traditional epistemology? For a person to feel is nothing else than for him to be able to make a certain type of non-inferential report; to attribute feelings to infants is to acknowledge in them latent abilities of this kind. Non-conceptual, non-linguistic ‘knowledge' of what feelings or sensations are like is attributed on the basis of potential membership of our community. We credit infants and the more attractive animals with feelings on the basis of the spontaneous sympathy that we extend to anything humanoid, in contrast with the mere ‘response to stimuli' attributed to photoelectric cells and to animals about which no one feels sentimental. It is consequently wrong to assume that the moral prohibitions against hurting infants and the better-looking animals are ‘grounded' in their possession of feelings; the relation of dependence is really the other way round. Similarly, we could no more be mistaken in attributing knowledge to a four-year-old child but not to a one-year-old than we could be mistaken in taking the word of a statute that eighteen-year-olds can marry freely but seventeen-year-olds cannot. (There is no more ‘ontological ground' for the distinction that it may suit us to make in the former case than in the latter.) Again, a question such as ‘Are robots conscious?' calls for a decision on our part whether or not to treat robots as members of our linguistic community. All this is of a piece with the insight brought into philosophy by Hegel (1770-1831), that the individual apart from his society is just another animal.

Willard van Orman Quine was the most influential American philosopher of the latter half of the 20th century. After a wartime period in naval intelligence, he punctuated the rest of his career with extensive foreign lecturing and travel. Quine's early work was on mathematical logic, and issued in "A System of Logistic" (1934), "Mathematical Logic" (1940), and "Methods of Logic" (1950), but it was with the collection of papers "From a Logical Point of View" (1953) that his philosophical importance became widely recognized. Quine's work on problems of convention, meaning, and synonymy was cemented by "Word and Object" (1960), in which the indeterminacy of radical translation first takes centre-stage. In this and many subsequent writings Quine takes a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. These ‘intentional idioms' resist smooth incorporation into the scientific world view, and Quine responds with scepticism toward them, not quite endorsing ‘eliminativism', but regarding them as second-rate idioms, unsuitable for describing strict and literal facts. For similar reasons he has consistently expressed suspicion of the logical and philosophical propriety of appeal to logical possibilities and possible worlds. The languages that are properly behaved and suitable for literal and true descriptions of the world are those of mathematics and science. The entities to which our best theories refer must be taken with full seriousness in our ontologies; although an empiricist, Quine thus supposes that science requires the abstract objects of set theory, and that they therefore exist. In the theory of knowledge Quine is associated with a ‘holistic view' of verification, conceiving of a body of knowledge as a web touching experience at the periphery, with each point connected by a network of relations to other points.

Quine is also known for the view that epistemology should be naturalized, or conducted in a scientific spirit, with the object of investigation being the relationship, in human beings, between the inputs of experience and the outputs of belief. Although Quine's approaches to the major problems of philosophy have been attacked as betraying undue ‘scientism' and sometimes ‘behaviourism', the clarity of his vision and the scope of his writing made him the major focus of Anglo-American work of the past forty years in logic, semantics, and epistemology. His other writings include "The Ways of Paradox and Other Essays" (1966), "Ontological Relativity and Other Essays" (1969), "Philosophy of Logic" (1970), "The Roots of Reference" (1974) and "The Time of My Life: An Autobiography" (1985).

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a monster in the garden?

One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than by believing that you have a monster in the garden. Belief also has an influence on action: you will act differently if you believe that you are reading a page than if you believe something about a monster. But perceptual input and action output are insufficient to determine the content of belief; the same stimuli may produce various beliefs, and various beliefs may produce the same action. The role that gives the belief the content it has is the role it plays within a network of relations to other beliefs, the role it plays in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I would from other beliefs, just as I infer that belief from other beliefs.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the representational content it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of beliefs from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief. Strong coherence theories affirm that coherence is the sole determinant of the content of belief.

When we turn from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? Again, there is a distinction between weak and strong coherence theories. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory, and intuition. Strong theories maintain that justification is solely a matter of how a belief coheres with a background system of beliefs. There is, nonetheless, another distinction that cuts across the distinction between weak and strong coherence theories: the distinction between positive and negative coherence theories (Pollock, 1986). A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.
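The positive/negative/strong distinctions can be sketched schematically. Here ‘coheres' is deliberately a placeholder predicate (real coherence theories differ precisely over how to define it), and the three verdict functions are our illustrative assumptions, not any author's analysis:

```python
# Schematic sketch of coherence theories of justification.
# `coheres` is a stand-in criterion chosen only for illustration:
# a belief "coheres" unless its negation sits in the background system.
def coheres(belief, background):
    return ("not " + belief) not in background

def strong_verdict(belief, background):
    """Strong theory: justified if and only if the belief coheres."""
    return coheres(belief, background)

def positive_verdict(belief, background, other_grounds):
    """Positive theory: coherence produces justification;
    other determinants (perception, memory, ...) may also supply it."""
    return coheres(belief, background) or other_grounds

def negative_verdict(belief, background, otherwise_justified):
    """Negative theory: failure to cohere nullifies justification
    that coherence itself never produced."""
    return otherwise_justified and coheres(belief, background)

background = {"the gauge is reliable", "not the liquid is frozen"}
assert strong_verdict("the liquid is 105 degrees", background)
assert not negative_verdict("the liquid is frozen", background, True)
```

The asymmetry the text describes is visible in the last two functions: in `positive_verdict` coherence can generate a positive verdict on its own, while in `negative_verdict` it can only veto one.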

A strong coherence theory of justification is a combination of a positive and a negative theory, telling us that a belief is justified if and only if it coheres with a background system of beliefs. Coherence theories of justification and knowledge have most often been rejected on the ground that they cannot deal with perceptual knowledge (Audi, 1988; Pollock, 1986), and so it will be most appropriate to consider a perceptual example that can serve as a kind of crucial test. Suppose that a person, call her Julie, works with a scientific instrument that gauges the temperature of liquids in a container. The gauge is marked in degrees; she looks at the gauge and sees that the reading is 105 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is 105 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that, though her belief that she sees the shape 105 is immediately justified as direct sensory evidence without appeal to a background system, the belief that the liquid in the container is 105 degrees results from coherence with a background system of beliefs affirming that the gauge on which she reads 105 measures the temperature of the liquid in the container. This weak coherence view, which combines coherence with direct perceptual evidence as the foundation of justification, is one way to account for the justification of our beliefs.

A strong coherence theory would go beyond the claim of the weak coherence theory to affirm that the justification of all beliefs, including the belief that one sees the shape 105, or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in many different ways. One line of argument appeals to the coherence theory of content. If the content of a perceptual belief results from the relations of the belief to other beliefs in a network of beliefs, then one may plausibly argue that its justification can likewise rest on nothing other than its relations to the other beliefs of the network. On this argument, the coherence theory of content and the coherence theory of justification stand or fall together. Consider the very cautious belief that I see a shape. How could the justification for that perceptual belief result from its coherence with a background system of beliefs? What might the background system tell us that would justify that belief? Our background system contains a simple theory about our relationship to the world and the surfaces we perceive. To come to the specific point at issue, we believe that we can tell a shape when we see one, that we are to be trusted about such simple matters as whether we see a shape before us or not, and that the conditions of application we have learned from past experience are not conditions of deception. Moreover, when Julie forms her belief, the circumstances are not those in which she might be deceived about whether she sees the shape: the light is good, the numeral shapes are large and readily discernible, and so forth. These are beliefs that give Julie reasons for justification; with them, her sensory access to the data involved justifies her subsequent belief, and so she is justified and credible.

The philosophical problems include discovering whether belief differs from other varieties of assent, such as 'acceptance'; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether it is proper to say that prelinguistic infants or animals have beliefs.

Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, we must interpret them as unconscious inferences, as information processing based on the background system. One might object to such an account on the grounds that not all justifying inferences are explanatory; more generally, the account of coherence may, at best, be understood in terms of how well a belief meets competing claims, judged against background systems (BonJour, 1985; Lehrer, 1990). The belief that one sees a shape competes with the claim that one does not, with the claim that one is deceived, and with other sceptical objections. The background system of beliefs informs one that one is trustworthy and enables one to meet the objections. A belief coheres with a background system just in case that system enables one to meet the sceptical objections, and in that way the system justifies one in the belief. This is a standard strong coherence theory of justification (Lehrer, 1990).

It is easy to illustrate the relationship between positive and negative coherence theories in terms of the standard coherence theory. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in that belief. So, to return to Julie, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 105, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on, and that, after years of working with the gauge, Julie, who has always placed her trust in it, believes what the gauge tells her: that the liquid in the container is at 105 degrees. Her belief that the liquid is at 105 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the contents of the container. By contrast, when the red light is not illuminated and Julie's background system tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because it coheres with her background system.

The foregoing sketch and illustration of coherence theories of justification have a common feature: they are externalist theories of justification. What makes a view externalist is the absence of any requirement that the person for whom the belief is justified have any cognitive access to the relation of reliability in question. Lacking such access, such a person will usually have no reason for thinking the belief is true or likely to be true, but will, on such an account, all the same be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

They are theories affirming that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If, then, justification is solely a matter of internal relations between beliefs, we are left with the possibility that those internal relations might fail to correspond with any external reality. How, one might object, can a completely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?

The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from the consideration of coherence theories of justification. What is required may be put by saying that the justification one has must be undefeated by errors in the background system of beliefs. Justification is undefeated by errors just in case any correction of such errors in the background system would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error (Lehrer, 1990). The connection between the internal subjective conditions of belief and external objective reality results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Julie, she believes that her sensory experience and perceptual beliefs are connected with external reality, the temperature of the liquid in the container, in a reliable manner. This background belief is essential to the justification of her belief that the temperature of the liquid in the container is 105 degrees, and the correctness of that background belief is essential to the justification remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world that justifies certain of our beliefs that cohere with that system. For such justification to convert to knowledge, that theory must be sufficiently free from error that coherence is sustained in corrected versions of our background system of beliefs. The correctness of the simple background theory provides the connection between the internal condition and external reality.

The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. Sensory experiences are mute until they are represented in the form of some perceptual belief; beliefs are the engines that pull the train of justification. Nevertheless, what assurance do we have that our justification is based on true beliefs? What assurance do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifacts of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification (Rescher, 1973; Rosenberg, 1980). That would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is justifiable for some person: for such a person there would be no gap between justification and truth, or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems, or some convergence toward a consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection. There is a consensus that we can all be wrong about at least some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.

Truth is agreement with fact or reality: a true statement is not false or incorrect but conforms to what is the case. To say that something is true is also to say that it is properly aligned, as we speak of a surface being true when it is balanced, level, or square; and 'true' is etymologically related to 'trust' and 'troth'. Truth, then, is the conformity of a statement to fact or actuality, or to an original or standard, which is taken as the measure of its correctness. A compound proposition, such as a conjunction or negation, is one whose truth-value is always determined by the truth-values of its component propositions.
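The claim that a compound's truth-value is fixed by the truth-values of its components can be made concrete with a small sketch (illustrative only; the function names are mine, not the essay's):

```python
# Truth-functional connectives: the value of the compound is a function
# of nothing but the values of its parts.
from itertools import product

def negation(p):
    """'not p' is true exactly when p is false."""
    return not p

def conjunction(p, q):
    """'p and q' is true exactly when both components are true."""
    return p and q

# Enumerate every assignment of truth-values to the components,
# producing the familiar truth tables.
for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5}  not p={negation(p)!s:5}  p and q={conjunction(p, q)}")
```

Running the loop prints the four rows of the truth table; no further information about p or q is needed to settle the compound's value, which is the point of the passage.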

Moreover, reality is the quality or state of being actual or true, whether of a person, an entity, or an event; more broadly, it is the totality of all things possessing actuality, existence, or essence. The real is that which exists objectively and in fact. In practical terms, a grasp of reality shows itself in the satisfaction of instinctual needs through awareness of, and adjustment to, environmental demands; the act of realizing, or the condition of being realized, follows from this.

Nonetheless, a reason is a declaration made to explain or justify an action, or the belief upon which one acts: the underlying fact or cause that provides logical support for a premise or conclusion. The faculty of reason is the means of determining or concluding by logical thought, of working out a solution to a problem, and of persuading or dissuading someone on grounds of good sense or justification. It is the faculty by which humans seek or attain knowledge or truth. Yet mere reason is sometimes insufficient to convince us of a claim's veracity; intuition, the apprehension of truth or fact without the rational process, also plays its part, as when one assesses a person's character or a situation and draws sound conclusions in the exercise of judgement.

To reason well is to be governed by, or to accord with, sound thinking: a reasonable solution to a problem stays within the bounds of common sense and arrives at a fair use of reason, especially in forming conclusions, inferences, or judgements. In reasoning, the evidential alternatives of a confronting argument are thought through and joined into a whole by the intellectual faculties through which human understanding proceeds.

To be real is to occur in fact, to have verifiable existence: real objects, a real illness. The real is the true and actual, not the imaginary, alleged, or ideal: people and not ghosts. It is in practical matters and concerns that we experience the real world and its surrounding surfaces. The real is thus what exists objectively, despite subjectivity or the conventions of thought or language: a thing or whole having actual existence, fixed in space, as an image formed by light. And yet our accounts of factual experience are brought to us by the efforts of our own imaginations.

Ideally, an idea is a concept of reason that is transcendent but non-empirical: a conception that exists, potentially or actually, in the mind as the product of mental activity. In the philosophy of Plato, an idea is an archetype of which any corresponding being in phenomenal reality is an imperfect replica; in Hegel, the absolute idea is the conception and ultimate product of reason. An idea may also simply be a mental image of something remembered.

Conceivably, imagination is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed can confront and deal with reality through the creative powers of the mind. Fantasy is characteristically further removed from reality, and the dominance of fantasy over reason is a degree of insanity; still, fancy gives the products of the imagination free rein, and the healthy mind remains in command of its fantasy, while it is precisely the mark of the neurotic that his fantasy possesses him.

A fact belongs to the totality of all things possessing actuality, existence, or essence: something that exists objectively, a real occurrence or event, as when one must prove the facts of a case, something believed to be true or real and determined by evidence. However, usages such as 'allegation of fact', 'the facts are wrong', and 'we may never know the facts of the case' may occasion qualms among critics who insist that facts can only be true; still, such usages are often useful for emphasis. The discovery or determination of fact by evidence stands opposed to the factitious: literature that treats real people or events as if they were fictional, or that uses real people or events as essential elements in an otherwise fictional rendition, and, more generally, whatever is produced artificially rather than by a natural process, lacking authenticity or genuineness.

Seriously, a theory is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena. A theory consists of explanatory statements, accepted principles, and methods of analysis; in mathematics, it is a set of theorems that form a systematic view of a branch of the subject. A theory may also be a belief or principle that guides action or assists comprehension or judgement: an ascription based on limited information or knowledge, a conjecture, a speculative assumption from which inquiry begins. 'Theoretical' accordingly means of, relating to, or based on conjecture, or restricted to theory rather than practice, as in theoretical physics, or given to speculative theorizing. A theorem, by contrast, is a proposition that has been or is to be proved from explicit assumptions.

Thinking back a century, one can see a striking degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. More striking still is the apparent obscurity and abstruseness of those concerns, which seem at first glance far removed from the great debates of previous centuries between 'realists' and 'idealists', say, or 'rationalists' and 'empiricists'.

Thus, no matter what the current debate or discussion, the central issue often concerns conceptual and contentual representation: one who is without concepts is without ideas, and behind this lies the underlying paradox of why there is something instead of nothing. What is it that makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding? The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world.

Contributions to this study include the theory of 'speech acts' and the investigation of communication, especially the relationships between words and 'ideas' and between words and the 'world'. The content of an utterance or sentence is what it expresses: the proposition or claim made about the world. By extension, the content of a predicate is what any expression contributes when it combines with one or more singular terms to make a sentence: the condition that the entities referred to may satisfy, in which case the resulting sentence will be true. Consequently, we may think of a predicate as a function from things to sentences, or even to truth-values, and likewise of other sub-sentential components in terms of what they contribute to the sentences that contain them. The nature of content is the central concern of the philosophy of language.

What a person expresses by a sentence often depends on the environment in which he or she is placed. For example, the disease referred to by a term like 'arthritis', or the kind of tree that counts as a 'maple', may be fixed by facts of which the speaker knows next to nothing. This raises the possibility of imagining two persons in comparatively different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if their surrounding situations are appropriately different: 'situation' may here include the actual objects they perceive, the chemical or physical kinds of objects in the world they inhabit, the history of their words, or the decisions of authorities on what counts as an example of some term they use. The narrow content is that part of their thought that remains identical, through the identity of the way things appear, no matter these differences of surroundings. Partisans of wide (or 'broad') content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being narrow content plus context.

All in all, it is common to characterize people by their rationality, and the most evident display of our rationality is the capacity to think. This is the rehearsal in the mind of what to say or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no decisive reason that their deliberations should take any more verbal a form than their actions. It is perennially tempting to conceive of this activity as the presence in the mind of elements of some language, or other medium that represents aspects of the world. Nevertheless, the model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or of sensations, intentions, or beliefs, actually play in our social lives, in order to undermine the Cartesian picture of goings-on in an inner theatre of which the subject is the lone spectator. Passages that have subsequently become known as the 'rule-following' considerations and the 'private language argument' are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.

Effectively, the hypothesis especially associated with Jerry Fodor (1935-), who is known for his resolute realism about the nature of mental functioning, is that of a 'language of thought': mental processing occurs in a language different from one's ordinary native language, but underlying and explaining our competence with it. The idea is a development of the notion of an innate universal grammar (Chomsky). Just as a computer program is a linguistically complex set of instructions whose execution explains the machine's surface behaviour, so the operations of the language of thought are held to explain the speaker's surface competence.

As an explanation of ordinary language-learning and competence, the hypothesis has not found universal favour: it invokes the image of the learner translating into an innate language whose own representational powers are mysteriously a biological given, thereby presupposing the very capacities to be explained. Perhaps, instead, everyday attributions of intentionality, belief, and meaning to other persons proceed by the tacit use of a theory that enables one to construct interpretations as explanations of their doings. This view is commonly held along with 'functionalism', according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending upon which feature of theories is being stressed. We may think of theories as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.

The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which to couch the theory, since the child learns the minds of others simultaneously with the meanings of terms in its native language. On the rival view, understanding is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain others' actions, but by re-living the situation 'in their shoes', or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. We achieve understanding of others when we can ourselves deliberate as they did and hear their words as if they were our own. The suggestion is a modern development of the 'Verstehen' tradition usually associated with Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).

We may call any process of drawing a conclusion from a set of premises a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise, pure or theoretical reasoning. Evidently, such processes may be good or bad: if good, the premises support or even entail the conclusion drawn; if bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, we are concerned to draw conclusions that 'go beyond' our premises in a way that the conclusions of logically valid arguments do not: this is induction, the process of using evidence to reach a wider conclusion. (Pessimism about the prospects of confirmation theory denies that we can assess the results of such inference in terms of probability.) Inference in the strict sense is usually confined to cases in which the conclusion is supposed to follow from the premises, i.e., in which the inference is logically valid: deducibility in a logically defined syntactic sense, without any reference to the intended interpretation of the theory. Moreover, as we reason we use an indefinite, common-sense set of presuppositions about what is likely or not; a task of an automated reasoning project is to mimic this casual use of knowledge of the way of the world in computer programs.
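The passage's closing point, that automated reasoning projects mimic the drawing of conclusions from premises in computer programs, can be sketched minimally as forward chaining: repeatedly applying modus ponens until no new conclusions follow. This is a hypothetical illustration; the facts and rules are invented, not drawn from the essay.

```python
# Forward chaining: from a set of believed facts and if-then rules,
# keep adding a rule's conclusion whenever all its premises are already
# believed, until a fixed point is reached.

def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: premises all believed, conclusion not yet drawn.
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

facts = {"rain"}
rules = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]
print(forward_chain(facts, rules))  # derives 'wet_ground' and then 'slippery'
```

The loop is the mechanical analogue of valid deduction: every derived conclusion genuinely follows from the premises, though, as the passage notes, human reasoning rarely proceeds in so overt a form.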

Some 'theories' usually emerge as a disorganized body of [supposed] truths, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing a theory: one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable, since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called 'axioms'. David Hilbert (1862-1943) argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could become objects of investigation.

In the philosophy of science, a theory is a generalization, or set of generalizations, purportedly referring to unobservable entities, e.g., atoms, genes, quarks, unconscious wishes. The ideal gas law, for example, refers only to such observables as pressure, temperature, and volume, while the 'molecular-kinetic theory' refers to molecules and their properties. Although an older usage suggests a lack of adequate evidence ('merely a theory'), current philosophical usage does not carry that implication. Following a tradition (as in Leibniz, 1704), many philosophers held the conviction that all truths, or all truths about a particular domain, follow from a few governing principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that whatever exists is 'caused' by them. When the principles were taken as epistemologically prior, that is, as 'axioms', they were taken to be either epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or such that all truths do indeed follow from them by deductive inference. Gödel (1931) showed, in the spirit of Hilbert's treatment of axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.

The notion of truth occurs with remarkable frequency in our reflections on language, thought, and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to conclusion is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. To assess the plausibility of such theses, and to refine them and explain why they hold (if they do), we require some view of what truth is: a theory that would account for its properties and its relations to other matters. Thus, there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.

Coherence theories of the content of our beliefs, and of the justification of our beliefs, themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but may believe that our capacities suffice to close the gap and to yield knowledge. That view is, at any rate, a coherent one.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades several epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can stand in causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, Armstrong (1973) proposed that a belief of the form 'This (perceived) object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F: that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject x and perceived object y, if x has those properties and believes that y is F, then y is F. Dretske (1981) offers a similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but that you have been given good reason to think otherwise: to think, say, that two of the perceivable colours are switched for you, so that chartreuse things look magenta to you and magenta things look chartreuse. If you fail to heed these reasons you have for thinking that your colour perception is awry, then your belief, of a thing that looks magenta to you, that it is magenta will fail to be justified, and will therefore fail to be knowledge, even though the thing's being magenta causes the belief in such a way as to be a completely reliable sign (or to carry the information) that the thing is magenta.

One could fend off this sort of counterexample by simply adding to the causal condition the requirement that the belief be justified, but this enriched condition would still be insufficient. Suppose, for example, that there is a drug which, in nearly all people (but not in you, as it happens), causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says, ‘No, wait, the pill you took was just a placebo.' Suppose further that this last thing the experimenter tells you is false. Her telling it to you gives you justification for believing, of a thing that looks magenta to you, that it is magenta; but the fact that your justification rests on her false statement makes it the case that your true belief is not knowledge, even though it satisfies the causal condition.

Goldman (1986) has proposed an importantly different causal criterion: namely, that a true belief is knowledge if it is produced by a type of process that is both ‘globally' and ‘locally' reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability concerns whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because knowledge requires justification. What knowledge requires, but justification does not, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. This ‘relevant alternatives' account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat' and the concept ‘empty' (Dretske, 1981). Both appear to be absolute concepts: a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat' there is a standard for what counts as a bump, and in the case of ‘empty' there is a standard for what counts as a thing. To be flat is to be free of any relevant bumps, and to be empty is to be without any relevant things.

What makes an alternative situation relevant? Goldman does not try to formulate a general criterion, but offers examples. Suppose that a parent takes a child's temperature with a thermometer selected at random from several lying in the medicine cabinet. Only the particular thermometer chosen was in good working order; it correctly shows the child's temperature to be normal, but if the temperature had been abnormal, any of the other thermometers would have erroneously shown it to be normal. A globally reliable process has caused the parent's actual true belief, but, because it was ‘just luck' that the parent happened to select a good thermometer, ‘we would not say that the parent knows that the child's temperature is normal'. Goldman gives another example:

Suppose Sam spots Judy across the street and correctly believes that it is Judy. If it had been Judy's twin sister, Trudy, he would have mistaken her for Judy. Does Sam know that it is Judy? As long as there is a serious possibility that the person across the street might have been Trudy rather than Judy, ‘we would deny that Sam knows' (Goldman, 1986). Goldman suggests that the reason for denying knowledge in the thermometer example is that it was ‘just luck' that the parent did not pick a non-working thermometer, and in the twins example, that there was ‘a serious possibility' that it was Trudy whom Sam saw. This suggests the following criterion of relevance: an alternative situation, in which the same belief is produced in the same way but is false, is relevant just in case, at some point before the actual belief was caused, the chance of that situation's coming about instead of the actual situation was sufficiently high; there was a serious chance of it.

This criterion avoids the sorts of counterexamples we gave for the causal criteria discussed earlier, but it is vulnerable to ones of a different sort. Suppose you are standing on the mainland looking over the water at an island, on which there are several structures that look (from that point of view) like barns. You happen to be looking at the one structure that is in fact a barn, and your belief to that effect is justified, given how it looks to you and the fact that you have no reason to think otherwise. Nevertheless, suppose that the great majority of the barn-looking structures on the island are not real barns but fakes. Finally, suppose that from any viewpoint on the mainland all of the island's fake barns are obscured by trees, and that circumstances made it very unlikely that you would have had any viewpoint other than one on the mainland. Here, it seems, your justified true belief that you are looking at a barn is not knowledge, even though there was not a serious chance of an alternative situation's developing in which you are similarly caused to have a false belief that you are looking at a barn.

That example shows that, on the ‘serious chance' explication of what makes an alternative relevant, the ‘local reliability' of the belief-producing process is not sufficient for knowledge. Behind such assessments of perception and belief lies a larger question: what a world-view would look like that could encompass both the hidden and manifest aspects of nature, integrating the various aspects of the universe into one magnificent whole, a whole in which we play an organic and central role. One hundred years ago the question would have been answered by the Newtonian ‘clockwork universe': a theoretical account of a universe that is completely mechanical, in which everything that happens is predetermined by the laws of nature and by the state of the universe in the distant past. The freedom one feels regarding one's actions, even the movement of one's body, is an illusion; nevertheless, the world-view the Newtonian picture expresses is completely coherent.

Nevertheless, the human mind abhors a vacuum. When an explicit, coherent world-view is absent, it functions on the basis of a tacit one. A tacit world-view is not subject to critical evaluation, and it can easily harbour inconsistencies. Indeed, our tacit set of beliefs about the nature of reality consists of contradictory bits and pieces. The dominant component is a leftover from another period: the Newtonian ‘clockwork universe' still lingers, and we cling to this old and tired model because we know of nothing else that can take its place. Ours is the condition of a culture in the throes of a paradigm shift. A major paradigm shift is complex and difficult because a paradigm holds us captive: we see reality through it, as through coloured glasses, but we do not know that; we are convinced that we see reality as it is. Hence the appearance of a new and different paradigm is often incomprehensible. To someone raised believing that the Earth is flat, the suggestion that the Earth is spherical seems preposterous: if the Earth were spherical, would not the poor antipodes fall ‘down' into the sky?

Yet, as we face a new millennium, we are forced to face this challenge. The fate of the planet is in question, and it was brought to its present precarious condition largely because of our trust in the Newtonian paradigm. The Newtonian world-view has to go, and, if one looks carefully, one can discern the main features of the new, emergent paradigm. The search for these features must also reckon with the influence of the fading paradigm, for all paradigms include subterranean realms of tacit assumptions, the influence of which outlasts adherence to the paradigm itself.

The first line of exploration concerns the ‘weird' aspects of quantum theory. Our feeling that these aspects are weird arises from their inconsistency with the prevailing world-view, and it should disappear when that world-view is replaced by the new one. If one believes that the Earth is flat, the story of Magellan's voyage is quite puzzling: how is it possible for a ship, travelling due west without changing direction, to arrive back at its place of departure? Once the belief that the Earth is spherical replaces the flat-Earth paradigm, the puzzle is instantly resolved.

The founders of relativity and quantum mechanics engaged deeply with philosophical questions, but their engagement was incomplete: none of them attempted to construct a philosophical system, even though the mystery at the heart of quantum theory called for a revolution in philosophical outlook. During the 1920s, when quantum mechanics reached maturity, Alfred North Whitehead began the construction of a full-blooded philosophical system based not only on science but on nonscientific modes of knowledge as well. The influence of a paradigm goes well beyond its explicit claims: we tend to believe, as many scientists and philosophers did, that when we wish to find out the truth about the universe we can ignore nonscientific modes of processing human experience. Poetry, literature, art and music are all wonderful, but, in relation to the quest for knowledge of the universe, irrelevant. Whitehead pointed out the fallacy of this assumption; in his system the building blocks of reality are not material atoms but ‘throbs of experience'. He formulated the system in the late 1920s, yet, as far as I know, the founders of quantum mechanics were unaware of it. It was not until 1963 that J. M. Burgers pointed out that this philosophy accounts very well for the main features of the quanta, especially the ‘weird' ones. It also allows us to ask whether some aspects of reality are ‘higher' or ‘deeper' than others, and if so, what the structure of such hierarchical divisions is; what our place in the universe is; and, finally, what the relationship is between our great aspirations and the lost realms of nature. An attempt to endow us with cosmological meaning in the Newtonian universe seems totally absurd; and yet that very universe is just a paradigm, not the truth.
When you reach the end of this line of thought, you may be willing to adopt an alternative view, one that surprisingly restores to us what we had lost, although in a post-postmodern context.

My subject matter here is the philosophical implications of quantum mechanics, with emphasis on the connections between quantum theory and the Western philosophical tradition from Plato to Plotinus. Some aspects of what follows express a consensus of the physics community; others are shared by some and objected to (sometimes vehemently) by others; still others express my own views and convictions. The writing turned out to be more difficult than anticipated, and I found that a conversational mode would be helpful; I hope the conversations prove not only illuminating but engaging to the reader.

These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman's claim about local reliability and knowledge, it will not be a simple one.

The interesting thesis that counts as a causal theory of justification (in the meaning of ‘causal theory' intended here) is that a belief is justified just in case it was produced by a type of process that is ‘globally' reliable, that is, whose propensity to produce true beliefs (which can be defined, to a good approximation, as the proportion of the beliefs it produces, or would produce were it used as much as opportunity allows, that are true) is sufficiently high. The guiding idea is that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in the work of F. P. Ramsey (1903-30), who made important contributions to mathematical logic, probability theory, the philosophy of science and economics. The Ramsey sentence of a theory is generated by taking all the sentences affirmed in the theory that use some theoretical term, e.g. ‘quark', replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of the theoretical terms, the sentence gives the ‘topic-neutral' structure of the theory, but removes any implication that we know what the terms so treated mean; it leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Ramsey was also one of the first thinkers to accept a ‘redundancy theory of truth', which he combined with radical views of the function of many kinds of proposition: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was one of the earliest commentators on the early work of Wittgenstein, and his continuing friendship with the latter led to Wittgenstein's return to Cambridge and to philosophy in 1929.

The most sustained and influential application of these ideas was in the philosophy of mind. Ludwig Wittgenstein (1889-1951), whom Ramsey persuaded that there remained philosophical work for him to do, was perhaps the most charismatic figure of twentieth-century philosophy, living and writing with a power and intensity that frequently overwhelmed his contemporaries and readers. His early period is centred on the ‘picture theory of meaning', according to which a sentence represents a state of affairs by being a kind of picture or model of it, containing elements corresponding to those of the state of affairs and a structure or form that mirrors the structure of the state of affairs it represents. All logical complexity is reduced to that of the propositional calculus, and all propositions are truth-functions of atomic or basic propositions.

In the later period the emphasis shifts dramatically to the activities of people and the role linguistic activities play in their lives. Thus, whereas in the "Tractatus" language is placed in a static, formal relationship with the world, in the later work Wittgenstein emphasizes its use through standardized social activities of ordering, advising, requesting, measuring, counting, exercising concern for each other, and so on. These different activities are thought of as so many ‘language games' that together make up a form of life. Philosophy typically ignores this diversity, and in generalizing and abstracting distorts the real nature of its subject-matter. Besides the "Tractatus" and the "Investigations", collections of Wittgenstein's work published posthumously include "Remarks on the Foundations of Mathematics" (1956), "Notebooks 1914-1916" (1961), "Philosophische Bemerkungen" (1964), "Zettel" (1967) and "On Certainty" (1969).

Clearly, there are many forms of reliabilism, just as there are many forms of Foundationalism and Coherentism. How is reliabilism related to these other two theories of justification? We usually regard it as a rival, and this is apt in so far as Foundationalism and Coherentism traditionally focus on purely evidential relations rather than psychological processes. But reliabilism might also be offered as a deeper-level theory, subsuming some precepts of either Foundationalism or Coherentism. Foundationalism says that there are ‘basic' beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that the basic beliefs are formed by reliable non-inferential processes. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to the increases in reliability that accrue from systematicity. Thus reliabilism could complement Foundationalism and Coherentism rather than compete with them.

Ramsey's contributions extended well beyond this reliability account of knowing. In the theory of probability he was the first to show how a ‘personalist' theory could be developed, based on a precise behavioural notion of preference and expectation. Much of his work in the foundations of mathematics was directed at saving classical mathematics from ‘intuitionism', or what he called the ‘Bolshevik menace of Brouwer and Weyl'.

Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of nomic, counterfactual or similar ‘external' relations between belief and truth. Closely allied to reliabilism is the nomic sufficiency account of knowledge, due primarily to Dretske (1971, 1981), A. I. Goldman (1976, 1986) and R. Nozick (1981). The core of this approach is that X's belief that ‘p' qualifies as knowledge just in case X believes ‘p' because of reasons that would not obtain unless ‘p' were true, or because of a process or method that would not yield belief in ‘p' if ‘p' were not true. For example, X would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it does, unless there was a telephone before it; thus, there is a counterfactually reliable guarantor of the belief's being true.
A variant of this counterfactual approach says that X knows that ‘p' only if there is no ‘relevant alternative' situation in which ‘p' is false but X would still believe that ‘p'. One's justification or evidence for ‘p' must be sufficient to eliminate all the alternatives to ‘p', where an alternative to a proposition ‘p' is a proposition incompatible with ‘p'; that is, one's justification or evidence for ‘p' must be sufficient for one to know that every alternative to ‘p' is false. Sceptical arguments have exploited this element of our thinking about knowledge. These arguments call our attention to alternatives that our evidence cannot eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this sort that we cannot eliminate, and others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that this requirement of eliminating every alternative is seldom, if ever, satisfied.

All the same, a related distinction is worth noting. The distinction between the ‘in itself' and the ‘for itself' originated in the Kantian logical and epistemological distinction between a thing as it is in itself and that thing as an appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing in so far as it stands in relation to our cognitive faculties and other objects. ‘Now a thing in itself cannot be known through mere relations: and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself'. Kant applies this same distinction to the subject's cognition of itself. Since the subject can know itself only in so far as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus as it is related to its own self, it represents itself ‘as it appears to itself, not as it is'. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant in so far as the distinction between what an object is in itself and what it is for a knower is applied to the subject's own knowledge of itself.

Hegel (1770-1831) begins the transformation of this epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is, as it is in fact or in itself, necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in fact or in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of the potentiality of that thing to enter into specific explicit relations with itself. And, just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself, i.e., to be explicitly self-conscious, the for-itself of any entity is that entity in so far as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant, which involves actual relations among the plant's various organs, is the plant ‘for itself'. In Hegel, then, the in-itself/for-itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being in itself of the plant, or the plant as potential adult, is ontologically distinct from the being for itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction. To know a thing it is necessary to know both the actual, explicit self-relations which mark the thing (the being for itself of the thing) and the inherent simple principle of these relations, or the being in itself of the thing. Real knowledge, for Hegel, thus consists in a knowledge of the thing as it is in and for itself.

Sartre's distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent object which is intended by consciousness, i.e., being in itself. Being for itself is marked by self-relation: Sartre posits a ‘pre-reflective cogito', such that every consciousness of an object necessarily involves a ‘non-positional' consciousness of that very consciousness. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself in so far as it is related to itself by appearing to itself, and in Hegel every entity can be considered as it is both in itself and for itself, in Sartre to be self-related or for itself is the distinctive ontological mark of consciousness, while to lack relations or to be in itself is the distinctive ontological mark of non-conscious entities.
