January 3, 2010


Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal - though not in the same way.)


The main argument for representationalism appeals to the transparency of experience. The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to 'see through it' to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of 'symbol-filled arrays.' (For the account of mental images, see Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual 'scenario' (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is 'correct' (a semantic property) if in the corresponding 'scene' (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the 'phenomenal concept' - a conceptual/phenomenal hybrid consisting of a phenomenological 'sample' (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, 'you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.' One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about 'P', without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (See Jackson 1982, 1986 and Nagel 1974.)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties - i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained exclusively in terms of discursive, or propositional, representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation 'pictorial' - though, of course, there may be imagery in other modalities - auditory, olfactory, etc. - as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not), would be digital.
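The analog/digital contrast lends itself to a tiny illustration in code. The following sketch is invented purely for this purpose (the class names and values come from nowhere in the literature): the analog state carries content in continuously variable magnitudes, the digital one in properties a representation either has or lacks.

    # Toy sketch (invented example): analog vs. digital representation.
    from dataclasses import dataclass

    @dataclass
    class AnalogImage:
        # Content varies with a continuously variable magnitude: any value
        # in [0.0, 1.0] is a (slightly) different representational state.
        brightness: float

    @dataclass
    class DigitalThought:
        # Content is carried by discrete properties: a concept is either
        # in the thought or not; there are no degrees of aboutness.
        concepts: frozenset

    dim = AnalogImage(brightness=0.27)      # can be nudged continuously
    dimmer = AnalogImage(brightness=0.26)   # a distinct analog state

    thought = DigitalThought(concepts=frozenset({"Elvis"}))
    print("Elvis" in thought.concepts)      # True: all-or-nothing, like
                                            # a thought's being about Elvis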

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is 'quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
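The functional construal of distance mentioned above (Rey 1981) can be made concrete with a small sketch. The parts and links below are invented; only the idea of counting discrete computational steps comes from the text.

    # Sketch (invented example): functional, non-spatial 'distance' between
    # parts of a representation = number of discrete computational steps
    # needed to combine stored information about them.

    links = {                      # which stored parts are directly combinable
        "ear": ["head"],
        "head": ["ear", "torso"],
        "torso": ["head", "tail"],
        "tail": ["torso"],
    }

    def functional_distance(a, b):
        """Count the steps a breadth-first search needs to relate a to b."""
        frontier, seen, steps = {a}, {a}, 0
        while frontier:
            if b in frontier:
                return steps
            frontier = {n for part in frontier for n in links[part]
                        if n not in seen}
            seen |= frontier
            steps += 1
        return None                # the parts cannot be combined

    print(functional_distance("ear", "tail"))   # 3, with nothing spatial in play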

Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye's account, images are '(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance: the location of each 'cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object.

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are asymmetric dependency theories and teleological theories. The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories hold that the content of a mental representation is grounded in its causal, computational, or inferential relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are Externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both 'narrow' content (determined by intrinsic factors) and 'wide' or 'broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure, its intra-mental computational or inferential role, or its phenomenology.

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that narrow content may be dispensable in naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind, claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind develops the representational theory of mind by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some - so-called 'subpersonal' or 'sub-doxastic' representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental. That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the 'mental models' of Johnson-Laird 1983, the 'retinal arrays,' 'primal sketches' and '2½-D sketches' of Marr 1982, the 'frames' of Minsky 1974, the 'sub-symbolic' structures of Smolensky 1989, the 'quasi-pictures' of Kosslyn 1980, and the 'interpreted symbol-filled arrays' of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief.

The classicists hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists hold that mental representations are realized by patterns of activation in a network of simple processors ('nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of connectionism - 'localist' versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program.

Classicists are motivated (in part) by properties thought seems to share with language. Jerry Alan Fodor's (1935-) Language of Thought Hypothesis (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
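What recursive formation rules and compositional semantics amount to can be sketched in a few lines. This is a toy model only, not Fodor's own formalism; the two primitive symbols and the connectives are invented for the example.

    # Toy sketch: a finite stock of primitives plus recursive formation
    # rules generates indefinitely many complex representations, whose
    # content is computed compositionally from constituents and structure.

    PRIMITIVES = {"rains": "it rains", "snows": "it snows"}

    def content(rep):
        """Content of a complex = contents of its constituents plus
        their structural configuration."""
        if isinstance(rep, str):            # primitive representation
            return PRIMITIVES[rep]
        operator, *parts = rep              # complex representation
        if operator == "NOT":
            return "it is not the case that " + content(parts[0])
        if operator == "AND":
            return content(parts[0]) + " and " + content(parts[1])
        raise ValueError(operator)

    # Productivity: nesting the same rules yields ever-new contentful complexes.
    print(content(("AND", "rains", ("NOT", "snows"))))

Nesting the same two rules generates indefinitely many distinct contentful complexes from the finite primitive stock, which is just the productivity the hypothesis appeals to.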

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of an evolving distribution of 'weight' (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
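A minimal sketch of this kind of learning follows, with everything invented except the bare idea (a single layer, two connections, a toy task, far simpler than the models in the literature): the network's 'knowledge' ends up distributed over the weights, and no hypothesis is formulated anywhere.

    # Toy sketch: learning as an evolving distribution of weight (strength)
    # on connections, driven by repeated exposure rather than hypothesis
    # formation and testing.

    examples = [((1.0, 0.0), 1.0),   # input pattern -> target output
                ((0.0, 1.0), 0.0)]
    weights = [0.0, 0.0]
    rate = 0.1

    for _ in range(100):             # networks typically need many exposures
        for inputs, target in examples:
            output = sum(w * x for w, x in zip(weights, inputs))
            error = target - output
            # Strengthen or weaken each connection by its share of the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]

    print([round(w, 2) for w in weights])   # ~[1.0, 0.0]: the learned
                                            # 'knowledge' lives in the weights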

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively 'brittle' or 'fragile.'

Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (Macdonald & Macdonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well.)

Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you took to sniffing snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express. On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state similarly may be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that a network of relationships is all that we have. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustments it requires.

Externalism, once again, is the view in the philosophy of mind and language that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.

Atomistic theories, however, take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935-) calls the crude causal theory, for example, takes a representation to be a COW - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how COWs must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a COW if it behaves like a COW should behave in inference.

Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke an historical theory of functions, take content to be determined by 'external' factors. Crossing the atomist-holist distinction with the internalist-externalist distinction thus yields four possible types of theory.

Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
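Fodor's proposal can be typed out schematically. The Twin-Earth example used below is the standard one from the literature; the encoding itself is an invented sketch, not part of any actual theory.

    # Sketch: narrow content as a function from contexts (the external
    # factors) to wide contents. Twins share the function; their contexts
    # differ, so their wide contents differ.

    def water_thought_narrow_content(context):
        """One narrow content; the context supplies the wide content."""
        local_kind = {"Earth": "H2O", "Twin Earth": "XYZ"}[context]
        return "a thought about " + local_kind

    print(water_thought_narrow_content("Earth"))        # a thought about H2O
    print(water_thought_narrow_content("Twin Earth"))   # a thought about XYZ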

All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situations surrounding them are appropriately different: 'situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains identical, given the identity of the way things appear to them, regardless of these differences of surroundings. Partisans of wide content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.

Even so, the distinction between facts and values has outgrown its name: it applies not only to matters of fact vs. matters of value, but also to statements that something is, vs. statements that something ought to be. Roughly, factual statements - 'is statements' in the relevant sense - represent some state of affairs as obtaining, whereas normative statements - evaluative, and deontic ones - attribute goodness to something, or ascribe, to an agent, an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially expresses a statement, and 'By all indications it ought to rain' makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.

Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute - in a factually analysable way - to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating or attributing an obligation, on the one hand, and saying how the world is, on the other.

Fact/value distinctions may be defended by appeal to the notion of intrinsic value, the value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value - e.g., moral value - they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.

Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulations - a belief is justified, or constitutes knowledge. Its standards of justification, then, would be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual terms.

Thus far, belief has been depicted as all-or-nothing. Some epistemologists work instead with a notion of acceptance: accepting a proposition is something done when we have grounds for thinking it true, its acceptance is governed by epistemic norms, it is partially subject to voluntary control, and it has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent 'S' exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or God a matter of your believing that free-market economies are desirable or that God exists.

Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this is united with his belief that God exists, he may retain it, and reasonably so, in a way that an ordinary propositional belief would not survive.

The correlative way of elaborating on the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Considering the point in application, once again, to reliabilism: the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to 'bite the bullet' and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But, while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains doubtful whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that the externalist is committed to rejecting.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, that further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other can be entirely external. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the believer, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true, and so it is not held in the rational, responsible way that justification intuitively seems to require.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction does exist) that such individuals are epistemically justified in their beliefs. It is, at least, less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors. (Views that appeal to both internal and external elements are standardly classified as externalist.)

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification as follows: if the content of a belief is not accessible to the believer, then neither the justifying status of other beliefs in relation to that content, nor the status of that content as justifying further beliefs, will be accessible either, thus contravening the internalist requirement for justification. An internalist might insist that there are no justification relations of these sorts, that only internally accessible content can justify or be justified; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Except for alleged cases of things that are evident for one just by being true - self-evident truths - it is often thought that anything that is known must satisfy certain criteria as well as being true. These criteria are general principles that will make a proposition evident or just make accepting it warranted to some degree. Common suggestions for this role include the following: if one clearly and distinctly conceives a proposition 'p', e.g., that 2 + 2 = 4, 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that in turn can be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Nonetheless, traditional suggestions include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, simply, (2) if we can't conceive 'p' to be false, then 'p' is evident; or, (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived which are neither themselves already evident nor criterially evident.

A resulting sceptical worry holds that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences and necessary truths, to which deductive or inductive criteria may be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. And transmission criteria might not simply 'pass' evidence on linearly from a foundation of highly evident 'premisses' to 'conclusions' that are never more evident.

An argument is a group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case, if all its premisses are true, its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
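In standard notation, the two standards of success can be put side by side (these are textbook formulations; 'C' is the conclusion and 'P_1, ..., P_n' the premisses):

    \text{Valid:}\quad \neg\Diamond\,(P_1 \wedge \cdots \wedge P_n \wedge \neg C)
    \text{Strong:}\quad \Pr(C \mid P_1 \wedge \cdots \wedge P_n) \text{ is high, though less than } 1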

Finally, a proof is a collection of considerations and reasonings that instill and sustain conviction that some proposed theorem - the theorem proved - is not only true, but could not possibly be false. A perceptual observation may instill the conviction that water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.
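Formal systems of the sort criticized in the next paragraph render such a proof as a mechanical derivation. In the Lean proof assistant, for instance (a minimal sketch), the theorem is closed by reflexivity, because both sides compute to the same numeral:

    -- Both sides reduce to the numeral 5 by the definition of addition,
    -- so reflexivity ('rfl') certifies that 2 + 3 could not be other than 5.
    example : 2 + 3 = 5 := rfl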

No one has succeeded in replacing this largely psychological characterization of proofs by a more objective characterization. The representations or reconstructions of proofs as mechanical and semiotical derivations in formal-logical systems all but completely fail to capture 'proofs' as mathematicians are quite content to give them. For example, formal-logical derivations depend solely on the logical form of the considered propositions, whereas proofs usually depend in large measure on the content of propositions, over and above their logical form.






Richard J. Kosciejew

2006

In epistemology, the subjective-objective contrast arises above all for the concept of justification and its relatives. Externalism, given serious consideration in the philosophy of mind and language, is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind; these external relations make up the 'essence' or 'identity' of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. Advocates of reliabilism in particular construe justification objectively, since, for reliabilism, truth-conduciveness, a non-subjective notion, is conceived as central for justified belief. Reliabilism is the view in epistemology that a subject may know a proposition 'p' if (1) 'p' is true, (2) the subject believes 'p', and (3) the belief that 'p' is the result of some reliable process of belief formation. The third clause is an alternative to the traditional requirement that the subject be justified in believing that 'p', since a subject may in fact be following a reliable method without being justified in supposing that she is, and vice versa. For this reason, reliabilism is sometimes called an externalist approach to knowledge: the relations that matter to knowing something may be outside the subject's own awareness. It is open to counterexamples: a belief may be the result of some generally reliable process which in fact malfunctioned on this occasion, and we would be reluctant to attribute knowledge to the subject if this were so, although the definition would be satisfied. Reliabilism pursues appropriate modifications to avoid the problem without giving up the general approach. Among reliabilist theories of justification (as opposed to knowledge) there are two main varieties: reliable indicator theories and reliable process theories. In their simplest forms, the reliable indicator theory says that a belief is justified in case it is based on reasons that are reliable indicators of the truth, and the reliable process theory says that a belief is justified in case it is produced by cognitive processes that are generally reliable.

What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals rests on contingent facts about what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals.

Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right sort of causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts and, perhaps, any fact expressed by a universal generalization. Proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

One such proposal is a reliable-sign criterion for non-inferential perceptual knowledge: a belief of the form 'this (perceived) object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject 'x' and perceived object 'y', if 'x' has those properties and believes that 'y' is F, then 'y' is F.

A related background notion is that of a conceptual scheme: the general system of concepts which shape or organize our thoughts and perceptions. The outstanding elements of our everyday conceptual scheme include enduring objects, causal relations, spatial and temporal relations between events and enduring objects, other persons, and so on. A controversial argument of Davidson's holds that we would be unable to interpret speech from a different conceptual scheme as even meaningful; we can therefore be certain that there is no difference of conceptual schemes between any thinkers, and that, since 'translation' proceeds according to a principle of charity on which even an omniscient translator would have to make sense of us, we can be assured that most of the beliefs formed within the common-sense conceptual framework are true.
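The lawlike condition embedded in the reliable-sign criterion can be rendered schematically (again in notation of our own devising) as:

$$ \forall x\,\forall y\,\big[(Hx \;\wedge\; B_x(Fy)) \;\rightarrow\; Fy\big] $$

where $Hx$ says that the subject x has the relevant properties of the believer, and $B_x(Fy)$ says that x believes, of the perceived object y, that it is F.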

An importantly different sort of causal criterion has also been proposed: a true belief is knowledge if it is produced by a type of process that is both 'globally' and 'locally' reliable. It is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so could in principle apply to knowledge of any kind of truth. On a related formulation, a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false.

The theory of relevant alternatives can best be viewed as an attempt to accommodate two opposing strands in our thinking about knowledge. The first is that knowledge is an absolute concept. On one interpretation, this means that the justification or evidence one must have in order to know a proposition 'p' must be sufficient to eliminate all the alternatives to 'p' (where an alternative to a proposition 'p' is a proposition incompatible with 'p'). That is, one's justification or evidence for 'p' must be sufficient for one to know that every alternative to 'p' is false. This element of our thinking about knowledge is exploited by sceptical arguments, which call our attention to alternatives that our evidence cannot eliminate. For example, when we are at the zoo, we might claim to know that we see a zebra on the basis of our visual evidence - a zebra-like appearance. The sceptic inquires how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this nature that we cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that the requirement that our evidence eliminate every alternative is seldom, if ever, met.

This conflicts with another strand in our thinking about knowledge: that we know many things. There is thus a tension in our ordinary thinking about knowledge - we believe both that knowledge is, in the sense indicated, an absolute concept and that there are many instances of that concept. The theory of relevant alternatives can be viewed as an attempt to provide a satisfactory response to this tension: it attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute relative to certain standards; that is to say, in order to know a proposition, our evidence need not eliminate all the alternatives to that proposition. Rather, we know when our evidence eliminates all the relevant alternatives, where the set of relevant alternatives is determined by some standard. Moreover, according to the relevant alternatives view, the standards determine that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic's alternatives does not lead to a sceptical result, for knowledge requires only the elimination of the relevant alternatives. So the relevant alternatives view preserves both strands in our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

All the same, some philosophers have argued that the relevant alternatives theory of knowledge entails the falsity of the principle that the set of propositions known by 'S' is closed under known (by 'S') entailment, though others have disputed this. This 'closure principle' runs: if 'S' knows 'p' and 'S' knows that 'p' entails 'q', then 'S' knows 'q'.
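In the epistemic-logic shorthand used above, the closure principle reads:

$$ \big(K_S\,p \;\wedge\; K_S(p \rightarrow q)\big) \;\rightarrow\; K_S\,q $$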

According to the theory of relevant alternatives, we can know a proposition 'p' without knowing that some (non-relevant) alternative to 'p' is false. But since an alternative 'h' to 'p' is incompatible with 'p', 'p' will trivially entail 'not-h'. So it will be possible to know some proposition without knowing another proposition trivially entailed by it. For example, we can know that we see a zebra without knowing that it is not the case that we see a cleverly disguised mule (on the assumption that 'we see a cleverly disguised mule' is not a relevant alternative). This involves a violation of the closure principle, and the consequence is held against the theory, because the closure principle seems to many to be quite intuitive. In fact, we can view sceptical arguments as employing the closure principle as a premise, along with the premise that we do not know that the sceptical alternatives are false. The propositions we believe entail the falsity of the sceptical alternatives; so if we do not know that falsity, it follows by closure that we do not know the propositions we believe. For example, it follows from the closure principle and the fact that we do not know that we do not see a cleverly disguised mule, that we do not know that we see a zebra. The relevant alternatives theory can then be viewed as replying to this sceptical argument.
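Laid out schematically - with 'z' for 'we see a zebra' and 'm' for 'we see a cleverly disguised mule', labels of our own choosing - the sceptical argument runs:

$$ \begin{aligned}
&(1)\;\; \big(K_S\,z \wedge K_S(z \rightarrow \neg m)\big) \rightarrow K_S\,\neg m && \text{closure}\\
&(2)\;\; K_S(z \rightarrow \neg m) && \text{the entailment is trivially known}\\
&(3)\;\; \neg K_S\,\neg m && \text{we cannot rule out the mule}\\
&(4)\;\; \neg K_S\,z && \text{from (1)-(3)}
\end{aligned} $$

The sceptic treats (1) as non-negotiable and concludes (4); the relevant alternatives theorist blocks the argument by rejecting unrestricted closure at step (1).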

How significant a problem is this for the theory of relevant alternatives? That depends on how we construe the theory. If the theory is supposed to provide us with an analysis of knowledge, then the lack of precise criteria of relevance surely constitutes a serious problem. However, if the theory is viewed instead as providing a response to sceptical arguments, the difficulty has little significance for the overall success of the theory.

Internalism, by contrast, may or may not construe justification subjectivistically, depending on whether the proposed epistemic standards are interpersonally grounded. There are also various kinds of subjectivity: justification may, for example, be grounded in one's considered standards, or simply in what one believes to be sound. On the former view, my belief is justified if it accords with my considered standards; on the latter, my thinking it justified makes it so.

A conception of objectivity may treat one domain as fundamental and the others as derivative. Thus objectivity for methods (including sensory observation) might be thought basic. Let an objective method be one that (1) is interpersonally usable and tends to yield justification regarding the questions to which it applies (an epistemic conception), or (2) tends to yield truth when properly applied (an ontological conception), or (3) both. An objective statement is then one appraisable by an objective method, an objective discipline is one whose methods are objective, and so on. Those who conceive objectivity epistemically tend to take methods as fundamental; those who conceive it ontologically tend to take statements as basic. Subjectivity has been attributed variously to certain concepts, to certain properties of objects, and to certain modes of understanding. The overarching idea of these attributions is that the nature of the concepts, properties, or modes of understanding in question depends upon the properties and relations of the subjects who employ those concepts, possess the properties, or exercise those modes of understanding. The dependence may be upon the particular subject or upon some type which the subject instantiates. What is not so dependent is objective. In fact, there is virtually nothing which has not been declared subjective by some thinker or other, including such unlikely candidates as space, time and the natural numbers.

In scholastic terminology, an effect is contained formally in a cause when the same nature in the effect is present in the cause, as fire causes heat and the heat is present in the fire. An effect is virtually in a cause when this is not so, as when a pot or statue is caused by an artist. An effect is eminently in a cause when the cause is more perfect than the effect: God eminently contains the perfections of his creation. These distinctions are needed on the view that causation is essentially a matter of transferring something, like passing on the baton in a relay race.

There are several sorts of subjectivity to be distinguished. Subjectivity may first be attributed to a concept, considered as a way of thinking of some object or property. It would be much too undiscriminating to say that a concept is subjective if particular mental states are mentioned in the account of mastery of the concept; all concepts would then be counted as subjective. We can distinguish several more discriminating criteria. First, a concept can be called subjective if an account of its mastery requires the thinker to be capable of having certain kinds of experience, or at least to know what it is like to have such experiences. Variants on this criterion can be obtained by substituting other specific psychological states in place of experience. If we confine ourselves to the criterion which does mention experience, the concepts of experience themselves plausibly meet the condition, and what have traditionally been classified as concepts of secondary qualities - such as red, tastes bitter, warmth - have also been argued to meet it. The criterion does, though, also include some relatively observational shape concepts. The relatively observational concepts 'square' and 'regular diamond' pick out exactly the same shape properties, but differ in which perceptual experiences are mentioned in accounts of their mastery: the symmetries one perceives when something is seen as a diamond differ from those perceived when it is seen as a square. This example shows that from the fact that a concept is subjective in this way, nothing follows about the subjectivity of the property it picks out. Few philosophers would now count shape properties, as opposed to concepts thereof, as subjective.

Concepts with a second type of subjectivity could more specifically be called 'first-personal'. A concept is first-personal if, in an account of its mastery, the application of the concept to objects other than the thinker is related to the conditions under which the thinker is willing to apply the concept to himself. Though there is considerable disagreement on how the account should be formulated, many theories treat the concept of belief as first-personal in this sense. For example, this is true of any account which says that a thinker understands the third-person attribution 'he believes that so-and-so' by understanding that it holds, very roughly, if the third person in question is in circumstances in which the thinker would himself (first-personally) judge that so-and-so. It is equally true of accounts which in one way or another say that the third-person attribution is understood as meaning that the other person is in some state which stands in some specified sameness relation to the state which causes the thinker to be willing to judge 'I believe that so-and-so'.

The subjectivity of indexical concepts - those expressed by terms whose reference depends upon the context of use, such as 'I', 'here', 'now', 'there', and 'that (perceptually presented) man' - has long been widely noted. Few of these are subjective in the sense of the first criterion, but seemingly they are all subjective in that the possibility of a subject's using any one of them to think about an object at a given time depends upon his relations to that particular object then. Indexicals are thus particularly well suited to expressing a particular point of view on the world of objects, a point of view available only to those who stand in the right relations to the objects in question.

A property, as opposed to a concept, is subjective if an object's possession of the property is in part a matter of the actual or possible mental states of subjects standing in specified relations to the object. Colour properties, secondary qualities in general, moral properties, the property of propositions of being necessary or contingent, and the property of actions and mental states of being intelligible, have all been discussed as serious contenders for subjectivity in this sense. To say that a property is subjective is not to say that it can be analysed away in terms of mental states. The mental states in terms of which subjectivists have aimed to elucidate, say, redness and necessity include the states of experiencing something as red and judging something to be necessary, respectively; and these attributions embed reference to the original properties themselves - or at least to concepts thereof - in a way which prevents the elucidation from being reductive. The same point applies to a subjectivist treatment of intelligibility: the mental states in question would have to be those of finding something intelligible. Even without any commitment to reductive analysis, though, the subjectivist's claim needs extensive consideration for each of the disputed areas. In the case of colour, part of the task of the subjectivist who makes his claim at the level of properties rather than concepts is to argue against those who would identify colour properties with physical properties, or with some more complex vector of physical properties.

Suppose that for an object to have a certain property is for subjects standing in certain relations to it to be in a certain mental state. If a subject stands in the relevant relation to the object and, in that mental state, judges the object to have the property, his judgement will be true. Some subjectivists have attempted to work this point into a criterion of a property's being subjective. There is, though, a difficulty: it seems that we can make sense of the possibility that, though in certain circumstances a subject's judgement about whether an object has a property is guaranteed to be correct, it is not his judgement (in those circumstances), or anything else about his or others' mental states, which makes the judgement correct. To many philosophers, this will seem to be the actual situation for easily decided arithmetical propositions such as 3 + 3 = 6. If this is correct, the subjectivist will have to make essential use of some such asymmetrical notion as 'what makes a proposition true'. Conditionals or equivalences alone will not capture the subjectivist character of the position.

Finally, subjectivity has been attributed to modes of understanding. Such attributions are plausibly grounded in the conditions of mastery of mental concepts. For instance, those who believe that some form of imagination is involved in understanding third-person ascriptions of experiences will want to write this into the account of mastery of those ascriptions. The attribution of subjectivity to modes of understanding may then be a claim about concepts of mental properties rather than a claim about the mental properties themselves; it is not charitable to interpret it as the assertion that mental properties involve mental properties. Rather, it is the conjunction of two claims: that concepts of mental states are subjective in the senses given above, and that mental states can only be thought about by concepts which are thus subjective. Such a position need not be opposed to philosophical materialism, since it is compatible with some versions of materialism about mental states. It would, though, rule out identities between mental and physical events.

There is also the view that the claims of ethics are objectively true: that they are not 'relative' to a subject or a culture, nor purely subjective in nature, in opposition to 'error theories' and 'scepticism'. The central problem is to find the source of the required objectivity. On the absolute conception of reality, facts exist independently of human cognition, and in order for human beings to know such facts, the facts must be conceptualized. But it is we who conceptualize the world in some orderly arrangement, because the world does not automatically conceptualize itself. We develop concepts that pick out those features of the world in which we have an interest, and not others; and we use concepts that are related to our sensory capacities. For example, we do not have readily available concepts to discriminate colours that are beyond the visible spectrum. No such concepts were available at all before the modern understanding of light, and such concepts as there now are remain narrowly deployed, since most people have no reason to use them.

We can still accept that the world makes facts true or false; what counts as a fact, however, is partially dependent on human input. One part of this is the availability of concepts to describe such facts. Another part is the establishing of whether something actually is a fact: when we decide that something is a fact, it must fit into our body of knowledge of the world, and whether it can play such a role is governed by a number of considerations, all of which are value-laden. We accept as facts those things that make our theories simple, that allow for greater generalization, that cohere with other facts, and so on. On this view, then, the claim that facts exist independently of human concepts and human epistemology is rejected: what counts as a fact depends on certain kinds of values - the values that govern enquiry in all its forms, scientific, historical, literary, legal and so on.

Philosophers have handled the notion of objectivity in two fundamentally different ways. On the one hand, there is a straightforward ontological conception: something is objective if it exists, and is the way it is, independently of any knowledge, perception, conception or consciousness there may be of it. Obvious candidates include plants, rocks, atoms, galaxies, and the other material denizens of the external world. Less obvious candidates include such things as numbers, sets, propositions, primary qualities, facts, time and space. Subjective entities, conversely, will be those which could not exist or be the way they are unless they were known, perceived or at least consciously apprehended by one or more conscious beings. Such things as sensations, dreams, memories, secondary qualities, aesthetic properties and moral values have been construed as subjective in this sense.

There is, on the other hand, a notion of objectivity that belongs primarily within epistemology. On this conception, the objective-subjective distinction is not intended to mark a split in reality between autonomous and dependent entities, but to distinguish between two grades of cognitive achievement. In this sense, only such things as judgements, beliefs, theories, concepts and perceptions can significantly be said to be objective or subjective. Objectivity can here be construed as a property of the contents of mental acts or states. For example, the belief that the speed of light is 186,000 miles per second, or that London is to the west of Toronto, has an objective content; the judgement that rice pudding is disgusting, on the other hand, or that Beethoven is a greater artist than Mozart, will be merely subjective. If objectivity is in this way a property of the contents of mental acts and states, then we clearly need to specify what property it is. What we require is a minimal concept of objectivity, one that is neutral with respect to the competing and sometimes contentious philosophical theories which attempt to specify what objectivity is. In principle, this neutral concept will then be capable of comprising the pre-theoretical datum to which the various competing theories of objectivity are addressed, and of which they attempt to supply an analysis and explanation. Perhaps the best candidate exploits Kant's insight that objectivity entails what he calls 'presumptive universality': for a judgement to be objective it must at least have a content that 'may be presupposed to be valid for all men'.

These ontological and epistemic notions can cut across one another. For example, on most accounts colours are ontologically subjective: in the analysis of the property of being red, say, there will occur ineliminable reference to the perceptions and judgements of normal observers under normal conditions. And yet the judgement that a given object is red is epistemically an objective one. Rather more bizarrely, Kant argued that space is nothing more than the form of outer sense, and so is ontologically subjective; and yet the propositions of geometry, the science of space, are for Kant the very paradigms of objective judgements: necessary, universal and objectively true. Indeed, one of the liveliest debates of recent years (in logic, set theory, the foundations of semantics and the philosophy of language) concerns precisely this issue: does the epistemic objectivity of a given class of assertions require the objective existence of the entities those assertions apparently involve or range over? By and large, theories that answer this question in the affirmative can be called 'realist', and those that defend a negative answer can be called 'anti-realist'.

One intuition that lies at the heart of the realist's account of objectivity is that, in the last analysis, the objectivity of a belief is to be explained by appeal to the independent existence of the entities it concerns: epistemic objectivity, that is, is to be analysed in terms of ontological objectivity - a judgement is objective when it stands in some specified relation to an independently existing world. Frege, for example, believed that arithmetic could comprise objective knowledge only if the numbers it refers to, the propositions it consists of, the functions it employs and the truth-values it aims at are all mind-independent entities. Conversely, within a realist framework, to show that the members of a given class of judgements are merely subjective, it is sufficient to show that there exists no independent reality that those judgements characterize or refer to. Thus J. L. Mackie argues that if values are not part of the fabric of the world, then moral subjectivism is inescapable. For the realist, then, epistemic objectivity is to be elucidated by appeal to the existence of determinate facts, objects, properties, events and the like, which exist or obtain independently of any cognitive access we may have to them. And one of the strongest impulses toward Platonic realism - the positing of theoretical objects like sets, numbers, and propositions - stems from the belief that only if such things exist in their own right can we show that logic, arithmetic and science are objective.

This picture is rejected by anti-realists. Whether our beliefs are objectively true is not, according to them, to be rendered intelligible by invoking the nature and existence of reality as it is in and of itself. If our conception of objectivity is minimal, requiring only 'presumptive universality', then alternative, non-realist analyses can seem possible - and even attractive: analyses that construe the objectivity of an arbitrary judgement as a function of its coherence with other judgements, of its possession of grounds that warrant its acceptance within a given community, of its conformity to rules that constitute understanding, of its verifiability (or falsifiability), or of its permanent presence in the mind of God. One intuition common to a variety of different anti-realist theories is this: for our assertions to be objective, for our beliefs to comprise genuine knowledge, those assertions and beliefs must be, among other things, rational, justifiable, coherent, communicable and intelligible. But it is hard, the anti-realist claims, to see how such properties as these could be explained by appeal to entities 'as they are in and of themselves', for it is not on the basis of our relations to such entities that our assertions become intelligible, say, or justifiable.

On the contrary, according to most forms of anti-realism, it is only by appeal to such notions as 'the way reality seems to us', 'the evidence that is available to us', 'the criteria we apply', 'the experiences we undergo', or 'the concepts we have acquired' that the objectivity of our beliefs can conceivably be explained.

In addition to marking the ontological and epistemic contrasts, the objective-subjective distinction has been put to a third use: to differentiate points of view. An objective point of view is one that aims to characterize the world from no particular time, place, circumstance, or personal perspective; it finds its clearest expression in sentences devoid of indexical, tensed or other token-reflexive elements. Nagel calls this 'the view from nowhere'. A subjective point of view, by contrast, is one that possesses characteristics determined by the identity or circumstances of the person whose point of view it is. The philosophical problems here turn on the question whether there is anything that an exclusively objective description would necessarily leave out. Is there, for instance, a language with the same expressive power as our own but which lacks all token-reflexive elements? Or, more metaphorically, are there genuinely and irreducibly subjective aspects of my existence - aspects which belong only to my unique perspective on the world and which must, therefore, resist capture by any purely objective conception of the world?

Idealism is any doctrine holding that reality is fundamentally mental in nature, though the boundaries of such a doctrine are not firmly drawn: for example, the traditional Christian view that God is a sustaining cause possessing greater reality than his creation might just be classified as a form of idealism. Leibniz's doctrine that the simple substances out of which all else is made are themselves perceiving and appetitive creatures (monads), and that space and time are relations among these things, is another early version. The major forms of idealism include subjective idealism - the position better called 'immaterialism' and associated with the Irish idealist George Berkeley (1685-1753), according to which to exist is to be perceived - 'transcendental idealism' and 'absolute idealism'. Idealism is opposed to the naturalistic belief that mind is itself to be exhaustively understood as a product of natural processes. The most common modern manifestation of idealism is the view called 'linguistic idealism', on which we 'create' the world we inhabit by employing mind-dependent linguistic and social categories. The difficulty is to give this view a literal form that respects the obvious fact that we do not create worlds, but find ourselves in one.

As a philosophical doctrine, idealism holds that reality is somehow mind-correlative or mind-coordinated - that the real objects comprising the 'external world' are not independent of cognizing minds, but exist only as in some way correlative to mental operations; reality as we understand it reflects the workings of mind. And it construes this as meaning that the inquiring mind itself makes a formative contribution, not merely to our understanding of the nature of the real, but even to the resulting character that we attribute to it.

There has long been dispute within the idealist camp over whether 'the mind' at issue in such idealistic formulations is a mind emplaced outside of or behind nature (absolute idealism), or a nature-pervasive power of rationality of some sort (cosmic idealism), or the collective impersonal social mind of people-in-general (social idealism), or simply the distributive collection of individual minds (personal idealism). Over the years, the less grandiose versions of the theory came increasingly to the fore, and in recent times virtually all idealists have construed 'the minds' at issue in their theory as separate individual minds equipped with socially engendered resources.

It is quite unjust to charge idealism with an antipathy to reality, for it is not the existence but the nature of reality that the idealist puts in question. It is not reality but materialism that classical idealism rejects. Agreed, everything is what it is and not another thing; the difficulty is to know when we have one thing and not two. A rule for telling this is a principle of 'individuation', or a criterion of identity for things of the kind in question. In logic, identity may be introduced as a primitive relational expression, or defined via the identity of indiscernibles. Berkeley's 'immaterialism', for its part, does not so much reject the existence of material objects as their unperceivedness.

There are certainly versions of idealism short of the spiritualistic position of an ontological idealism which holds that 'there is nothing but thinking beings'. Idealism need not go so far as to affirm that mind makes or constitutes matter: it is quite enough to maintain (for example) that all of the characterizing properties of physical existents resemble phenomenal sensory properties in representing dispositions to affect minds in a certain sort of way, so that these properties have no standing at all without reference to minds.

Weaker still is an explanatory idealism which merely holds that all adequate explanations of the real invariably require some recourse to the operations of mind. Historically, positions of these generally idealistic types have been espoused by numerous thinkers. George Berkeley, for example, maintained that 'to be [real] is to be perceived'. This does not seem particularly plausible, because of its inherent commitment to omniscience: it seems more sensible to claim that to be is to be perceivable. For Berkeley, of course, this was a distinction without a difference: if something is perceivable at all, then God perceives it. But if we forgo philosophical alliance to God, the issue looks different, and comes to pivot on the question of what is perceivable for perceivers who are physically realizable in 'the real world' - so that physical existence could be seen, not implausibly, as tantamount to observability-in-principle.

The three positions to the effect that real things just exactly are things as philosophy or as science or as 'commonsense' takes them to be - positions generally designated as scholastic, scientific and naïve realism, respectively - are in fact versions of epistemic idealism, exactly because they see reals as inherently knowable and do not contemplate mind-transcendence for the real. Thus, for example, naïve ('commonsense') realism holds that external things exist precisely as we know them; this sounds realistic, but is at bottom idealistic.

There is also another sort of idealism at work in philosophical discussion: an axiological idealism, which maintains both that value plays an objectively causal and constitutive role in nature and that value is not wholly reducible to something that lies in the minds of its beholders. Its exponents join the Socrates of Plato's 'Phaedo' in seeing value as objective and as productively operative in the world.

Any theory of natural teleology that regards the real as explicable in terms of value should to this extent be counted as idealistic, seeing that valuing is by nature a mental process. To be sure, the good of a creature or species of creatures - their well-being or survival, for example - need not actually be mind-represented. Nonetheless, goods count as such precisely because, if the creatures at issue could think about it, they would adopt them as purposes. It is this circumstance that renders any sort of teleological explanation at least conceptually idealistic in nature. Doctrines of this sort were the stock in trade of Leibniz, with his insistence that the real world must be the best of possibilities, and this line of thought has recently surfaced once more in the controversial 'anthropic principle' espoused by some theoretical physicists.

Then too, it is possible to contemplate a position along the lines envisaged by Fichte's 'Wissenschaftslehre', which sees the ideal as providing the determining factor for the real. On such a view, the real is characterized not by the sciences we actually have, but by the ideal science that is the 'telos' of our scientific efforts. On this approach, which Wilhelm Wundt characterized as 'ideal-realism', the knowledge that achieves adequation to the real by adequately characterizing the true facts in scientific matters is not the knowledge afforded by present-day science as we have it, but only that of an ideal or perfected science. On such an approach - one that has seen a lively revival in recent philosophy - a tenable version of 'scientific realism' requires the step to idealization, and realism becomes predicated on assuming a fundamentally idealistic point of view.

Immanuel Kant's 'Refutation of Idealism' argues that our conception of ourselves as mind-endowed beings presupposes material objects, because we view our minds as existing in an objective temporal order, and such an order requires the existence of periodic physical processes (clocks, pendula, planetary regularities) for its establishment. At most, however, this argumentation succeeds in showing that such physical processes have to be assumed by minds; the issue of their actual mind-independent existence remains unaddressed. (Kant's own realism is, accordingly, an empirical realism.)

It is sometimes said that idealism is predicated on a confusion of objects with our knowledge of them, and conflates the real with our thought about it. But this charge misses the point. The only reality with which we inquirers can have any cognitive contact is reality as we conceive it; our only cognitive access to reality is through the mediation of mind-devised models of it.

Perhaps the most common objection to idealism turns on the supposed mind-independence of the real: 'Surely', runs the objection, 'things in nature would remain substantially unchanged if there were no minds.' This is perfectly plausible in one sense, namely the causal one - which is why causal idealism has its problems. But it is certainly not true conceptually. The objection's exponent has to face the question of specifying just exactly what it is that would remain the same. 'Surely roses would smell just as sweet in a mind-deprived world!' Well . . . yes and no. Agreed: the absence of minds would not change the roses. But roses, and rose fragrance, and sweetness - and even the size of roses - are features whose determination hinges on such mental operations as smelling, scanning, measuring, and the like. Mind-requiring processes are required for something in the world to be discriminated as a rose and determined to be the bearer of certain features.

Identification, classification, and the attribution of properties are all by their nature mental operations. To be sure, the role of mind here may be merely hypothetical ('if certain interactions with duly constituted observers took place, then certain outcomes would be noted'), but the fact remains that nothing could be discriminated or characterized as a rose in a conceptual environment where the prospect of performing suitable mental operations (measuring, smelling, etc.) is not presupposed.

The preceding versions of idealism at once suggest the variety of corresponding rivals or contrasts to idealism. On the ontological side there is materialism, which takes two major forms: (1) a causal materialism, which asserts that mind arises from the causal operations of matter, and (2) a supervenience materialism, which sees mind as an epiphenomenon of the machinations of matter (albeit not a causal product thereof - presumably because it is somewhere between difficult and impossible to explain how physical processes could engender psychical results).

On the epistemic side, the idealism-opposed positions include: (1) a factual realism, which maintains that there are linguistically inaccessible facts - that the complexity and diversity of fact outrun the reach of mind's actual and possible linguistic (or, generally, symbolic) resources; (2) a cognitive realism, which maintains that there are unknowable truths - that the domain of truth runs beyond the limits of the mind's cognitive access; (3) a substantival realism, which maintains that there exist entities in the world which cannot possibly be known or identified - incognizables lying in principle beyond our cognitive reach; and (4) a conceptual realism, which holds that the real can be characterized and explained by us without the use of any specifically mind-invoking conceptions, such as dispositions to affect minds in particular ways. This variety of versions of idealism and realism means that some versions of the one will be unproblematically combinable with some versions of the other. In particular, a conceptual idealism which maintains that we standardly understand what it is for something to be real in somehow mind-invoking terms is combinable with a materialism which holds that the human mind and its operations are rooted (be it causally or superveniently) in the machinations of physical processes.

Perhaps the strongest argument favouring idealism is that any characterization of the real is a mind-construction: our only access to information about what the real is comes by means of the mediation of mind. What seems right about idealism is inherent in the fact that, in investigating the real, we are clearly constrained to use our own concepts to address our own issues; we can only learn about the real in our own terms of reference. What seems right about realism, on the other hand, is that the answers to our questions are provided by reality itself: whatever the answers may be, they are substantially what they are because it is reality that determines them to be that way. Mind proposes, but reality disposes; and for anything to be learnt about reality, it has to be approachable by minds. Accordingly, while idealism has a long and varied past and a lively present, it undoubtedly has a promising future as well.

Consider next our acquaintance with 'experience'. Experience is easily thought of as a stream of private events, known only to their possessor, and bearing at best problematic relationships to any other events, such as happenings in an external world or the similar streams of other possessors. The stream makes up the conscious life of the possessor. On this picture there is a complete separation of mind and world, and in spite of great philosophical efforts the gap, once opened, proves impossible to bridge; both 'idealism' and 'scepticism' are common outcomes. The aim of much recent philosophy, therefore, has been to articulate a less problematic conception of experience, making it objectively accessible, so that the facts about how a subject experiences the world are, in principle, as knowable as the facts about how the same subject digests food. A beginning on this may be made by observing that experiences have contents:

It is the world itself that is represented to us, one way or another, and the way we take the world to be is publicly manifested in our words and behaviour. My relationship with my own experience itself involves memory, recognition and description, all of which arise from skills that are equally exercised in interpersonal transactions. Recently, emphasis has also been placed on the way in which experience should be regarded as a 'construct', the upshot of the workings of many cognitive sub-systems (although this idea was familiar to Kant, who thought of experience as itself synthesized by various active operations of the mind). The extent to which these moves undermine the distinction between 'what it is like from the inside' and how things are objectively is fiercely debated. It is also widely recognized that such developments tend to blur the line between experience and theory, making it harder to formulate traditional doctrines such as 'empiricism'.

These considerations bring us to Cartesianism, the name accorded to the philosophical movement inaugurated by René Descartes (after 'Cartesius', the Latin version of his name). The main features of Cartesianism are: (1) the use of methodical doubt as a tool for testing beliefs and reaching certainty; (2) a metaphysical system which starts from the subject's indubitable awareness of his own existence; (3) a theory of 'clear and distinct ideas' based on the innate concepts and propositions implanted in the soul by God (these include the ideas of mathematics, which Descartes takes to be the fundamental building blocks of science); and (4) the theory now known as 'dualism' - that there are two fundamentally incompatible kinds of substance in the universe, mind (thinking substance) and matter (extended substance). A corollary of this last theory is that human beings are radically heterogeneous, composed of an unextended, immaterial consciousness united to a piece of purely physical machinery, the body. Another key element in Cartesian dualism is the claim that the mind has perfect and transparent awareness of its own nature or essence.

A distinctive feature of twentieth-century philosophy has been a series of sustained challenges to dualisms which earlier periods took for granted. The split between mind and body that dominated the modern period was challenged, and explained away, in a variety of ways: Heidegger, Merleau-Ponty, Wittgenstein and Ryle all rejected the Cartesian model, though in quite different ways. Other cherished dualisms have been attacked as well - for example, the analytic-synthetic distinction, the dichotomy between theory and practice, and the fact-value distinction. However, unlike the rejection of Cartesianism, these latter dualisms remain under debate, with substantial support on either side.

Cartesian dualism is the view that mind and body are two separate and distinct substances: the self happens to be associated with a particular body, but is in itself a substance capable of independent existence.

Descartes claimed that we could lay the contours of physical reality out in three-dimensional coordinates, and that we could derive a scientific understanding of that reality with the aid of precise deduction. Following the publication of Isaac Newton's 'Principia Mathematica' in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became the central feature and principle of scientific knowledge.

The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile or eliminate Descartes' stark division between mind and matter became a central feature of Western intellectual life.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes' compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence, and proclaimed that 'Liberty, Equality, Fraternity' are the guiding principles of this consciousness. Rousseau also fabricated the idea of the 'general will' of the people to achieve these goals, and declared that those who do not conform to this will are social deviants.

The Enlightenment idea of 'deism', which imaged the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at its origin, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had formerly been structured on the twin foundations of reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that the truths of spiritual reality can be known only by way of revelation. This engendered a conflict between reason and revelation that persists to this day, and it laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating the relation between mind and matter, and for the manner in which the special character of each should ultimately be defined.

The nineteenth-century Romantics in Germany, England and the United States revived Rousseau's attempt to posit a ground for human consciousness by reifying nature in a different form. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an inseparable spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific observation. In Goethe's attempts to wed mind and matter, nature became a mindful agency that 'loves illusion', shrouds man in mist, presses him to her heart, and punishes those who fail to see the light. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful creative spirit that unites mind and matter is progressively moving toward self-realization and 'undivided wholeness'.

The British version of Romanticism, articulated by figures like William Wordsworth and Samuel Taylor Coleridge, placed more emphasis on the primacy of the imagination and the importance of rebellion and heroic vision as the grounds for freedom. As Wordsworth put it, communion with the 'incommunicable powers' of the 'immortal sea' empowers the mind to release itself from all the material constraints of the laws of nature. The founders of American transcendentalism, Ralph Waldo Emerson and Henry David Thoreau, articulated a version of Romanticism commensurate with the ideals of American democracy.

The Americans envisioned a unified spiritual reality that manifested itself as a personal ethos that sanctioned radical individualism and bred aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans too dissolved the distinction between mind and matter with an appeal to ontological monism, alleging that mind could free itself from all the limitations of matter in states of mystical awareness.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamics and structure of mind became the province of social scientists and humanists. Adolphe Quételet proposed a 'social physics' that could serve as the basis for a new discipline called 'sociology', and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

A new understanding of the relationship between mind and world awaits articulation, and it must be framed within the larger context of the history of mathematical physics, the origin and extensions of the classical view of the foundations of scientific knowledge, and the various ways that physicists have attempted to meet previous challenges to the efficacy of classical epistemology.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of the Cartesian dualism with emotional content was the death-of-God theologian Friedrich Nietzsche (1844-1900). After declaring that God and 'divine will' did not exist, Nietzsche reified the 'existence' of consciousness in the domain of subjectivity as the ground for individual 'will' and summarily dismissed all previous philosophical attempts to articulate the 'will to truth'. The dilemma, as Nietzsche saw it, is that earlier versions of the 'will to truth' disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual 'will'.

In Nietzsche's view, the separation between mind and matter is more absolute and total than had previously been imagined. Since there are no real necessities for correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in 'a prison house of language'. The prison, as he conceived it, was also a 'space' where the philosopher can examine the 'innermost desires of his nature' and articulate a new message of individual existence founded on 'will'.

Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals, and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, is applicable only to natural phenomena and favours reductionistic examination of phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche's emotionally charged defence of intellectual freedom, and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe, proved enormously influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism and deconstruction: Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. This direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us a better understanding of the origins of that cultural conflict and of the ways in which it might be resolved.

The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach's critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, 'relativistic' notions.

Albert Einstein unveiled two theories: the special theory of relativity (1905) and the general theory of relativity (1915). The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. The purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had postulated absolute space.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces a 'principle of progressive order' in the complementary relations between the whole and its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.

But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute the position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined or invalidated by appeals to scientific knowledge.

In spite of the notorious difficulty of reading Kantian ethics, the basic distinction is clear enough. A hypothetical imperative embeds a command that is in force only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire; if one has no desire to look wise, it does not apply. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclinations. It could be represented as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
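
The difference in logical form can be displayed schematically. In the rough notation below (mine, not Kant's), $D$ records an agent's desire and $O$ is an ought-operator:

    D(\text{look wise}) \rightarrow O(\text{stay quiet})   % hypothetical: binds only given the desire
    O(\text{tell the truth})                               % categorical: binds unconditionally

The bartender example shows why surface form is an unreliable guide: an injunction written with a conditional antecedent may still bind unconditionally, the antecedent merely fixing the occasion on which it is activated.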

In Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five formulations of the categorical imperative: (1) the formula of universal law: 'act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

Even so, a proposition that is not conditional is categorical. Modern opinion is wary of this distinction, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: 'X is intelligent' (categorical?) = 'if X is given a range of tasks, she performs them better than many people' (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
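
The intelligence example, put in quantificational dress (the predicates are illustrative only), makes the worry vivid:

    \mathrm{Intelligent}(x) \;\leftrightarrow\; \forall t\,\bigl(\mathrm{GivenTask}(x,t) \rightarrow \mathrm{PerformsWell}(x,t)\bigr)

The left-hand side looks like a standing, categorical fact about $x$; the right-hand side unpacks it as a batch of conditionals. Nothing in the surface grammar settles which rendering gives the real logical form.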

A field is a central concept of physical theory. A field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium. Are fields pure potential, fully characterized by dispositional statements or conditionals, or are they categorical and actual? The former option seems to require ungrounded dispositions: regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires understanding of how forces of attraction and repulsion can be 'grounded' in the properties of the medium.
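
The textbook definition of a force field as 'the force a test particle would experience at a point' can be put in a minimal computational sketch. This is only an illustration of the definition, not part of the philosophical argument; the function name, the choice of a single point mass, and the sample values are my own assumptions.

    import math

    G = 6.674e-11  # gravitational constant, N m^2 / kg^2

    def gravitational_field(source_mass, source_pos, point):
        """Field value at `point`: the force per unit mass a test
        particle would experience if it were located at that point."""
        dx = point[0] - source_pos[0]
        dy = point[1] - source_pos[1]
        dz = point[2] - source_pos[2]
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        magnitude = G * source_mass / r**2  # Newton's inverse-square law
        # The field points from the test point back toward the source (attraction).
        return (-magnitude * dx / r,
                -magnitude * dy / r,
                -magnitude * dz / r)

    # The field assigns a value to every point in space, whether or not a
    # particle is actually there -- which is just the dispositional reading.
    g = gravitational_field(5.97e24, (0.0, 0.0, 0.0), (6.371e6, 0.0, 0.0))
    print(g)  # roughly (-9.8, 0.0, 0.0): Earth's surface gravity

Note that the function returns a value for any point whatever; the philosophical dispute is precisely over whether that value reports a real modification of a medium at the point or merely a conditional about what a particle would do if placed there.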

The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism; nonetheless, his equal hostility to 'action at a distance' muddies the waters. The notion is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced the scientist Michael Faraday, with whose work the physical notion became established. In his paper 'On the Physical Character of the Lines of Magnetic Force' (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether their motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.

The pragmatic theory of truth is especially associated with the American psychologist and philosopher William James (1842-1910): the truth of a statement can be defined in terms of the 'utility' of accepting it. Put so baldly, the view is open to objection, since there are things that are false that it may be useful to accept, and conversely there are things that are true that it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate and the likely success of the projects of its possessor. The evolution of a system of representation, either perceptual or linguistic, seems bound to connect success with adaptation, or with utility in the modest sense. The Wittgensteinian doctrine that meaning is use bears on the same cluster of questions: the nature of belief and its relations with human attitude and emotion, and the idea that belief answers to truth on the one hand and guides action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us as cognitive creatures: because beliefs have effects, they work. Pragmatism can be found in Kant's doctrine of the primacy of practical reason, and it continued to play an influential role in the theory of meaning and of truth.

James (1842-1910), with characteristic generosity, exaggerated his debt to Charles S. Peirce (1839-1914), who had charged that the Cartesian method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and who criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's personal consciousness.

From his earliest writings, James understood cognitive processes in teleological terms: thought, he held, assists us in the satisfaction of our interests. His 'will to believe' doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.

Such an approach, however, sets James's theory of meaning apart from verificationism. Unlike the verificationists, who take cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and motor responses, and his standard of value is not a way of dismissing metaphysical claims as meaningless. It should also be noted that James did not hold that even his broad set of consequences was exhaustive of a term's meaning. 'Theism', for example, he took to have antecedent, definitional meaning, in addition to its important pragmatic meaning.

James's theory of truth reflects his teleological conception of cognition: a true belief is one that is compatible with our existing system of beliefs and that leads us to satisfactory interaction with the world.

Peirce's famous pragmatist principle, by contrast, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid. If we believe this, we expect that certain actions of ours will have certain experimental results: dipping blue litmus paper into the liquid, for example, should turn the paper red. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a concept provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification via the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
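
Schematically (my notation, not Peirce's), the pragmatic clarification of a hypothesis is the list of action-to-experience conditionals associated with it:

    \mathrm{content}\bigl(\mathrm{Acid}(a)\bigr) \;\approx\; \{\, \mathrm{do}(A_1) \rightarrow \mathrm{expect}(E_1),\ \mathrm{do}(A_2) \rightarrow \mathrm{expect}(E_2),\ \ldots \,\}

with, for instance, $A_1$ = dip blue litmus paper into $a$ and $E_1$ = the paper turns red. The complete, orderly set of such conditionals is the clarified content of the concept.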

Most important is the application of the pragmatic principle in Peirce's account of reality: when we take something to be real, we think it is 'fated to be agreed upon by all who investigate' the matter to which it stands. In other words, if I believe that it is really the case that 'P', then I expect that anyone who inquired deeply enough into whether 'P' would eventually arrive at the belief that 'P'. It is not part of the theory that the experimental consequences of our actions should be specified in a favoured empiricist vocabulary - Peirce insisted that perceptual judgments are already laden with theory. Nor is it his view that the conditionals that clarify a concept are all analytic. In addition, in later writings he argues that the pragmatic principle could only be made plausible to someone who accepted its metaphysical realism: it requires that 'would-bes' are objective and, of course, real.

If realism itself can be given a fairly quick clarification, it is more difficult to chart the various forms of opposition to it, for they seem legion. Some opponents deny that the entities posited by the relevant discourse exist; others deny that they exist independently of us. The standard example of the latter is 'idealism': the doctrine that reality is somehow mind-correlative or mind-co-ordinated - that the real objects comprising the 'external world' are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine centres on the conceptual point that reality as we understand it is meaningful and reflects the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the 'real' but even to the resulting character we attribute to it.

The term 'real' is most straightforwardly used when qualifying another term: a real 'x' may be contrasted with a fake 'x', a failed 'x', a near 'x', and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed by some doctrine or theory to treating it as a thing. The central error in thinking of reality as the totality of existence is to think of the 'unreal' as a separate domain of things, unfairly deprived of the benefits of existence.

Nothing, the non-existence of all things, is often dismissed as the product of a logical confusion: treating the term 'nothing' as itself a referring expression instead of a 'quantifier'. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as 'Nothing is all around us' talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate 'is all around us' has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothingness are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between 'existentialist' and 'analytic' philosophy on the point is that whereas the former is afraid of Nothing, the latter thinks that there is nothing to be afraid of.
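
The quantifier reading is easily displayed in standard logical notation. 'Nothing is all around us' has the form of a negated existential, not of a subject-predicate sentence about an entity called Nothing:

    \neg \exists x\, \mathrm{AllAround}(x)

The sentence denies that the predicate has an instance in the domain; no referring expression occurs in it at all.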

A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.

The standard opposition is between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925), is borrowed from the 'intuitionistic' critique of classical mathematics, and holds that the unrestricted use of the 'principle of bivalence' is the trademark of 'realism'. However, this has to overcome counterexamples both ways: although Aquinas was a moral 'realist', he held that moral reality was not sufficiently structured to make true or false every moral claim, unlike Kant, who believed that he could use the law of bivalence happily in mathematics precisely because it deals only with our own constructions. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things - surrounding objects really exist and are independent of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who are impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.

The modern treatment of existence in the theory of 'quantification' is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. In this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem for the account is created by sentences like 'This exists', where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. 'This exists' is therefore unlike 'Tame tigers exist', where a property is said to have an instance, for the word 'this' does not locate a property, but only an individual.
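
Both points fit naturally into quantificational notation (standard renderings, not quotations from Frege):

    \exists x\,\bigl(\mathrm{Tiger}(x) \wedge \mathrm{Tame}(x)\bigr) \qquad\text{equivalently}\qquad \#\{x : \mathrm{Tiger}(x) \wedge \mathrm{Tame}(x)\} \neq 0

The first formula treats existence as the instantiation of a property; the second is Frege's dictum that affirming existence is denying that the number of instances is nought. 'This exists' resists the pattern because 'this' supplies an individual, not a predicate for the quantifier to operate on.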

Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.

Being itself is a more elusive subject. Since little can be said about everything that is, simply insofar as it is, it is not apparent that there can be such a subject at all. Nevertheless, the concept had a central place in philosophy from Parmenides to Heidegger. The essential question, 'Why is there something and not nothing?', prompts logical reflection on what it is for a universal to have an instance, and has inspired a long history of attempts to explain contingent existence by reference to a necessary ground.

In the tradition since Plato, this ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains obscure. The celebrated ontological argument for the existence of God was first propounded by Anselm in his Proslogion. The argument proceeds by defining God as 'something than which nothing greater can be conceived'. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we could conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but exists in reality.

An influential argument (or family of arguments) for the existence of God is the cosmological argument, whose premise is that all natural things are dependent for their existence on something else. The totality of dependent things must then itself depend upon something that is not similarly dependent: a being whose existence is necessary rather than contingent, and this is God. Like the argument to design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.

Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely springs forth again. Consequently, the 'God' or 'gods' that end the regress must exist necessarily: they must not be entities of which the same kinds of question can be raised. The other problem with the argument is that it gives no reason for attributing concern and care to the deity, nor for connecting the necessarily existent being it derives with human values and aspirations.

The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unbeliever, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence. Its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.

In the twentieth century, modal versions of the ontological argument have been propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every 'possible world'. To allow that it is at least possible that such a being exists means that there is a possible world in which it exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in a world entails that it exists and is perfect in every world); so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from 'possibly necessarily p' we can derive 'necessarily p'. A symmetrical proof starting from the premise that it is possible that such a being does not exist would derive that it is impossible that it exists.
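
The modal machinery can be set out explicitly. Writing $G$ for 'an unsurpassably great being exists', the definition makes greatness entail necessary existence, and the argument runs in the system S5, where $\Diamond\Box p \rightarrow \Box p$ is a theorem:

    \Box(G \rightarrow \Box G),\ \Diamond G \ \vdash\ \Diamond\Box G \ \vdash_{\mathrm{S5}}\ \Box G \ \vdash_{\mathrm{T}}\ G

The symmetrical proof starts instead from $\Diamond\neg G$: since $\Diamond\neg G$ is equivalent to $\neg\Box G$, and in S5 $\neg\Box G$ implies $\Box\neg\Box G$, the premise $\Box(G \rightarrow \Box G)$ then yields $\Box\neg G$ - it is impossible that such a being exists. Everything turns on which possibility premise one concedes at the outset.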

The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that as a result of the omission the same outcome occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine, not a murderer. Critics reply that omissions can be as deliberate and immoral as commissions: if I am responsible for your food and fail to feed you, my omission is surely a killing. 'Doing nothing' can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and depending on the context may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which may be permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that can bear any general moral weight.

The doctrine of double effect is a principle attempting to define when an action that has both good and bad results is morally permissible. In one formulation such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing the nearby civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one thing or two: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).

A form, therefore, is in some sense available to reanimate a new body. Yet it is then not I who survive bodily death; rather, I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given.

Difficulty at this point led the logical positivists to abandon the notion of an epistemological foundation altogether and to flirt with the coherence theory of truth, and it is now widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable 'myth of the given'. The special way that we each have of knowing our own thoughts, intentions, and sensations has been challenged by behaviourist and functionalist tendencies in philosophy, which have found it important to deny that there is such a special way, arguing that I know of my own mind much as I know of yours, e.g., by seeing what I say when asked. Others, however, point out that reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology.

The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the eighteenth century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past; in Hegelian usage it came to mean universal or world history. The Enlightenment confidence that superstition was being replaced by science, reason, and understanding gave history a progressive moral thread, and under the influence of the German Romantic philosopher Johann Gottfried Herder (1744-1803) and of Immanuel Kant, this was taken further to hold that the philosophy of history is the detecting of a grand design: the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is readily intelligible if the world of nature and the world of thought become identified. The work of Herder, Kant, Fichte and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, equated with freedom within the state; this in turn is the development of thought, or a logical development in which various necessary moments in the life of the concept are successively achieved and improved upon. Hegel's method is at its most successful when the object is the history of ideas, where the evolution of thinking may march in step with the logical oppositions and their resolutions encountered by successive systems of thought.

With the revolutionary communism of Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel's progressive structure but placing the achievement of the goal of history in a future in which the political conditions for freedom come to exist, so that economic and political factors rather than 'reason' are in the engine room. Although speculation upon history of this kind continued to be written, by the late nineteenth century attention had turned to the nature of historical understanding, and in particular to a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, but nonetheless in some way different from the enquiries of the scientist. Since the subject-matter is the past thoughts and actions of human beings, what is needed is the ability to relive that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the Verstehen approach. On this view, the historian's understanding of past agents is not gained by the tacit use of a 'theory' enabling the inference of their thoughts and intentions from their actions, but by re-living their situation and thereby understanding what they experienced and thought; general laws have at most a minor place in the human sciences.

On the opposing 'theory-theory', everyday attributions of intention, belief, and meaning to other persons proceed via tacit use of a theory that enables us to construct interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meaning of terms in its native language.

On the simulation alternative, our understanding of others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation 'in their moccasins', or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own.

To return to Aquinas: a form is in some sense available to reanimate a new body, so it is not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account, a person enjoys no privileged self-understanding; we understand ourselves, just as we do everything else, through sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. In the theory of knowledge, Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known: a human being's corporeal nature therefore requires that knowledge start with sense perception. The same limitations do not apply further up the hierarchy of being, to creatures such as the angels.

In the domain of theology, Aquinas deploys the distinction emphasized by Eriugena between what can be established by reason and what is known by faith, and offers five arguments for the existence of God: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, or in other words something that has necessary existence; (4) the gradations of value in things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end to which all things are directed, and the existence of this end demands a being that ordained it. All are arguments that move from features of the natural world to God; lying within the province of reason rather than faith, they are Aquinas's proofs of the existence of God.

He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, whose acceptance is more a matter of moral will. God's essence is identified with his existence, as pure actuality. God is simple, containing no potential. Nevertheless, we cannot obtain knowledge of what God is (his quiddity); we must remain content with descriptions that apply to him partly by way of analogy: God as revealed is known to us, but not as he is in himself.

A vivid problem in ethics is posed by the English philosopher Philippa Foot in her 'The Problem of Abortion and the Doctrine of the Double Effect' (1967). A runaway train or trolley comes to a fork in the track. One person is working on one branch and five on the other, and the trolley will put an end to anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and you as a bystander can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of the one person? After all, whom have you wronged if you leave it to go its own way? The situation is typical of many in which utilitarian reasoning seems to lead to one course of action, while a person's integrity or principles may oppose it.

Describing events that merely happen does not of itself permit us to talk of rationality and intention, which are the categories we apply when we conceive of them as actions. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the 'will' and 'free will'. Other problems in the theory of action include drawing the distinction between an action and its consequences, and describing the structure involved when we do one thing 'by' doing another. Even the placing and dating of actions can be problematic: where someone shoots someone on one day and in one place, and the victim dies on another day and in another place, where and when did the murderous act take place?

In the theory of causation, it is not clear that only events can be causally related. Kant cites the example of a cannonball resting on a cushion and causing the cushion to be the shape that it is, to suggest that states of affairs or objects or facts may also be causally related. The central problem is to understand the element of necessitation or determinacy of the future. Hume argued that causes and effects appear 'entirely loose and separate': all that perception gives us is knowledge of the patterns that events actually fall into, not any acquaintance with the connections determining those patterns. It is, however, clear that our conceptions of everyday objects are largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the 'must' of causal necessitation. Particular examples of puzzling causation arise quite apart from the general problem of forming any conception of what causation is: How are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?

Within the contemporary world, the distinction between the 'in itself' and the 'for itself' descends from the Kantian epistemological distinction between the thing as it is in itself and the thing as it is for us, as an appearance. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing insofar as it stands in relation to our cognitive faculties and to other objects. 'Now a thing in itself cannot be known through mere relations; and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself.' Kant applies this same distinction to the subject's cognition of itself. Since the subject can know itself only insofar as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus only as it is related to itself, it represents itself 'as it appears to itself, not as it is'. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant insofar as the distinction between what an object is in itself and what it is for a knower applies to the subject's own knowledge of itself.

The German philosopher Georg Wilhelm Friedrich Hegel (1770-1831) begins the transformation of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is, as it is in fact or in itself, necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in fact or in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of its potential to enter into explicit relations with itself. And, just as for consciousness to be explicitly itself is for it to be for itself, that is, in relation to itself, explicitly self-conscious, so the being for itself of any entity is that entity insofar as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is the plant in itself or implicitly, while the mature plant, which involves actual relations among the plant's various organs, is the plant 'for itself'. In Hegel, then, the in itself/for itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being in itself of the plant, or the plant as potential adult, is ontologically distinct from the being for itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction. To know a thing it is necessary to know both the actual, explicit self-relations which mark the thing (the being for itself of the thing) and the inherent simple principle of these relations (the being in itself of the thing). Real knowledge, for Hegel, thus consists in knowledge of the thing as it is in and for itself.

Sartre's distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent object which is intended by consciousness, i.e., being in itself. Being in itself is marked by the total absence of relations, whether within itself or with any other being. On the other hand, what it is for consciousness to be, being for itself, is marked by self-relation. Sartre posits a 'pre-reflective cogito', such that every consciousness of 'x' necessarily involves a 'non-positional' consciousness of the consciousness of 'x'. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself, insofar as it is related to itself by appearing to itself, and in Hegel every entity can be considered as both in itself and for itself, in Sartre to be self-related, for itself, is the distinctive ontological mark of consciousness, while to lack relations, to be in itself, is the distinctive ontological mark of non-conscious entities.

The problem of free will is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event 'C', there will be an antecedent state of nature 'N' and a law of nature 'L', such that given 'L', 'N' will be followed by 'C'. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state 'N' and the laws. Since determinism is universal, these in turn are fixed, and so backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
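
The definition just given can be displayed schematically; this sketch simply mirrors the letters used in the text:

\[
\forall C\; \exists N\; \exists L\; \bigl((N \wedge L) \rightarrow C\bigr)
\]

where \(N\) is an antecedent state of nature and \(L\) a law of nature. Substituting for \(C\) an event of my choosing or doing something makes the worry explicit: \(N\) and \(L\) can be traced back to conditions obtaining before my birth, yet together they fix \(C\).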

Reactions to this problem are commonly classified as: (1) Hard determinism, which accepts the conflict and denies that you have real freedom or responsibility. (2) Soft determinism or compatibilism, a family of reactions asserting that everything you need in a notion of freedom is quite compatible with determinism. In particular, if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to choose as you did is deemed irrelevant on this option). (3) Libertarianism, the view that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or postulating that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, so that it is only through confusing them that the problem seems urgent. None of these avenues has gained general acceptance. It is, in any case, an error to confuse determinism with fatalism.

The dilemma for determinism is often put thus: if an action is the end of a causal chain, one that stretches back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.

The dilemma then adds that if, on the other hand, an action is not the end of such a causal chain, then either it or one of its contributing causes occurs at random, in that no antecedent events brought it about, and in that case nobody is responsible for its ever occurring. So, whether or not determinism is true, responsibility is shown to be illusory.

Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, falling short of what one has resolved, bad.

A volition is a mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. Theories that there are such acts are problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now itself stands in need of explanation. For Kant, by contrast, to act in accordance with the law of autonomy or freedom is to act in accordance with universal moral law and regardless of selfish advantage.

A categorical imperative, in Kantian ethics, contrasts with a hypothetical imperative, which embeds a command that applies only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet binds only those with the antecedent desire or inclination; if one has no desire to look wise, it lapses. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be expressed as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always marked by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed some of the forms of the categorical imperative: (1) the formula of universal law: 'Act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'Act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or the consideration of 'the will of every rational being as a will which makes universal law'; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

A central object in the study of Kant's ethics is to understand the expressions of these inescapable, binding requirements of the categorical imperative, and to understand whether they are equivalent at some deep level. Kant's own application of the notions is not always convincing. One cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of a sentiment, but must derive from something 'unconditional' or 'necessary', such as the voice of reason. The standard mood of sentences used to issue requests and commands is the imperative; it is as basic as the indicative, used to communicate information, and animal signalling systems may often be interpreted either way. A standing problem is understanding the relationship between commands and other action-guiding uses of language, such as ethical discourse; the ethical theory of 'prescriptivism' in fact equates the two functions. A further question is whether there is an imperative logic. 'Hump that bale' seems to follow from 'Tote that barge and hump that bale', as 'It's windy' follows from 'It's windy and it's raining'. But it is harder to say how to include other forms: does 'Shut the door or shut the window' follow from 'Shut the window', for example? The usual approach is to develop the logic in terms of the possibility of satisfying one command without satisfying the other, thereby turning it into a variation of ordinary deductive logic.
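
The closing proposal, that imperative inference be understood through satisfaction conditions, can be sketched as follows; this is an illustrative reconstruction, not a settled doctrine. Let \(!\varphi\) be the command to make \(\varphi\) true, and say that one command entails another just in case every state of affairs satisfying the first satisfies the second:

\[
!\varphi \models\; !\psi \quad\text{iff}\quad \forall s\,\bigl(s \Vdash \varphi \;\Rightarrow\; s \Vdash \psi\bigr)
\]

On this criterion \(!(p \wedge q) \models\; !q\), validating 'Hump that bale' from 'Tote that barge and hump that bale'; but equally \(!q \models\; !(p \vee q)\), so 'Shut the door or shut the window' would follow from 'Shut the window', an inference that sounds wrong (the difficulty known as Ross's paradox). This is why turning imperative logic into a variation of ordinary deductive logic remains contentious.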

Despite the fact that the morality of people and their ethics often amount to the same thing, there is a usage that restricts morality to systems such as that of Kant, based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of 'moral' considerations from other practical considerations. The scholarly issues are complicated, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.

Cartesian doubt is the method of investigating how much knowledge has its basis in reason or experience, as used by Descartes in the first two Meditations. It attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process culminates in the famous 'Cogito ergo sum': 'I think, therefore I am'. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counterattacks on behalf of social and public starting-points. The metaphysics associated with this priority is Cartesian dualism, or the separation of mind and matter into two different but interacting substances. Descartes rigorously, and rightly, became aware that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a 'clear and distinct perception' of highly dubious proofs of the existence of a benevolent deity. This has not met with general acceptance: as Hume dryly puts it, 'to have recourse to the veracity of the Supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit'.

In the same spirit, Descartes' notorious denial that non-human animals are conscious is a stark illustration of where the dualism leads. In his conception of matter Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but an entirely geometrical one, with extension and motion as its only physical nature.

Although the structure of Descartes' epistemology, theory of mind, and theory of matter has been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

The term instinct (Lat. instinctus, impulse or urge) implies innately determined behaviour, inflexible to change in circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of animal behaviour was used in defence of this position as early as Avicenna. Continuity between animal and human reason was proposed by Hume, and followed up by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense it may be instinctive in human beings to be social; and, given what we now know about the evolution of human language abilities, it seems clear that our real or actualized self is not imprisoned in our minds.

The self is implicitly a part of the larger whole of biological life; the human observer owes its existence to embedded relations within this whole, and constructs its reality with evolved mechanisms that exist in all human brains. This suggests that any sense of the 'otherness' of self and world is an illusion, one that disguises the self's actualization within its relations to the parts of that whole. The self, in its relation to the temporality of the whole, is a biological reality. A proper definition of this whole must, of course, include the evolution of the larger indivisible whole: the unbroken chain of evolution of all life, reaching back to the first self-replicating molecule that is the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality, from which self-regulating properties emerge, properties owed to the whole which in turn sustain the existence of the parts.

Some of these developments can be described in ordinary language, though they rest on complex systems of physical reality and metaphysical concern. In the history of mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed. The first scientific revolution of the seventeenth century allowed scientists to frame the classical paradigm in physical reality, and its result, the stark Cartesian division between mind and world, came to be one of the most characteristic features of Western thought. What follows, however, is not another strident and ill-mannered diatribe against our misunderstandings, but an account drawn from the principle of undivided wholeness in physical reality and from the epistemological foundations of physical theory.

The subjectivity of our mind affects our perceptions of the world that is held to be objective by natural science. On this view, both mind and matter are individualized forms that belong to the same underlying reality.

Our everyday experience confirms the apparent fact that there is a dual-valued world of subject and objects. We, as conscious, experiencing beings with personality, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which is opposed to us as subject. Physical objects are only part of the object-world. There are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Language objectifies our experience. Experience per se is purely sensational and makes no distinction between object and subject. Only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.

Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject, and, in the act of self-reflection, as object. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind. Our experience is already conceptualized at the time it comes into our consciousness. Conceptualized experience is negative insofar as it destroys the original pure experience; in a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects. Objects are reified negative experience. The same holds for the objective aspect of this theory: in objectifying myself, I do not dispense with the subject; the subject is causally and apodictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies something; it is only the subject who can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of a dualism in which object and subject are really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.

Cartesian dualism posits the subject and the object as separate, independent, and real substances, both of which have their ground and origin in the highest substance, God. Cartesian dualism, however, contradicts itself: the very move by which Descartes posits 'me', the subject, as the only certainty defies materialism, and thus the concept of some 'res extensa'. The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object; the object is only derived, while the subject is original. This makes the object not only inferior in its substantive quality and in its essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a 'res extensa', and this means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Quite apart from the problem of interaction between these two different substances, then, Cartesian dualism is not eligible for explaining and understanding the subject-object relation.

By denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism, or positivism, the problem is not resolved either. What the positivists did was merely verbalize the subject-object relation in linguistic forms: it was no longer a metaphysical problem, but only a linguistic one, since our language has formed this object-subject dualism. Such thinking is superficial, because it does not see that in the very act of analysis one inevitably thinks in the mindset of subject and object. By relativizing object and subject in terms of language and analytical philosophy, these thinkers avoid the elusive and problematic aporia of subject and object, which has been a fundamental question in philosophy ever since. Eluding these metaphysical questions is no solution. Excluding something by reducing it to a supposedly more actual and verifiable material world is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of humanity.

Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives; every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining our empirical facts. Mysticism, on the other hand, believes that there is an original unity of subject and object. To attain this unity is the goal of religion and mysticism: man has fallen from this unity by disgrace and by sinful behaviour, and the task of man is now to get back on track and strive toward this highest fulfilment. Yet are we not, on the conclusion reached above, forced to admit that the mystic way of thinking is also only a pattern of the mind, and that mystics, like the scientists, have their own frame of reference and methodology with which to explain the supra-sensible facts most successfully?

If we assume mind to be the originator of the subject-object dualism, then we cannot confer more reality on the physical than on the mental aspect, nor can we deny the one in terms of the other.

The unrefined language of the first users of symbols must have been largely gestural, with non-syntactic vocalizations. Spoken language probably became independent only gradually, as a closed cooperative system. Only after hominids began to use vocal symbolic communication did vocal symbolic forms progressively take over functions served by non-vocal symbolic forms. This is reflected in modern languages: the structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.

The general idea is very powerful; however, the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The subject thus needs the idea of a perceivable, objective spatial world that causes its ideas: a world within which its perceptions change as it changes position, set against the more or less stable way the world is. On this idea there is an objective, though phenomenally given, world, and the subject locates himself within it by means of what he can perceive.

Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. And it is now clear that language processing is not accomplished by stand-alone or unitary modules that evolved with the addition of separate modules eventually wired together on some neural circuit board.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. And Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, this communication resulted directly in an increasingly complex and condensed social behaviour. Social evolution began to take precedence over physical evolution in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

Because this communication was based on symbolic vocalization, it required the evolution of neural mechanisms and processes that did not evolve in any other species. This marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete account of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. And no scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actualized experience of that thought or feeling as an emergent aspect of global brain function.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. And while neither mode of understanding the situation can displace the other, both are required to achieve a complete understanding of the situation.

Even if we include both aspects of biological reality, movement toward a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system can be viewed as another stage in the evolution of more complicated and complex systems, marked by a new, transitional complementarity between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. But it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be 'real' only when it is an 'observed' phenomenon, we are led to some interesting conclusions. The indivisible whole whose existence is inferred in the results of these experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, it lies beyond the 'event horizon' of knowledge, and science can say nothing about the actual character of this reality. If wholeness is a property of the entire universe, then we must conclude that undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or 'actualized' in making acts of observation or measurement. The reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment; the correlations between the particles, and the sum of these parts, do not constitute the 'indivisible' whole. Physical theory allows us to understand why the correlations occur. But it cannot in principle disclose or describe the actual character of the indivisible whole.

The scientific implications of this extraordinary relationship between parts (qualia) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.

All that is required to embrace this alternative view of the relationship between mind and world, one consistent with our most advanced scientific knowledge, is a commitment to metaphysical and epistemological realism. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn here should appear fairly self-evident in logical and philosophical terms. And it is also not necessary to attribute any extra-scientific properties to the whole to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. This is so given the distinction, maintained throughout, between what can be 'proven' in scientific terms and what can be reasonably 'inferred' in philosophical terms based on the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet the evaluation of the benefits and risks associated with the use of these technologies, much less of their potential impact on human needs and values, has lagged behind, and has often been reactionary. The two enterprises, scientific and humanistic, exert a degree of influence on one another, drawn together naturally or involuntarily like iron filings and a magnet; but the pressing lack of something essential, a shared understanding, seriously hampers progress. Facing this reality squarely, with careful attention to the relevant details, is a precondition of any stable balance between them; without it, whatever equilibrium obtains is quickly reduced to a weakened state of affairs.

Scientists and humanists, as individuals or as groups, tend to stand on only one side of this two-culture divide. Perhaps what is more important, many of the potential threats to the human future - such as environmental pollution, arms development, overpopulation, spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature situated in the range of non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent here is to supply what is most important about that background; those who do not wish to struggle with it should feel free to ignore it. But this material will be no more challenging than it must be, and the hope is that those who engage it will find a common ground for understanding and will meet again on that common ground, in an effort to close the circle, resolve the equations of eternity, and gain the unification that holds all therein.

Human motivation has been a major topic of philosophical inquiry, especially in Aristotle, and subsequently since the 17th and 18th centuries, when the 'science of man' began to probe into human motivation and emotion. For writers such as the French moralistes, or Hutcheson, Hume, Smith, and Kant, a prime task was to delineate the variety of human reactions and motivations, beginning from nothing more fundamental than such firsthand basics as the liking for or enjoyment of something because of the pleasure it gives. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and among other tendencies, such as empathy, sympathy, or self-interest. The task continues especially in the light of a post-Darwinian understanding of ourselves.

In some moral systems, notably that of Immanuel Kant, 'real' moral worth comes only with acting rightly because it is right. If you do what is right from some other motive, such as fear or prudence, no moral merit accrues to you. Yet that in turn seems to discount other admirable motivations, such as acting from sheer benevolence or sympathy. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. One opposing view rejects ethics that relies on highly general and abstract principles, particularly those associated with the Kantian categorical imperatives. It may go so far as to say that no consideration points in any particular direction taken on its own; an understanding of a particular case can only proceed by identifying its salient features, those that carry intellectual weight on one side or another.

Moral dilemmas have been set out with intense concern, inasmuch as these philosophical matters bear on any defence of common sense. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, making the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what he or she did was as right as any alternative. It is important to the phenomenology of these cases that action leaves a residue of guilt and remorse, even though it was not the subject's fault that he or she faced the dilemma, so that the rationality of these emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas; however, dilemmas also exist, such as where a mother must decide which of two children to sacrifice, in which no principles are pitted against each other. If we accept that dilemmas are real and important, this fact can be used to argue against theories, such as 'utilitarianism', that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that creates them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.

The status of moral laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason. Situational ethics and virtue ethics regard them as at best rules-of-thumb, frequently disguising the great complexity of practical reasoning; against such views stands the Kantian notion of the moral law.

In this connection, natural law names the view of the status of law and morality especially associated with St. Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine was eventually to provide the main philosophical underpinning of the Catholic church. More widely, it covers any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is found in some Protestant writings, and is arguably derived from a Platonic view of ethics and the law implicit in Stoicism. Natural law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen as in and for themselves by means of 'natural usages' or by reason itself and that, additionally (in religious versions of the theory), express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements within the natural law tradition. Different views have been held about the relationship between the rule of law and God's will. Grotius, for instance, sides with the view that the content of natural law is independent of any will, including that of God.

The German natural law theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view. His great work was the De Jure Naturae et Gentium (1672), translated into English as Of the Law of Nature and Nations (1710). Pufendorf was influenced by Descartes, Hobbes, and the scientific revolution of the 17th century; his ambition was to introduce a newly scientific, 'mathematical' treatment of ethics and law, free from the tainted Aristotelian underpinning of 'scholasticism'. Like that of his contemporary Locke, however, his conception of natural law includes rational and religious principles, making it something less than the purely secular account that was to come, and a forerunner rather than an instance of the resolutely empirical and political treatment of law that became the conventional methodology of the Enlightenment.

The classic dilemma here is explored in Plato's dialogue Euthyphro: are pious things pious because the gods love them, or do the gods love them because they are pious? The dilemma poses the question of whether value can be conceived as the upshot of the choice of any mind, even a divine one. On the first option the choice of the gods creates goodness and value. Even if this is intelligible, it seems to make it impossible to praise the gods, for it is then vacuously true that they choose the good. On the second option we have to understand a source of value lying behind or beyond the will even of the gods, and by which they can be evaluated. The elegant solution of Aquinas is that the standard is formed by God's nature, and is therefore distinct from his will, but not distinct from him.

The dilemma arises whatever the source of authority is supposed to be. Do we care about the good because it is good, or do we just call 'good' those things that we care about? It also generalizes to affect our understanding of the authority of other things, mathematics, or necessary truth, for example: are truths necessary because we deem them to be so, or do we deem them to be so because they are necessary?

The natural law tradition may assume either a stronger form, in which it is claimed that various facts entail values, or a weaker form, in which it is claimed that reason by itself is capable of discerning moral requirements. As in the ethics of Kant, these requirements are supposed to be binding on all human beings, regardless of their desires.

The supposed natural or innate ability of the mind to know the first principles of ethics and moral reasoning is termed 'synderesis' (or synteresis). Although traced to Aristotle, the phrase came to the modern era through St. Jerome, whose scintilla conscientiae (spark of conscience) was a popular concept in early scholasticism. It is mainly associated with Aquinas, for whom it is an infallible, natural, simple and immediate grasp of first moral principles. Conscience, by contrast, is more concerned with particular instances of right and wrong, and can be in error.

Natural law, understood as the view of the status of law and morality associated with Aquinas and the subsequent scholastic tradition, has affinities with conservatism: enthusiasm for reform for its own sake, or for 'rational' schemes thought up by managers and theorists, is on this view entirely misplaced. Major exponents of this theme include the British absolute idealist Francis Herbert Bradley (1846-1924) and the Austrian economist and philosopher Friedrich Hayek. Bradley's idealism included the doctrine that change is inevitably contradictory and consequently unreal: the Absolute is changeless. A way of sympathizing a little with this idea is to reflect that any scientific explanation of change will proceed by finding an unchanging law operating, or an unchanging quantity conserved in the change, so that explanation of change always proceeds by finding that which is unchanged. The metaphysical problem of change is to shake off the idea that each moment is created afresh, and to obtain a conception of events or processes as having a genuinely historical reality, really extended and unfolding in time, as opposed to being composites of discrete temporal atoms. A step toward this end may be to see time itself not as an infinite container within which discrete events are located, but as a kind of logical construction from the flux of events. This relational view of time was advocated by Leibniz, and was a subject of the debate between him and Newton's absolutist pupil, Clarke.

Generally, nature is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species (it is the nature of gold to be dense, or of dogs to be friendly) and to the natural world as a whole. The sense in which it applies to species quickly links up with ethical and aesthetic ideals: a thing ought to realize its nature; what is natural is what it is good for a thing to become; it is natural for humans to be healthy or two-legged, and departure from this is a misfortune or deformity. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle's philosophy of nature. Unfortunately, the pinnacle of nature in this sense is the mature adult male citizen, with the rest of what we would call the natural world, including women, slaves, children and other species, not quite making it.

Nature in general can, however, function as a foil to any ideal as much as a source of ideals: in this sense fallen nature is contrasted with a supposed celestial realization of the 'forms'. The theory of 'forms' is probably the most characteristic, and most contested, of the doctrines of Plato. In the background lie the Pythagorean conception of form as the key to physical nature, and the sceptical doctrine associated with the Greek philosopher Cratylus, who is sometimes thought to have been a teacher of Plato before Socrates. Cratylus is famous for capping the doctrine of Heraclitus of Ephesus. The guiding idea of Heraclitus' philosophy was that of the logos, capable of being heard or hearkened to by people; it unifies opposites, and it is somehow associated with fire, which is pre-eminent among the four elements that Heraclitus distinguishes: fire, air (breath, the stuff of which souls are composed), earth, and water. He is principally remembered for the doctrine of the 'flux' of all things, and the famous statement that you cannot step into the same river twice, for new waters are ever flowing in upon you. The more extreme implications of the doctrine of flux, e.g., the impossibility of categorizing things truly, do not seem consistent with his general epistemology and views of meaning, and were left to his follower Cratylus to draw, for whom the proper conclusion was that the flux cannot be captured in words. According to Aristotle, Cratylus eventually held that since regarding that which everywhere in every respect is changing nothing can truly be affirmed, the proper course is just to stay silent and wag one's finger. Plato's theory of forms can be seen in part as a reaction against the impasse to which Cratylus was driven.

The Galilean world view might have been expected to drain nature of its ethical content, but the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include normal (universal) topics treated with simplicity, economy, regularity and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness, and fertile diversity, but is also associated with the progress of human history and with transformation, including that of ordinary human self-consciousness. The 'unnatural', by contrast, may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar; (2) the supernatural, or the world of gods and invisible agencies; (3) the world of rationality and intelligence, conceived of as distinct from the biological and physical order; (4) that which is manufactured and artifactual, or the product of human intervention; and (5), related to that, the world of convention and artifice.

Different conceptions of nature continue to have ethical overtones: for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, and the idea that it is women's nature to be one thing or another is taken to be a justification for differential social expectations. The term then functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the 'masculine' self-image, itself a social variable and potentially distorting the picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In this latter area particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, and to the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. However, to more radical feminists such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such asymmetrical powers and rights.

Biological determinism is the view that our biology not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest, the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.

The philosophy of social science is more heavily intertwined with actual social science than in the case of other subjects such as physics or mathematics, since its question is centrally whether there can be such a thing as sociology. The idea of a 'science of man', devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people's own ideas of what should happen, and like fashions those ideas change in unpredictable ways, as self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not at all a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.

The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and criteria for assessing the various genetic stories that might provide such explanations.

Among the features that are proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people's characteristics, e.g., at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the features explained sociobiologically may be indexed to environment: for instance, what is explained may be a propensity to develop some feature in some environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanation from speculative 'just so' stories, which may or may not identify real selective mechanisms.

Subsequently, in the 19th century attempts were made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which promoted an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating the natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T. H. Huxley said that Spencer's definition of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the 'hurdy-gurdy' monotony of his system.

The premise of such reasoning is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Evolutionary psychology is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoires, and our moral reactions, including the disposition to detect and punish those who cheat on an agreement or who free-ride on the work of others, our cognitive structure, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.

For all that, an essential part of the ethics of the British absolute idealist F. H. Bradley (1846-1924) was a rejection of the self-sufficiency of the individual: the self realizes itself only through community and through contribution to social and other ideals. For Bradley, truth as formulated in language is always partial, and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley's general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher G. W. F. Hegel (1770-1831).

Bradley's case also reflects a preference, voiced much earlier by the German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716), for categorical, monadic properties over relations. He was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. In Friedrich Schelling (1775-1854), nature becomes a creative spirit whose aspiration is an ever fuller and more complete self-realization. Romanticism drew on the same intellectual and emotional resources as German idealism, which culminated in the philosophy of Hegel and of absolute idealism.

Naturalism is, most generally, a sympathy with the view that ultimately nothing resists explanation by the methods characteristic of the natural sciences. A naturalist will be opposed, for example, to mind-body dualism, since it leaves the mental side of things outside the explanatory grasp of biology or physics; opposed to the acceptance of numbers or concepts as real but non-physical denizens of the world; and opposed to accepting 'real' moral duties and rights as absolute and self-standing facets of the natural order. Human nature became a major topic of philosophical inquiry, especially in Aristotle, and again since the 17th and 18th centuries, when the 'science of man' began to probe into human motivation and emotion. For writers such as the French moralistes, or for Francis Hutcheson (1694-1746), David Hume (1711-76), Adam Smith (1723-90) and Immanuel Kant (1724-1804), a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves. Realism, as applied to the judgements of ethics and to the values, obligations, rights, etc. that are referred to in ethical theory, takes as its leading idea that moral truth is grounded in the nature of things rather than in subjective and variable human reactions to things. Like realism in other areas, this is capable of many different formulations. Generally speaking, moral realism aspires to protect the objectivity of ethical judgement (opposing relativism and subjectivism); it may assimilate moral truths to those of mathematics, hope that they have some divine sanction, or see them as guaranteed by human nature.

The central problem for naturalism is to define what counts as a satisfactory accommodation between the preferred sciences and the elements that on the face of it have no place in them. Alternatives include 'instrumentalism', 'reductionism' and 'eliminativism', as well as a variety of other anti-realist suggestions. The standard opposition between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs, can arise in any area of discourse: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. The term 'naturalism' is sometimes used for specific versions of these approaches, in particular in ethics as the doctrine that moral predicates express the same thing as predicates from some natural or empirical science. This suggestion is probably untenable, but as other accommodations between ethics and the view of human beings as just parts of nature recommend themselves, they gain the title of naturalistic approaches to ethics.

Most of ethics concerns problems of human desires and needs: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species, or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: it is in their independence of human purposes that their value consists. They remind us of our proper place, and failure to appreciate this value is not only an aesthetic failure but a failure of due humility and reverence, a moral disability. The problem is one of expressing this value, and of mobilizing it against utilitarian arguments for developing natural areas and exterminating species, more or less at will.

Many concerns and disputes cluster around the idea of 'substance'. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is; this will ensure that the substance of a thing is that which remains through change in its properties, and in Aristotle this essence becomes more than just the matter, a unity of matter and form; (2) that which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties, a substance then being the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relations, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tended to disappear in empiricist thought, the sensible qualities of things giving way to an empirical notion of their regular co-occurrence. But this in turn is problematic, since it only makes sense to talk of the co-occurrence of instances of qualities, not of qualities themselves, and the problem of what it is for a quality to have an instance remains.

Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.

The sublime is a concept deeply embedded in 18th-century aesthetics, but descending from the 1st-century rhetorical treatise On the Sublime, attributed to Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759, 'When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness, and strikes it with deep silent wonder and admiration: it finds such a difficulty in spreading itself to the dimensions of its object, as enlivens and invigorates its frame: and having overcome the opposition which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates; and from the sense of this immensity, feels a noble pride, and entertains a lofty conception of its own capacity.'

In Kant's aesthetic theory the sublime 'raises the soul above the height of vulgar complacency'. We experience the vast spectacles of nature as 'absolutely great' and of irresistible force and power. This perception is fearful, but by conquering this fear, and by regarding as small 'those things of which we are wont to be solicitous', we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom, as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.

Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack for the British philosophers G. E. Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of 'essentialism', stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would be apt to describe as my not wearing the hat now, we are strictly speaking not imagining me and the hat, but only some different individuals.

The doctrine bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716) that if a person had any other attributes than the ones he has, he would not have been the same person. Leibniz thought that when asked what would have happened if Peter had not denied Christ, we are really asking what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name 'Peter' might be understood 'what is involved in those attributes [of Peter] from which the denial does not follow', and this enables us to allow external relations, relations which individuals could have or lack depending upon contingent circumstances. The terminology of 'relations of ideas' is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: 'All the objects of human reason or enquiry may naturally be divided into two kinds: relations of ideas and matters of fact' (Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known independently of experience must be internal to the mind, and hence transparent to us.

In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called 'Hume's fork', is a version of the distinction between demonstrative and probable reasoning, and reflects the deductivist ideal of the 17th and early 18th centuries, on which knowledge is founded on chains of intuitively certain comparisons of ideas. It is extremely important that in the period between Descartes and J. S. Mill a demonstration is not merely a valid argument, but a chain of 'intuitive' comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration; Hume denies that they are, and also denies that scientific enquiry proceeds by demonstrating its results.

A mathematical proof is an argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.

The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the Greek mathematician and philosopher Pythagoras (5th century BC), which states that in a right-angled triangle the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers, but an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of one is the irrational number √2.
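
The step from commonly held opinion to proof can be illustrated by reconstructing that argument in modern notation (the reconstruction is the standard reductio, not spelled out in the text itself). Suppose, for contradiction, that the diagonal were a ratio of whole numbers:

\[ \sqrt{2} = \frac{p}{q}, \quad p, q \text{ whole numbers sharing no common factor.} \]

\[ 2q^2 = p^2 \;\Rightarrow\; p \text{ is even, say } p = 2k \;\Rightarrow\; 2q^2 = 4k^2 \;\Rightarrow\; q^2 = 2k^2 \;\Rightarrow\; q \text{ is even.} \]

Then p and q share the factor 2, contradicting the assumption; hence no such ratio exists, and the diagonal is irrational.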

The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.

In the 20th century, proofs have been written that are so complex that no one person can understand every argument used in them. In 1976, a computer was used to complete the proof of the four-colour theorem. This theorem states that four colours are sufficient to colour any map in such a way that regions with a common boundary line have different colours. The use of a computer in this proof inspired considerable debate in the mathematical community: at issue was whether a theorem can be considered proved if human beings have not actually checked every detail of the proof.

Proof theory is the study of the relations of deducibility among sentences in a logical calculus, where deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel's second incompleteness theorem.

Whereas proof theory studies deducibility between formulae of a system, once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to sentences that are false under that same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (a formula 'B' is a semantic consequence of a set of formulae, written {A1, . . ., An} ⊨ B, if it is true in all interpretations in which they are true). Then the central questions for a calculus will be whether all and only its theorems are valid, and whether {A1, . . ., An} ⊨ B if and only if {A1, . . ., An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only 'tautologies'. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that the first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus.
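
Writing ⊢ for deducibility (a purely syntactic notion) and ⊨ for semantic consequence, the two conditions just described take their standard compact form:

\[ \text{Soundness:} \quad \{A_1, \ldots, A_n\} \vdash B \;\Rightarrow\; \{A_1, \ldots, A_n\} \models B \]

\[ \text{Completeness:} \quad \{A_1, \ldots, A_n\} \models B \;\Rightarrow\; \{A_1, \ldots, A_n\} \vdash B \]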

Euclidean geometry is the greatest example of the pure 'axiomatic method', and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth axiom of Euclid's system (the axiom of parallels, often paraphrased as the claim that two parallel lines never meet) could be denied without inconsistency, leading to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made a geometrical application possible for some major abstractions of tensor analysis, providing the patterns and concepts later used by Albert Einstein in developing his general theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid's Elements is attributed to the mathematician Eudoxus, and contains a precise development of the real numbers, work which remained unappreciated until rediscovered in the 19th century.

The axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regress in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.

The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.

The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behaviour. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.

In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analysed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.

Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles', where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complex factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
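
The zero-sum idea can be made concrete with a short computation. The following sketch in Python (the payoff matrix and variable names are invented for illustration; nothing here is drawn from von Neumann and Morgenstern) finds each player's security level in a two-strategy game:

    # Payoffs to the row player in a hypothetical zero-sum game;
    # the column player receives the negation of each entry.
    payoffs = [
        [3, -1],   # row strategy 0 against column strategies 0 and 1
        [0,  2],   # row strategy 1 against column strategies 0 and 1
    ]

    # Row player: assume the column player makes the most damaging reply,
    # then choose the row whose worst case is best (the "maximin" value).
    row_maximin = max(min(row) for row in payoffs)

    # Column player: symmetrically, minimize the row player's best reply.
    col_minimax = min(max(row[c] for row in payoffs) for c in range(2))

    print(row_maximin, col_minimax)   # prints: 0 2

When the two values coincide the game has a saddle point and optimal play needs no randomization; when they differ, as in this example, von Neumann's minimax theorem guarantees that mixed (randomized) strategies close the gap.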

In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting a term denoting a subclass of the class denoted by the original. For example, in 'all dogs bark' the term 'dogs' is distributed, since it entails 'all terriers bark', which is obtained from it by such a substitution. In 'not all dogs bark', the same term is not distributed, since that proposition may be true while 'not all terriers bark' is false.
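
In quantificational notation the entailment that makes 'dogs' distributed can be displayed as follows (the predicate letters D, T and B, for 'dog', 'terrier' and 'barks', are supplied for illustration):

\[ \forall x (Dx \rightarrow Bx),\; \forall x (Tx \rightarrow Dx) \;\vdash\; \forall x (Tx \rightarrow Bx) \]

whereas \( \neg\forall x (Dx \rightarrow Bx) \) does not entail \( \neg\forall x (Tx \rightarrow Bx) \), which is why the term is undistributed in the negated proposition.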

A model is a representation of one system by another, usually more familiar, system, whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful 'heuristic' role in science, there has been intense debate over whether a good model, or an organized structure of laws from which it can be deduced, suffices for scientific explanation. This debate was inaugurated by the French physicist Pierre Duhem (1861-1916), in The Aim and Structure of Physical Theory (English translation 1954). Duhem's conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but that does not represent the deep underlying nature of reality. His related thesis is that no hypothesis can be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system; although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.

Primary and secondary qualities are a division associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns. The latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material: a minimal list includes size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought of these powers as identified with the texture of objects which, according to the corpuscularian science of the time, was the basis of an object's causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses; but in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96). Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.

The 'modality' of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity, and those true as things are: necessary as opposed to contingent propositions. Other qualifiers sometimes called 'modal' include the tense indicators, 'it will be the case that p' or 'it was the case that p'; and there are affinities between the 'deontic' indicators, 'it ought to be the case that p' or 'it is permissible that p', and necessity and possibility.
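
In the standard symbolism, which the passage above paraphrases in words, these qualifiers appear as sentential operators:

\[ \Box p \;(\text{necessarily } p), \qquad \Diamond p \;(\text{possibly } p), \qquad \Diamond p \equiv \neg\Box\neg p \]

\[ \mathrm{F}p \;(\text{it will be the case that } p), \qquad \mathrm{P}p \;(\text{it was the case that } p), \qquad \mathrm{O}p \;(\text{it ought to be that } p) \]

with 'it is permissible that p' definable as \( \neg\mathrm{O}\neg p \), mirroring the duality of possibility and necessity.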

The aim of logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of the answer is that if we do not we contradict ourselves, or strictly speaking, we stand ready to contradict ourselves: someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something, yet he or she is not defended against adding the contradictory conclusion to his or her set of beliefs. There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find reasoning such that anyone failing to conform to it will have improbable beliefs. Traditional logic dominated the subject until the 19th century. Contemporary philosophy of mind, following cognitive science, uses the term 'representation' to mean just about anything that can be semantically evaluated: representations may be said to be true, to refer, to be about something, to be accurate, and so forth. Representations come in many varieties. The most familiar are pictures, three-dimensional models (e.g., statues, scale models), linguistic text (including mathematical formulas) and various hybrids of these such as diagrams, maps, graphs and tables. It is an open question in cognitive science whether mental representation falls within any of these familiar sorts.

It is uncontroversial in contemporary cognitive science that cognitive processes are processes that manipulate representations. This idea seems nearly inevitable: what makes the difference between processes that are cognitive (solving a problem, for example) and those that are not (a patellar reflex, for example) is just that cognitive processes are epistemically assessable. A solution procedure can be justified or correct; a reflex cannot. Since only things with content can be epistemically assessed, processes appear to count as cognitive only in so far as they implicate representations.

It is tempting to think that thoughts are the mind's representations: are not thoughts just those mental states that have semantic content? This is, no doubt, harmless enough, provided we keep in mind that the cognitive aspect of the meaning of a sentence is its content, or what is strictly said, abstracted away from the tone or emotive meaning, or other implicatures generated, for example, by the choice of words. The cognitive aspect is what has to be understood to know what would make the sentence true or false: it is frequently identified with the 'truth condition' of the sentence. The truth condition of a statement is the condition the world must meet if the statement is to be true; to know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

On the view that the role of sentences in inference gives a more important key to their meaning than their 'external' relations to things in the world, the meaning of a sentence becomes its place in a network of inferences that it legitimates. This position is also known as functional role semantics, procedural semantics, or conceptual role semantics. The view bears some relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.

Moreover, internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by 'external' factors, crossing the atomist-holistic distinction with the internalist-externalist distinction.

Externalist theories, sometimes called non-individualistic theories, have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is equivalent in internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts, i.e., from whatever the external factors are, to wide contents.

The epistemological tradition has mostly been internalist, with externalism emerging as a genuine option only in the twentieth century. The best way to clarify this distinction is by way of another: that between knowledge and justification. Knowledge has traditionally been defined as justified true belief. However, certain counter-examples forced the definition to be refined: there are possible situations in which a belief is both true and justified, yet intuitively we would not call it knowledge. The extra element of undefeatedness attempts to rule out such counter-examples. The relevant issue, at this point, is that on all accounts knowledge entails truth: one cannot know something false. Justification, on the other hand, is an account of the reasons one has for a belief; one may be justified in holding a false belief, since justification is understood from the subject's point of view and does not entail truth.

Internalism is the position that the reason one has for a belief, its justification, must be in some sense available to the knowing subject: if one has a belief, and the reason why it is acceptable to hold that belief is not knowable to the person in question, then there is no justification. Externalism holds that it is possible for a person to have a justified belief without having access to the reason for it. The internalist requirement seems too stringent to the externalist, who can explain such cases by, for example, appealing to the use of a process that reliably produces truths: one can use perception to acquire beliefs, and the very use of such a reliable method helps ensure that the belief is true. Some externalists have produced accounts of knowledge with relativistic aspects to them. Alvin Goldman offers such an account in his Epistemology and Cognition (1986). Accounts of this kind use the notion of a system of rules for the justification of belief: these rules provide a framework within which it can be established whether a belief is justified or not. The rules are not to be understood as consciously guiding the believer's thought processes, but rather can be applied from without to give an objective judgement as to whether the beliefs are justified or not. The framework establishes what counts as justification, and a criterion establishes the framework: genuinely epistemic terms like 'justification' occur in the context of the framework, while the criterion attempts to set up the framework without using epistemic terms, using purely factual or descriptive terms.

In any event, a standard psycholinguistic theory, for instance, hypothesizes the construction of representations of the syntactic structures of the utterances one hears and understands, yet we are not aware of, and non-specialists do not even understand, the structures represented. Thus, first, cognitive science may attribute thoughts where common sense would not; second, cognitive science may find it useful to individuate thoughts in ways foreign to common sense.

The representational theory of cognition gives rise to a natural theory of intentional states, such as believing, desiring and intending. According to this theory, an intentional state has two aspects: a 'functional' aspect that distinguishes believing from desiring and so on, and a 'content' aspect that distinguishes beliefs from each other, desires from each other, and so on. A belief that 'p' might be realized as a representation with the content that 'p' and the function of serving as a premise in inference; a desire that 'p' might be realized as a representation with the content that 'p' and the function of initiating processing designed to bring it about that 'p', and of terminating such processing when a belief that 'p' is formed.

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be a representation (have content), and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional roles, and (4) teleology.

Similarity theories hold that 'r' represents 'x' in virtue of being similar to 'x'. This has seemed hopeless to most as a theory of mental representation because it appears to require that things in the brain must share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalistic and does not involve property sharing can be worked out, but it is not obvious how.

Covariance theories hold that r's representing 'x' is grounded in the fact that r's occurrence covaries with that of 'x'. This is most compelling when one thinks about detection systems: a firing neural structure in the visual system is said to represent vertical orientations if its firing covaries with the occurrence of vertical lines in the visual field. Dretske (1981) and Fodor (1987) have, in different ways, attempted to promote this idea into a general theory of content.

'Content' has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content; a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. 'Content' is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation's content is just whatever it is that underwrites its semantic evaluation.

Likewise, functional role theories hold that r's representing 'x' is grounded in the functional role 'r' has in the representing system, i.e., on the relations imposed by specified cognitive processes between 'r' and other representations in the system's repertoire. Functional role theories take their cue from such common sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

What is more, theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. (The same terminology also has an epistemological use, where the most generally accepted account of the distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective, and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. Epistemologists often use the distinction without offering any very explicit explication of it.)

Atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What Fodor (1987) calls the crude causal theory, for example, takes a representation to be a COW - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow; and this is a condition that places no explicit constraint on how COWs must or might relate to other representations.

The syllogism, or categorical syllogism, is the inference of one proposition from two premises. An example is: 'all horses have tails, and things with tails are four-legged, so all horses are four-legged'. Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So in the example, the first premise is the minor premise, the second the major premise, and 'having a tail' is the middle term. This enables syllogisms to be classified according to the mood, that is, the form of the premises and the conclusion, and according to figure, that is, the way in which the middle term is placed in the premises.
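
Set out in modern notation (a sketch; the predicate letters are illustrative choices of mine, not the text's), the example instantiates the mood Barbara of the first figure:

\[
\forall x\,(Hx \rightarrow Tx), \quad \forall x\,(Tx \rightarrow Fx) \;\vdash\; \forall x\,(Hx \rightarrow Fx)
\]

where 'Hx' reads 'x is a horse', 'Tx' reads 'x has a tail', and 'Fx' reads 'x is four-legged'; the middle term T occurs in both premises and drops out of the conclusion.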

Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a relatively small number of valid forms of argument. There have subsequently been rearguard actions attempting to extend it, but in general it has been eclipsed by the modern theory of quantification: the predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes '=' as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that x = y if (∀F)(Fx ↔ Fy), which gives the higher-order calculus greater expressive power.
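
Displayed in full (a standard second-order rendering of the definition just mentioned, often associated with Leibniz's law; the layout is mine):

\[
x = y \;\leftrightarrow\; \forall F\,(Fx \leftrightarrow Fy)
\]

read: x is identical with y just in case x and y satisfy exactly the same predicates.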

Modal logic was of great importance historically, particularly in the light of various doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher C.I. Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His independent proofs showing that from a contradiction anything follows led him to develop systems of strict implication, using a notion of entailment stronger than that of material implication.

Various doctrines concerning necessity and possibility are represented by adding to a propositional or predicate calculus two operators, □ and ◇ (sometimes written 'N' and 'M'), meaning 'necessarily' and 'possibly', respectively. Uncontroversially, □p implies p, and p implies ◇p. More controversial additions include □p → □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◇p → □◇p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). Classical modal realism is the doctrine, advocated by David Lewis (1941-2002), that different possible worlds are to be thought of as existing exactly as this one does: thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowns, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the notion fails to fit either with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them; but Lewis denied that any other way of interpreting modal statements is tenable.
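
Set out schematically (the axiom labels follow common usage rather than anything in the text):

\[
\begin{aligned}
\text{(T)}\quad & \Box p \rightarrow p, \qquad p \rightarrow \Diamond p\\
\text{(4)}\quad & \Box p \rightarrow \Box\Box p \quad \text{(characteristic of S4)}\\
\text{(5)}\quad & \Diamond p \rightarrow \Box\Diamond p \quad \text{(characteristic of S5)}
\end{aligned}
\]

with \(\Diamond p\) definable as \(\neg\Box\neg p\).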

Saul Kripke (1940- ), the American logician and philosopher, contributed to the classical modern treatment of the topic of reference by clarifying the distinction between names and definite descriptions, and by opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.

One of the three branches into which 'semiotic' is usually divided, semantics is the study of the meaning of words, and of the relation of signs to the things to which they apply. In formal studies, a semantics is provided for a formal language when an interpretation or 'model' is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is that the basic task is to provide a truth definition for the language, which will involve giving a full account of the different kinds of expression and their effects on the truth conditions of sentences containing them.

Holding that the basic case of reference is the relation between a name and the person or object which it names, the philosophical problems include trying to elucidate that relation, and to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between me and the word 'I', are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke's Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that defines the term's contribution to the truth conditions of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth conditions to sentences. Other approaches search for more substantive relations, of causality or of psychological or social constitution, between words and things.

However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, from the purely logical paradoxes in which no such notions are involved, such as Russell's paradox or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is tempting to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence 'All English sentences should have a verb' includes itself happily in the domain of sentences it is talking about); so the difficulty lies in forming a condition that isolates only the pathological cases of self-reference. Paradoxes of the second kind then need a different treatment. While the distinction is convenient in allowing set theory to proceed by circumventing the latter paradoxes by technical means, even where there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There remains the possibility that, while there is no agreed solution to the semantic paradoxes, our understanding of Russell's paradox may be imperfect as well.

Truth and falsity are the two classical truth-values that a statement, proposition or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains, the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. So may presupposition: a presupposition is a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if 'p' presupposes 'q', 'q' must be true for 'p' to be either true or false. In the theory of knowledge, the English philosopher and historian R.G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of 'absolute presuppositions' which are not themselves properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition means that either another truth value must be found, 'intermediate' between truth and falsity, or classical logic is preserved, but it becomes impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth or falsity without knowing more than the formation rules of the language. Each suggestion has its advocates, but there is some consensus that, at least where definite descriptions are involved, the data are best explained by regarding the overall sentence as false when the existence claim fails, and by explaining the data that the English philosopher P.F. Strawson (1919-2006) relied upon as the effects of 'implicature'.
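
A minimal sketch of the 'intermediate value' option (a standard three-valued layout, offered here only for illustration): where 'p' presupposes 'q',

\[
\begin{array}{c|c}
q & p\\ \hline
\text{true} & \text{true or false}\\
\text{false} & \text{neither (the intermediate value)}
\end{array}
\]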

Views about the meaning of terms will often depend on classifying the implicatures of sayings involving those terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures, and the more subtle category of conventional implicatures. A term may, as a matter of convention, carry an implicature. Thus one of the relations between 'he is poor and honest' and 'he is poor but honest' is that they have the same content (are true in just the same conditions), but the second has an implicature (that the combination is surprising or significant) that the first lacks.

In classical logic, nonetheless, a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter, the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of some other variable. Logics with intermediate values are called 'many-valued logics'.

Nevertheless, a definition of the predicate '. . . is true' for a language must satisfy Convention T, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83). Tarski's method of 'recursive' definition enables us to say for each sentence what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a 'metalanguage'; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth predicate. While this enables him to avoid the contradictions of the paradoxes, it conflicts with the idea that a language should be able to say everything that there is to say, and other approaches have become increasingly important.
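
In schematic form (the standard presentation of Tarski's condition; the example instance is mine): the definition must entail every instance of the T-schema

\[
\text{'}S\text{' is true in } L \;\leftrightarrow\; p
\]

where 'S' is replaced by a structural description of a sentence of the object language L, and 'p' by its translation into the metalanguage; for instance, 'Schnee ist weiss' is true in German if and only if snow is white.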

So the truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

Taken as a view, inferential semantics holds that the role of a sentence in inference gives a more important key to its meaning than its 'external' relation to things in the world: the meaning of a sentence becomes its place in a network of inferences that it legitimates. Also known as functional role semantics or procedural semantics, it is a cousin of the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.

Moreover, the semantic theory of truth is the view that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth: there is no further philosophical chapter to write about truth itself, or about truth as shared across different languages. The view is similar to the disquotational theory.

The redundancy theory, also known as the 'deflationary view of truth', was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell's paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. Ramsey is also remembered for the Ramsey sentence: by taking all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', and replacing the term by a variable, instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical items with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician M.H.A. Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
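
Schematically (a standard rendering, with illustrative symbols of my own): if the theory is written \(T(\tau_1,\ldots,\tau_n)\), where \(\tau_1,\ldots,\tau_n\) are its theoretical terms, its Ramsey sentence is

\[
\exists x_1 \cdots \exists x_n\, T(x_1,\ldots,x_n)
\]

which retains the theory's structure while withdrawing any claim to know what the theoretical terms denote.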

For their part, both Frege and Ramsey agree that the essential claim is that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence, redundancy), and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from a true proposition. For example, the second may translate as '(∀p, q)((p & (p → q)) → q)', where there is no use of a notion of truth.

There are technical problems in interpreting all uses of the notion of truth in such ways; nevertheless, they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Post-modern writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth; but perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when 'not-p'.

In its simplest formulation, the disquotational theory is the claim that an expression of the form ''S' is true' means the same as the corresponding expression 'S'. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters: it makes no difference whether people say ''Dogs bark' is true' or whether they say 'dogs bark'. In the former the sentence 'Dogs bark' is mentioned, but in the latter it appears to be used; so the claim that the two are equivalent needs careful formulation and defence. On the face of it, someone might know that ''Dogs bark' is true' without knowing what it means (for instance, if he found it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the 'redundancy theory of truth'.

Entailment is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Several philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others, sufficiently impressed by the paradoxes of strict implication, look for a stronger relation, which would distinguish between valid and invalid arguments even within the sphere of necessary propositions. The search for such a stronger notion is the field of relevance logic.

From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is a purely empirical enterprise.

But this point of view by no means embraces the whole of the actual process, for it overlooks the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a 'theory'. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the 'truth' of the theory lies.

Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution rather than in providing a convincing mechanism for genetic change; and Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution in the life sciences.

The 19th century saw the attempt to base ethical reasoning on the presumed facts about evolution, a movement particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). Its premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society. The relation between evolution and ethics has since been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Once again, psychological theorizing seeks to establish its points by appropriately objective means, its evidence drawn from the realm of evolutionary principles, on which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who 'free-ride' on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself and by William James, as well as by the sociobiology of E.O. Wilson. Such terms are applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.

Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin's view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, cooperation appears to exist in complementary relation to competition. From such complementary relationships emerge self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.

According to E.O. Wilson, the 'human mind evolved to believe in the gods' and people 'need a sacred narrative' to have a sense of higher purpose. Yet it is also clear that the 'gods' in his view are merely human constructs, and therefore that there is no basis for dialogue between the world-view of science and that of religion. 'Science for its part', said Wilson, 'will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiment.' The eventual result of the competition between the two world-views, he believes, will be the secularization of the human epic and of religion itself.

Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect 'reality'. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing 'reality' as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide 'comprehensible' guides to living in this way. Man's imagination and intellect play vital roles in his survival and evolution.

Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of 'logical positivist' approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the 'explanans' (that which does the explaining) and the explanandum (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that the laws of planetary motion of Johannes Kepler (or Keppler, 1571-1630) were deducible from Newton's laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we seem to explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capture the requirements we make of explanations. These may include, for instance, that we have a 'feel' for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on; and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.

The argument to the best explanation is the view that once we can select the best of any competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
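
A sketch of the arithmetic behind the qualification (my own illustrative calculation, assuming a binomial model): comparing the likelihood of 530 heads in 1,000 tosses under the bias hypothesis (p = 0.53) and under fairness (p = 0.5),

\[
\frac{0.53^{530} \times 0.47^{470}}{0.5^{1000}} \approx e^{1.8} \approx 6
\]

so the 'best' explanation is only about six times better supported by the data, a margin easily outweighed by the antecedent improbability of a coin's being biased to exactly 0.53.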

The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotic into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language, and it mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century was informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, which is the basis of the division between syntax and semantics, as well as problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.

On this conception, to understand a sentence is to know its truth-conditions; and the conception has remained so central that those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not and ought not to be advanced as in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.

The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
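
A minimal sketch of such clauses (standard textbook examples, not drawn from this text):

(i) 'London' refers to London;
(ii) an object o satisfies 'is beautiful' if and only if o is beautiful;
(iii) a sentence of the form 'A and B' is true if and only if 'A' is true and 'B' is true.

From axioms of this kind a truth theory derives, for each sentence, a statement of its truth conditions, e.g., ''London is beautiful' is true if and only if London is beautiful'.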

The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom: 'London' refers to the city in which there was a huge fire in 1666, is a true statement about the reference of 'London'. It is a consequence of a theory which substitutes this axiom for the simpler axiom of our truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a psychological subject can understand the name 'London' without knowing that last-mentioned truth condition, the replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state this constraint in a way which does not presuppose any previous, non-truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom.

Since the content of a claim that the sentence 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A.J. Ayer, the later Wittgenstein, Quine, Strawson, Horwich and - confusingly and inconsistently if this article is correct - Frege himself. But is the minimal theory correct?

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact it seems that each instance of the equivalence principle can itself be explained. The fact that 'London is beautiful' is true if and only if London is beautiful can be explained by the facts that 'London' refers to London and that 'is beautiful' is true of beautiful things. This would be a pseudo-explanation if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'.

The counterfactual conditional, sometimes known as the subjunctive conditional, is a conditional of the form 'if p were to happen, q would', or 'if p were to have happened, q would have happened', where the supposition of 'p' is contrary to the known fact that 'not-p'. Such assertions are nevertheless useful: 'if the bone had been broken, the X-ray would have looked different' or 'if the reactor were to fail, this mechanism would click in' are important truths, even when we know that the bone is not broken, or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals ('if the metal were to be heated, it would expand'), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever 'p' is false, so there would be no division between true and false counterfactuals.
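
For reference, the truth table of the material conditional (a standard display, included to illustrate the point just made):

\[
\begin{array}{cc|c}
p & q & p \rightarrow q\\ \hline
\text{T} & \text{T} & \text{T}\\
\text{T} & \text{F} & \text{F}\\
\text{F} & \text{T} & \text{T}\\
\text{F} & \text{F} & \text{T}
\end{array}
\]

Since a counterfactual's antecedent 'p' is false, the last two rows show that, read materially, every counterfactual would come out true.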

Although the subjunctive form indicates the counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form, or a simple conditional form: 'If you run out of water, you will be in trouble' seems equivalent to 'if you were to run out of water, you would be in trouble', in other contexts there is a big difference: 'If Oswald did not kill Kennedy, someone else did' is clearly true, whereas 'if Oswald had not killed Kennedy, someone would have' is most probably false.

The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether 'q' is true in the 'most similar' possible worlds to ours in which 'p' is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some prior notion of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactual or not is of limited use.
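
As a sketch of the Lewis-style truth condition (the 'would'-conditional symbol and the formulation are a common reconstruction, not the text's own):

\[
p \,\Box\!\!\rightarrow\, q \ \text{ is true at a world } w \ \text{ iff either there are no } p\text{-worlds, or some } (p \wedge q)\text{-world is closer to } w \text{ than any } (p \wedge \neg q)\text{-world.}
\]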

In any conditional, a proposition of the form 'if p then q', the condition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is the material implication, merely telling us that either 'not-p' or 'q'; stronger conditionals include elements of modality, corresponding to the thought that if 'p' is true then 'q' must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether this flexibility should be explained semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.

There are many forms of reliabilism, just as there are many forms of foundationalism and coherentism. How is reliabilism related to these other two theories of justification? We usually regard it as a rival, and this is apt insofar as foundationalism and coherentism traditionally focussed on purely evidential relations rather than psychological processes; but we might also offer reliabilism as a deeper-level theory, subsuming some precepts of either foundationalism or coherentism. Foundationalism says that there are 'basic' beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that the basic beliefs are formed by reliable non-inferential processes. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Consequently, reliabilism could complement foundationalism and coherentism rather than compete with them.

These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman's claim about local reliability and knowledge, it will not be simple. The interesting thesis that counts as a causal theory of justification, in the sense of 'causal theory' intended here, is that a belief is justified in case it was produced by a type of process that is 'globally' reliable, that is, whose propensity to produce true beliefs - definable, to an acceptable approximation, as the proportion of the beliefs it produces (or would produce, were it used as much as opportunity allows) that are true - is sufficiently high. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1903-30). In the theory of probability, Ramsey was the first to show how a 'personalist' theory could be developed, based on a precise behavioural notion of preference and expectation. In the philosophy of mathematics, much of his work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace' of Brouwer and Weyl. In the philosophy of language, Ramsey was one of the first thinkers to adopt a redundancy theory of truth, which he combined with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different, specific function in our intellectual economy. Ramsey was also one of the earliest commentators on the early work of Wittgenstein, and their continuing friendship led to Wittgenstein's return to Cambridge and to philosophy in 1929.

The Ramsey sentence of a theory is the sentence generated by taking all the sentences affirmed in the theory that use some term, e.g., 'quark', replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, but removes any implication that we know what the terms so treated denote; it leaves open the possibility of identifying the theoretical items with whatever it is that best fits the description provided. Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, trying to capture additional conditions for knowledge by way of a nomic, counterfactual or similar 'external' relation between belief and truth, closely allied to the nomic sufficiency account of knowledge. The core of this approach is that X's belief that 'p' qualifies as knowledge just in case X believes 'p' because of reasons that would not obtain unless 'p' were true, or because of a process or method that would not yield belief in 'p' if 'p' were not true. For example, X would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it does, unless there were a telephone before it; thus there is a counterfactually reliable guarantor of the belief's being true. A relevant-alternatives version of the counterfactual approach says that X knows that 'p' only if there is no 'relevant alternative' situation in which 'p' is false but X would still believe that 'p': one's justification or evidence for 'p' must be sufficient to eliminate every alternative to 'p', where an alternative to a proposition 'p' is a proposition incompatible with 'p'. That is, one's justification or evidence for 'p' must be sufficient for one to know that every alternative to 'p' is false. Sceptical arguments have exploited this element of our thinking about knowledge: they call our attention to alternatives that our evidence does not eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this nature that we cannot eliminate, and others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that the requirement is seldom, if ever, satisfied.

The distinction between the 'in itself' and the 'for itself' originated in the Kantian logical and epistemological distinction between a thing as it is in itself, and that thing as an appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing in so far as it stands in relation to our cognitive faculties and other objects. 'Now a thing in itself cannot be known through mere relations: and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself'. Kant applies this same distinction to the subject's cognition of itself. Since the subject can know itself only in so far as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus as it is related to its own self, it represents itself 'as it appears to itself, not as it is'. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant in so far as the distinction between what an object is in itself and what it is for a knower is applied to the subject's own knowledge of itself.

Hegel (1770-1831) begins the transition of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is, as it is in itself, necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of the potentiality of that thing to enter into specific explicit relations with itself. And, just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself, i.e., to be explicitly self-conscious, the for-itself of any entity is that entity in so far as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant which involves actual relations among the plant's various organs is the plant 'for itself'. In Hegel, then, the in-itself/for-itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being in itself of the plant, or the plant as potential adult, is ontologically distinct from the being for itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction. To know a thing, it is necessary to know both the actual explicit self-relations which mark the thing (the being for itself of the thing), and the inherent simple principle of these relations, or the being in itself of the thing. Real knowledge, for Hegel, thus consists in knowledge of the thing as it is in and for itself.

Sartre's distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent being which is intended by consciousness, i.e., being in itself. What it is for consciousness to be, being for itself, is marked by self-relation: Sartre posits a 'pre-reflective cogito', such that every consciousness of x necessarily involves a 'non-positional' consciousness of the consciousness of x. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself in so far as it is related to itself by appearing to itself, and in Hegel every entity can be considered as both 'in itself' and 'for itself', in Sartre, to be self-related or for itself is the distinctive ontological mark of consciousness, while to lack relations or to be in itself is the distinctive ontological mark of non-conscious entities.

This conclusion conflicts with another strand in our thinking about knowledge, namely that we know many things. Thus, there is a tension in our ordinary thinking about knowledge: we believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept.

If one finds absoluteness to be too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979 and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. We can view the theory of relevant alternatives as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

This approach to the theory of knowledge sees an important connection between the growth of knowledge and biological evolution: an evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At some time in the past, a mutation occurred in a human population in tropical Africa that changed the hemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.

When proximate and evolutionary explanations are carefully distinguished, many questions in biology make more sense. A proximate explanation describes a trait - its anatomy, physiology, and biochemistry - as well as its development from the genetic instructions provided by a bit of DNA in the fertilized egg to the adult individual. An evolutionary explanation is about why DNA specifies that trait in the first place, and why we have DNA that encodes for one kind of structure and not some other. Proximate and evolutionary explanations are not alternatives; both are needed to understand every trait. A proximate explanation for the external ear would describe its arteries and nerves, and how it develops from the embryo to the adult form. Even if we know this, however, we still need an evolutionary explanation of how its structure gives creatures with ears an advantage over those that lack it, and of why selection shaped the ear to its current form. To take another example, a proximate explanation of taste buds describes their structure and chemistry, how they detect salt, sweet, sour, and bitter, and how they transform this information into impulses that travel via neurons to the brain. An evolutionary explanation of taste buds shows why they detect saltiness, acidity, sweetness and bitterness instead of other chemical characteristics, and how the capacities to detect these characteristics helped our ancestors cope with life.

Chance can influence the outcome at each stage: first, in the creation of genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, if the process could be replayed over again, the outcome would surely be different: not only might there not be humans, there might not even be anything like mammals.

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analyzed carefully. Whether evolution achieves perfection depends on exactly what you mean. If you mean 'Does natural selection always take the best path for the long-term welfare of a species?', the answer is no: that would require adaptation by group selection, and this is unlikely. If you mean 'Does natural selection create every adaptation that would be valuable?', the answer again is no. For instance, some kinds of South American monkeys can grasp branches with their tails; the trick would surely also be useful to some African species, but, simply because of bad luck, none has it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not mean that it will evolve.

This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. The three major components of the model of natural selection are variation, selection and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions; rather, those variations that happen to perform useful functions are selected, while those that do not are not. Nevertheless, such selection is responsible for the appearance that specific variations were intentionally built to perform certain functions. In the modern theory of evolution, genetic mutations provide the blind variations ('blind' in the sense that variations are not influenced by the effects they would have: the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism), the environment provides the filter of selection, and reproduction provides the retention. Fitness is achieved because organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have better-adapted features. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes in general.

The parallel between biological evolution and conceptual or 'epistemic' evolution can be taken to be either literal or analogical. The literal version of evolutionary epistemology - called the 'evolution of cognitive mechanisms program' by Bradie (1986) and the 'Darwinian approach to epistemology' by Ruse (1986) - holds that the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology.

Innate ideas have been variously defined by philosophers either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense). Understood in either way, they were invoked to account for our recognition of certain truths, such as those of mathematics, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include 'murder is wrong' or 'God exists'.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. Insofar as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our innate idea of God, for example, is taken as a source for the meaning of the word 'God'. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky's influential account of the mind's linguistic capacities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize that some propositions are certainly true, where that recognition cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. In Plato, the recollection of knowledge, possibly obtained in a previous state of existence, is most famously broached in the dialogue Meno, and the doctrine is one attempt to account for the 'innate' or unlearned character of knowledge of first principles. Since there was no plausible post-natal source, the recollection must be of a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there are important truths innate in human beings, and that it is sense experience which hinders their proper apprehension.

The doctrine was important in Christian philosophy throughout the Middle Ages and in scholastic teaching until its displacement by Locke's philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, Descartes held, is logically independent of sense experience. In England the Cambridge Platonists, such as Henry More and Ralph Cudworth, added considerable support.

Locke's rejection of innate ideas and his alternative empiricist account were powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend the doctrine with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant's refinement of the classification of propositions, with the fourfold distinction of analytic/synthetic and a priori/a posteriori, did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as resting on a confusion between explaining the genesis of ideas or concepts and justifying our regarding some propositions as necessarily true.

Chomsky's revival of the term in connection with his account of language acquisition has once more made the issue topical. He claims that the principles of language and 'natural logic' are known unconsciously and are a precondition for language acquisition. But for his purposes innate ideas must be taken in a strongly dispositional sense - so strong that it is far from clear that Chomsky's claims are in as direct a conflict with empiricist accounts of learning as some (including Chomsky) have supposed. Willard Van Orman Quine (1908-2000), for example, sees no conflict between such claims and his own version of empiricist behaviourism.

Locke's account of analytic propositions was everything that a succinct account of analyticity should be (Locke, 1924). He distinguishes two kinds of analytic propositions: identity propositions, in which 'we affirm the said term of itself', e.g., 'Roses are roses', and predicative propositions, in which 'a part of the complex idea is predicated of the name of the whole', e.g., 'Roses are flowers'. Locke calls such sentences 'trifling' because a speaker who uses them 'trifles with words'. A synthetic sentence, in contrast, such as a mathematical theorem, states a real truth and conveys instructive real knowledge. Correspondingly, Locke distinguishes two kinds of 'necessary consequences': analytic entailment, where validity depends on the literal containment of the conclusion in the premise, and synthetic entailment, where it does not. John Locke (1632-1704) did not originate this concept-containment notion of analyticity. It is discussed by Arnauld and Nicole, and it is safe to say that it has been around for a very long time.

The analogical version of evolutionary epistemology, by contrast - called the 'evolution of theories program' by Bradie (1986) and the 'Spencerian approach' (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986) - holds that the growth of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) and Karl Popper, sees the partial fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.

Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. By contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism were the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come from psychology and cognitive science rather than from evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that 'if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom', i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one must proceed to something that is not already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic. Campbell is right that evolutionary epistemology has the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature.

Two other issues lie close to the surface of this literature: questions about 'realism' (what metaphysical commitment does an evolutionary epistemologist have to make?) and about progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called 'hypothetical realism', a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Some have argued that evolutionary epistemologists must give up the 'truth-tropic' sense of progress, because a natural-selection model is non-teleological in essence; alternatively, a non-teleological sense of progress, following Kuhn (1970), might be embraced along with evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind. Stein and Lipton have argued, however, that this objection fails: while epistemic variation is not random, its constraints come from heuristics that are themselves the product of blind variation and selective retention. Further, Stein and Lipton argue that such heuristics are analogous to biological pre-adaptations - evolutionary precursors, such as a half-wing, a precursor to a wing - which have some function other than the function of their descendant structures. The heuristics that constrain epistemic variation are, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions, saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, and those which are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. An appeal to biological blindness alone is therefore not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind.

Although it is a relatively new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades many epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, Armstrong (1973) proposed that a belief of the form 'This [perceived] object is F' is [non-inferential] knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject 'x' and any perceived object 'y', if 'x' has those properties and believes that 'y' is F, then 'y' is F. Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.
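Armstrong's reliable-sign condition can be stated compactly. The rendering below is my own gloss in standard logical notation, not Armstrong's own symbolism: 'H' stands for the relevant properties of the believer, 'B_x(Fy)' for 'x believes that y is F', and the boxed conditional is to hold as a matter of natural law.

    % A gloss on Armstrong's (1973) condition for non-inferential perceptual
    % knowledge (a hypothetical formalization, not Armstrong's notation):
    \[
      K_S(Fa) \;\iff\; \exists H \Big[ H(S) \;\wedge\;
      \Box_{\mathrm{law}}\, \forall x\, \forall y\,
      \big( H(x) \wedge B_x(Fy) \rightarrow Fy \big) \Big]
    \]

Read: S's belief that a is F counts as non-inferential knowledge just in case S has properties H such that, as a matter of nomic necessity, any believer with H who believes of a perceived object that it is F is right.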

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. The underlying view - that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth - has been advanced in variations for both knowledge and justified belief. The first formulation of a reliability account of knowing is credited to F. P. Ramsey (1903-30). Much of Ramsey's work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace of Brouwer and Weyl'; in the theory of probability he was the first to develop an account based on precise behavioural notions of preference and expectation; and in the philosophy of language he was one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of propositions - neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different, specific function in our intellectual economy. Ramsey said that a belief was knowledge if it is true, certain, and obtained by a reliable process. P. Unger (1968) suggested that 'S' knows that 'p' just in case it is not at all accidental that 'S' is right about its being the case that 'p'. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth: a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature.

Reliabilism is standardly classified as an 'externalist' theory because it invokes some truth-linked factor, and truth is 'external' to the believer. The main argument for externalism derives from the philosophy of language - more specifically, from the various phenomena pertaining to natural-kind terms, indexicals, etc., that motivate the views which have come to be known as 'direct reference' theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person depends on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. - and not just on what is going on internally in his mind or brain (Putnam, 1975; Burge, 1979). Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by means of a nomic, counterfactual, or other such 'external' relation between belief and truth.

The most influential counterexamples to reliabilism are the demon-world and the clairvoyance examples. The demon-world example challenges the necessity of the reliability requirement: in a possible world in which an evil demon creates deceptive visual experiences, the process of vision is not reliable. Still, the visually formed beliefs in this world are intuitively justified. The clairvoyance example challenges the sufficiency of reliability. Suppose a cognitive agent possesses a reliable clairvoyance power but has no evidence for or against his possessing such a power. Intuitively, his clairvoyantly formed beliefs are unjustified, but reliabilism declares them justified.

Another form of reliabilism, 'normal-worlds' reliabilism, answers the range problem differently and treats the demon-world problem accordingly, defining a 'normal world' as one that is consistent with our general beliefs about the actual world. Normal-worlds reliabilism says that a belief, in any possible world, is justified just in case its generating processes have high truth ratios in normal worlds. This resolves the demon-world problem because the relevant truth ratio of the visual process is not its truth ratio in the demon world itself, but its ratio in normal worlds. Since this ratio is presumably high, visually formed beliefs in the demon world turn out to be justified.
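The notion of a truth ratio is easy to make concrete. The sketch below is a toy illustration with invented numbers (the function name truth_ratio and the tallies are my own, not drawn from the reliabilist literature): a process is scored by the proportion of true beliefs it outputs, and normal-worlds reliabilism indexes justification to the ratio in normal worlds rather than in the believer's own world.

    def truth_ratio(outputs):
        # Proportion of a process's belief-outputs that are true.
        return sum(outputs) / len(outputs)

    # Invented tallies: True marks a true belief, False a false one.
    vision_in_normal_worlds = [True] * 95 + [False] * 5   # reliable where worlds are normal
    vision_in_demon_world = [False] * 100                 # systematically deceived

    print(truth_ratio(vision_in_normal_worlds))  # 0.95 -> beliefs count as justified
    print(truth_ratio(vision_in_demon_world))    # 0.0  -> irrelevant on this view

On this view the demon victim's visual beliefs are justified because the first ratio, not the second, is the one that matters.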

Yet another version of reliabilism attempts to meet the demon-world and clairvoyance problems without recourse to the questionable notion of 'normal worlds'. Consider Sosa's (1992) suggestion that justified belief is belief acquired through 'intellectual virtues' and not through intellectual 'vices', where virtues are reliable cognitive faculties or processes. The task is then to explain how epistemic evaluators use the notion of intellectual virtues and vices to arrive at their judgments, especially in the problematic cases. Goldman (1992) proposes a two-stage reconstruction of an evaluator's activity. The first stage is a reliability-based acquisition of a 'list' of virtues and vices. The second stage is the application of this list to queried cases: the evaluator determines whether the processes in those cases resemble the virtues or the vices. Visual beliefs in the demon world are classified as justified because visual belief formation is one of the virtues. Clairvoyantly formed beliefs are classified as unjustified because clairvoyance resembles scientifically suspect processes that the evaluator represents as vices, e.g., mental telepathy, ESP, and so forth.

Pragmatism is a philosophy of meaning and truth especially associated with the American philosopher of science and of language C. S. Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including, for example, belief in God, are true if they work satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, though working is not a simple matter for James. The apparent subjectivist consequences of this were wildly assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F. C. S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart', or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others; the disturbing implication is that it is this satisfaction that makes it true that other persons have minds.

Modern pragmatists, such as the American philosopher and critic Richard Rorty (1931-) and, in some of his writings, the philosopher Hilary Putnam (1926-), have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant's doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.

In the philosophy of mind, functionalism is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion; it would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states, both in ourselves and in others, namely via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is not too parochial, able to see mental similarities only when there is causal similarity; our actual practices of interpretation enable us to ascribe thoughts and desires to creatures very unlike ourselves, so beliefs and desires may be 'variably realized' in different causal architectures, just as they may be realized in different neurophysiological states.
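The software analogy can be made vivid with a toy example. The sketch below is entirely my own illustration, not any functionalist's formal model (the class names NeuronRealizer and SiliconRealizer and the methods stimulate and behave are invented): one functional role - a state caused by 'dry' input and causing 'drink' output - is realized in two different 'hardwares', showing how the functional description stays silent about realization.

    # Toy illustration of multiple realizability: the same functional role
    # (caused by 'dry' input, causes 'drink' output) in two different 'hardwares'.

    class NeuronRealizer:
        def __init__(self):
            self.state = "idle"            # 'hardware' detail: a string
        def stimulate(self, sensory_input):
            if sensory_input == "dry":
                self.state = "thirst"
        def behave(self):
            return "drink" if self.state == "thirst" else "rest"

    class SiliconRealizer:
        def __init__(self):
            self.register = 0              # 'hardware' detail: an integer flag
        def stimulate(self, sensory_input):
            if sensory_input == "dry":
                self.register = 1
        def behave(self):
            return "drink" if self.register == 1 else "rest"

    # Functionally identical despite different realizations:
    for creature in (NeuronRealizer(), SiliconRealizer()):
        creature.stimulate("dry")
        print(creature.behave())           # both print 'drink'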

The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Some Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by the American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism's refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists' denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. Many philosophers known as logical positivists, a group influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce's doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey's philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey's writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Renewed interest in the classic pragmatists - Peirce, James, and Dewey - has offered an alternative to Rorty's interpretation of the tradition.

One of the earliest versions of a correspondence theory was put forward in the 4th century BC by the Greek philosopher Plato, who sought to understand the meaning of knowledge and how it is acquired. Plato wished to distinguish between true belief and false belief. He proposed a theory based on intuitive recognition that true statements correspond to the facts - that is, agree with reality - while false statements do not. In Plato's example, the sentence "Theaetetus flies" can be true only if the world contains the fact that Theaetetus flies. However, Plato - and much later, the 20th-century British philosopher Bertrand Russell - recognized this theory as unsatisfactory because it did not allow for false belief. Both Plato and Russell reasoned that if a belief is false because there is no fact to which it corresponds, it would then be a belief about nothing and so not a belief at all. Each then speculated that the grammar of a sentence could offer a way around this problem. A sentence can be about something (the person Theaetetus), yet false (flying is not true of Theaetetus). But how, they asked, are the parts of a sentence related to reality?

One suggestion, proposed by 20th-century philosopher Ludwig Wittgenstein, is that the parts of a sentence relate to the objects they describe in much the same way that the parts of a picture relate to the objects pictured. Once again, however, false sentences pose a problem: If a false sentence pictures nothing, there can be no meaning in the sentence.

In the late 19th century the American philosopher Charles S. Peirce offered another answer to the question "What is truth?" He asserted that truth is that which experts will agree upon when their investigations are final. Many pragmatists such as Peirce claim that the truth of our ideas must be tested through practice. Some pragmatists have gone so far as to question the usefulness of the idea of truth, arguing that in evaluating our beliefs we should rather pay attention to the consequences that our beliefs may have. However, critics of the pragmatic theory are concerned that we would have no knowledge because we do not know which set of beliefs will ultimately be agreed upon; nor are there sets of beliefs that are useful in every context.

A third theory of truth, the coherence theory, also concerns the meaning of knowledge. Coherence theorists have claimed that a set of beliefs is true if the beliefs are comprehensive - that is, they cover everything - and do not contradict each other.
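The non-contradiction half of the coherence criterion can be illustrated mechanically. The toy sketch below is my own (the function coherent and the leading-'~' convention for negation are invented for illustration): it checks only the simplest kind of incoherence, a proposition both affirmed and denied.

    def coherent(beliefs):
        # Toy coherence check: no proposition may be both affirmed and denied.
        # Negation is marked with a leading '~' (an invented convention).
        affirmed = {b for b in beliefs if not b.startswith('~')}
        denied = {b[1:] for b in beliefs if b.startswith('~')}
        return affirmed.isdisjoint(denied)

    print(coherent({'snow is white', 'grass is green'}))   # True
    print(coherent({'snow is white', '~snow is white'}))   # False

Comprehensiveness, the other half of the criterion, of course resists any such simple mechanical test.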

Other philosophers dismiss the question "What is truth?" with the observation that attaching the claim 'it is true that' to a sentence adds no meaning. However, these theorists, who have proposed what are known as deflationary theories of truth, do not dismiss all talk about truth as useless. They agree that there are contexts in which a sentence such as 'it is true that the book is blue' can have a different impact than the shorter statement 'the book is blue'. What is more important, use of the word 'true' is essential when making a general claim about everything, nothing, or something, as in the statement 'most of what he says is true'.

Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato's expression of ideas in the form of dialogues - the dialectical method, used most famously by his teacher Socrates - has led to difficulties in interpreting some of the finer points of his thought. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as 'time is unreal', analyses that aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitutes what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements 'John is good' and 'John is tall' have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property 'goodness' as if it were a characteristic of John in the same way that the property 'tallness' is a characteristic of John. Such failure results in philosophical confusion.

Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.

Russell's work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; translated 1922), in which he first presented his theory of language, Wittgenstein argued that 'all philosophy is a critique of language' and that 'philosophy aims at the logical clarification of thoughts'. The results of Wittgenstein's analysis resembled Russell's logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts - the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition 'two plus two equals four'. The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually meaningless. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer's Language, Truth and Logic in 1936.

The positivists' verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953, translated 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein's influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate 'systematically misleading expressions' in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.

Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analysing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyse ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.

Existentialism is a loose title for various philosophies that emphasize certain common themes: the individual, the experience of choice, and the absence of rational understanding of the universe, with a consequent dread or sense of the absurdity of human life. It is a philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries.

Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone: insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existentialist, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, 'I must find a truth that is true for me . . . the idea for which I can live or die'. Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.

Most existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their anti-rationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.

Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do; each human being makes choices that create his or her own nature. In the formulation of the 20th-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.

Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.

Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many pre-modern philosophers and writers.

The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: the human self, which combines mind and body, is itself a paradox and contradiction.

Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a 'leap of faith' into a Christian way of life which, although incomprehensible and full of risk, was the only commitment that, he believed, could save the individual from despair.

Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as Hegel, and focused instead on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; translated 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche's theory of the Übermensch, a term translated as "Superman" or "Overman." The Superman was an individual who overcame what Nietzsche termed the 'slave morality' of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that 'God is dead', or that traditional morality was no longer relevant in people's lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the "death of God" and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of being (Heidegger's term for that which underlies all existence).

Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis - in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology as well as on language.

Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre's work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that 'man is condemned to be free', Sartre reminds us of the responsibility that accompanies human decisions.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one and thus human life is a 'futile passion'. Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially the conviction that a personal sense of authenticity and commitment is essential to religious faith.

Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels which probed the motivations and moral justifications for his characters' actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky's best work, interlaces religious exploration with the story of a family's violent quarrels over a woman and a disputed inheritance.

A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), "We must love life more than the meaning of it."

The opening lines of Russian novelist Fyodor Dostoyevsky's Notes from Underground (1864) - 'I am a sick man . . . I am a spiteful man' - are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground marks Dostoyevsky's rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader's sense of morality as well as the foundations of rational thinking. At the beginning of the novel, the narrator describes himself, derisively referring to himself as an 'overly conscious' intellectual.

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; translated 1937) and The Castle (1926; translated 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer and John Barth.

The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. It is a founding problem of epistemology, the branch of philosophy that addresses the philosophical problems surrounding the theory of knowledge. Epistemology is concerned with the definition of knowledge and related concepts, the sources and criteria of knowledge, the kinds of knowledge possible and the degree to which each is certain, and the exact relation between the one who knows and the object known.

Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as the Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy; the complexities of his concepts of substance and accident have been examined by, among others, Anthony Kenny.

In the 5th century BC, the Greek Sophists questioned the possibility of reliable and objective knowledge. Thus, a leading Sophist, Gorgias, argued that nothing really exists, that if anything did exist it could not be known, and that if knowledge were possible, it could not be communicated. Another prominent Sophist, Protagoras, maintained that no person's opinions can be said to be more correct than another's, because each is the sole judge of his or her own experience. Plato, following his illustrious teacher Socrates, tried to answer the Sophists by postulating the existence of a world of unchanging and invisible forms, or ideas, about which it is possible to have exact and certain knowledge. The things one sees and touches, he maintained, are imperfect copies of the pure forms studied in mathematics and philosophy. Accordingly, only the abstract reasoning of these disciplines yields genuine knowledge, whereas reliance on sense perception produces vague and inconsistent opinions. He concluded that philosophical contemplation of the unseen world of forms is the highest goal of human life.

Aristotle followed Plato in regarding abstract knowledge as superior to any other, but disagreed with him as to the proper method of achieving it. Aristotle maintained that almost all knowledge is derived from experience. Knowledge is gained either directly, by abstracting the defining traits of a species, or indirectly, by deducing new facts from those already known, in accordance with the rules of logic. Careful observation and strict adherence to the rules of logic, which were first set down in systematic form by Aristotle, would help guard against the pitfalls the Sophists had exposed. The Stoic and Epicurean schools agreed with Aristotle that knowledge originates in sense perception, but against both Aristotle and Plato they maintained that philosophy is to be valued as a practical guide to life, rather than as an end in itself.

After many centuries of declining interest in rational and scientific knowledge, the Scholastic philosopher Saint Thomas Aquinas and other philosophers of the Middle Ages helped to restore confidence in reason and experience, blending rational methods with faith into a unified system of beliefs. Aquinas followed Aristotle in regarding perception as the starting point and logic as the intellectual procedure for arriving at reliable knowledge of nature, but he considered faith in scriptural authority as the main source of religious belief.

From the 17th to the late 19th century, the main issue in epistemology was reasoning versus sense perception in acquiring knowledge. For the rationalists, of whom the French philosopher René Descartes, the Dutch philosopher Baruch Spinoza, and the German philosopher Gottfried Wilhelm Leibniz were the leaders, the main source and final test of knowledge was deductive reasoning based on self-evident principles, or axioms. For the empiricists, beginning with the English philosophers Francis Bacon and John Locke, the main source and final test of knowledge was sense perception.

Bacon inaugurated the new era of modern science by criticizing the medieval reliance on tradition and authority and also by setting down new rules of scientific method, including the first set of rules of inductive logic ever formulated. Locke attacked the rationalist belief that the principles of knowledge are intuitively self-evident, arguing that all knowledge is derived from experience, either from experience of the external world, which stamps sensations on the mind, or from internal experience, in which the mind reflects on its own activities. Human knowledge of external physical objects, he claimed, is always subject to the errors of the senses, and he concluded that one cannot have absolutely certain knowledge of the physical world.

Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything a human being conceives of exists as an idea in a mind, a philosophical position known as idealism. Berkeley reasoned that because one cannot control one's thoughts, they must come directly from a larger mind: that of God. In his Treatise Concerning the Principles of Human Knowledge, written in 1710, Berkeley explained why he believed that it is 'impossible . . . that there should be any such thing as an outward object'.

The Irish philosopher George Berkeley acknowledged, along with Locke, that knowledge comes through ideas, but he denied Locke's belief that a distinction can be made between ideas and objects. The British philosopher David Hume continued the empiricist tradition, but he did not accept Berkeley's conclusion that knowledge consists of ideas only. He divided all knowledge into two kinds: knowledge of relations of ideas - that is, the knowledge found in mathematics and logic, which is exact and certain but provides no information about the world - and knowledge of matters of fact - that is, the knowledge derived from sense perception. Hume argued that most knowledge of matters of fact depends upon cause and effect, and since no logical connection exists between any given cause and its effect, one cannot hope to know any future matter of fact with certainty. Thus, even the most reliable laws of science might not remain true - a conclusion that had a revolutionary impact on philosophy.

The German philosopher Immanuel Kant tried to solve the crisis precipitated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside of thought. He distinguished three kinds of knowledge: analytic a priori, which is exact and certain but uninformative, because it makes clear only what is contained in definitions; synthetic a posteriori, which conveys information about the world learned from experience, but is subject to the errors of the senses; and synthetic a priori, which is discovered by pure intuition and is both exact and certain, for it expresses the necessary conditions that the mind imposes on all objects of experience. Mathematics and philosophy, according to Kant, provide this last. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not such a thing as synthetic a priori knowledge really exists.

During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge that was further emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.

The American school of pragmatism, founded by the philosophers Charles Sanders Peirce, William James, and John Dewey at the turn of the 20th century, carried empiricism further by maintaining that knowledge is an instrument of action and that all beliefs should be judged by their usefulness as rules for predicting experiences.

In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known as a result of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The new realists argued that one has direct perceptions of physical objects or parts of physical objects, rather than of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colours and sounds, these stand for physical objects and provide knowledge thereof.

A method for dealing with the problem of clarifying the relation between the act of knowing and the object known was developed by the German philosopher Edmund Husserl. He outlined an elaborate procedure that he called phenomenology, by which one is said to be able to distinguish the way things appear to be from the way one thinks they really are, thus gaining a more precise understanding of the conceptual foundations of knowledge.

During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge: scientific knowledge; that any valid knowledge claim must be verifiable in experience; and hence that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, they insisted that a clear distinction be maintained between analytic and synthetic statements. The so-called verifiability criterion of meaning has undergone changes as a result of discussions among the logical empiricists themselves, as well as their critics, but it has not been discarded. More recently, the sharp distinction between the analytic and the synthetic has been attacked by a number of philosophers, chiefly the American philosopher W.V.O. Quine, whose overall approach is in the pragmatic tradition.

The second of these schools, generally referred to as linguistic analysis, or ordinary-language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the actual way key epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definitive rules for their use in order to avoid verbal confusion. The British philosopher John Langshaw Austin argued, for example, that to say a statement is true adds nothing to the statement except a promise by the speaker or writer; Austin does not consider truth a quality or property attaching to statements or utterances. The ruling thought, however, is that it is only through a correct appreciation of the role and point of this language that we can come to a better conceptual understanding of what the language is about, and avoid the oversimplifications and distortions we are apt to bring to its subject matter.

Linguistics is the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner's first language and about the language being acquired.

Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.

Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyse Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyses it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).
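
The minimal-pair test just described can be sketched in a few lines of code. This is a toy illustration under a simplifying assumption - each character of an ordinary spelling stands in for one phoneme - and the wordlist and the minimal_pairs helper are hypothetical:

    from itertools import combinations

    def minimal_pairs(words):
        # Pairs of equal-length words that differ in exactly one segment.
        pairs = []
        for w1, w2 in combinations(words, 2):
            if len(w1) == len(w2):
                diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
                if len(diffs) == 1:
                    pairs.append((w1, w2, diffs[0]))
        return pairs

    # Hypothetical wordlist gathered from native speakers.
    for w1, w2, (a, b) in minimal_pairs(["push", "bush", "pin", "bin", "tin"]):
        # Each pair is evidence that the two contrasting segments are
        # distinct phonemes, since swapping them changes meaning.
        print(w1, "~", w2, ": /" + a + "/ vs /" + b + "/")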

The linguist's next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of the morpheme and the grammatical rules of the sentence. In the sentence "She pushed the bush," the morpheme she, a pronoun, is the subject; push, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analysed. In this way they can begin to study and understand these languages.
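
A fragment of such a grammar description can be rendered as data. The toy lexicon below, pairing each morpheme of "She pushed the bush" with a part of speech and a sentence function, is a hypothetical sketch of the analysis given above:

    # Hypothetical toy lexicon: morpheme -> (part of speech, function).
    lexicon = {
        "she":  ("pronoun", "subject"),
        "push": ("transitive verb", "verb"),
        "-ed":  ("suffix", "past-tense marker"),
        "the":  ("definite article", "determiner"),
        "bush": ("noun", "object"),
    }

    for morpheme, (pos, role) in lexicon.items():
        print(f"{morpheme:5} {pos:16} -> {role}")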

Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to each other and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for "brother" resembles the Latin word frater, the Greek word phrater, and the English word brother.

Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.
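
A crude way to quantify the resemblances Jones noticed is string similarity. The sketch below is illustrative only - genuine comparative work rests on systematic sound correspondences, not raw edit distance - and the edit_distance helper and the simplified spellings are assumptions:

    def edit_distance(a, b):
        # Classic dynamic-programming (Levenshtein) edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    words_for_brother = {"Sanskrit": "bhratar", "Latin": "frater",
                         "Greek": "phrater", "English": "brother"}
    for language, word in words_for_brother.items():
        print(language, word, "-> distance to 'brother':",
              edit_distance(word, "brother"))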

Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express a past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in "go store tomorrow"). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.

Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of people.

Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
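
The formula alluded to here is that of glottochronology. On its usual presentation, if c is the proportion of culture-free words two related languages still share and r is the assumed retention rate of such words per millennium (commonly cited values fall around 0.81-0.86, depending on the word list used), the estimated time t since separation, in millennia, is

    t = \frac{\ln c}{2 \ln r}

For example, with c = 0.7 and r = 0.86, t = ln 0.7 / (2 ln 0.86) ≈ 1.2, i.e., roughly 1,200 years of separation. The constants are assumptions of the method, and the method itself remains controversial.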

By the 1960s comparativists were no longer satisfied with focussing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.

The field of linguistics both borrows from and lends its theories and methods to other disciplines, and its many subfields have expanded our understanding of languages. These overlapping interests have led to the creation of several cross-disciplinary fields.

Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as "fourth floor" can indicate the person's social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing /r/. Sometimes they even overcorrect their speech, pronouncing /r/ where those whom they wish to copy may not.

Some sociolinguists believe that analysing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or some other quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. A further goal of sociolinguistics is to understand communicative competence - what people need to know to use the appropriate language for a given social setting.

Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children's language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to read written language).

Computational linguistics involves the use of computers to compile linguistic data, analyse languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyse the relatedness and the structure of languages and to look for patterns and similarities. Computers also assist in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and in machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.
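
One of the textual-analysis tasks just mentioned, building a concordance, is easy to sketch. The keyword-in-context (KWIC) helper below and its sample sentence are hypothetical; real concordancers handle proper tokenization and far larger corpora:

    def concordance(text, keyword, width=3):
        # Print each occurrence of `keyword` with `width` words of context.
        tokens = text.lower().split()
        for i, token in enumerate(tokens):
            if token.strip(".,;!?") == keyword:
                left = " ".join(tokens[max(0, i - width):i])
                right = " ".join(tokens[i + 1:i + 1 + width])
                print(f"{left:>30} [{token}] {right}")

    sample = "She pushed the bush. The bush by the wall was a thorn bush."
    concordance(sample, "bush")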

Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.

Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyse culture. Anthropological linguists examine the relationship between a culture and its language, the ways cultures and languages have changed over time, and how different cultures and languages are related to each other. For example, the present English usage of family and given names arose in the late 13th and early 14th centuries, when the laws concerning registration, tenure, and inheritance of property were changed.

Once linguists began to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behaviour, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.

The ideas of the Swiss linguist Ferdinand de Saussure also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, voicing distinguishes between the similar sounds of /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompeii and Bombay the same way.

As linguistics developed in the 20th century, the notion became prevalent that language is more than speech - specifically, that it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behaviour shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.

The 1957 publication of "Syntactic Structures" by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and the creativity of language - the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that create (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky's theories.
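
What 'generative' means here can be illustrated with a toy set of rewrite rules. The grammar and lexicon below are hypothetical and vastly simpler than anything Chomsky proposed; the point is only that a finite rule set can produce sentences never before uttered:

    import random

    # Hypothetical rewrite rules: each symbol expands to one of its options.
    rules = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N":   [["linguist"], ["sentence"], ["grammar"]],
        "V":   [["describes"], ["generates"]],
    }

    def generate(symbol="S"):
        if symbol not in rules:        # terminal: an actual word
            return [symbol]
        expansion = random.choice(rules[symbol])
        return [word for part in expansion for word in generate(part)]

    for _ in range(3):
        print(" ".join(generate()))    # e.g. "a linguist describes the grammar"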

At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.

The scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance - the way people use language - to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework. What mental processes enable chimpanzees to make signs and communicate with one another and how do these processes differ from those of humans?

From these initial concerns came some of the great themes of twentieth-century philosophy. How exactly does language relate to thought? Are there irredeemable problems about putative private thought? These issues are captured under the general label of the 'Linguistic Turn'. The subsequent development of those early twentieth-century positions has led to a bewildering heterogeneity in philosophy in the early twenty-first century. The very nature of philosophy is itself radically disputed: analytic, continental, postmodern, critical-theoretic, feminist, and non-Western are all prefixes that give a different meaning when joined to 'philosophy'. The variety of thriving schools, the number of professional philosophers, the proliferation of publications, and the development of technology to aid research all manifest a radically different situation from that of one hundred years ago.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the status of that content as justifying further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts - that only internally accessible content can be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Except for alleged cases of self-evident truths, it is often thought that anything that is known must satisfy certain criteria or standards as well as being true. These criteria are general principles that will make a proposition evident or just make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that status in turn can be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Traditionally suggested criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, more simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or (3) whatever one is immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evident nor already criterially evident.

The upshot is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences, and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of traditional criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. And transmission criteria might not simply 'pass' evidence on linearly from a foundation of highly evident 'premisses' to 'conclusions' that are never more evident.

An argument is a group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide: deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case, if all its premisses are true, its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
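
For propositional argument forms, the method deductive logic provides can be made concrete by brute-force truth tables. The sketch below is a minimal illustration, not a general theorem prover; encoding formulas as Python functions of the atoms' truth values, and the valid helper, are hypothetical conveniences:

    from itertools import product

    def valid(premisses, conclusion, n_atoms):
        # Valid iff no assignment makes all premisses true and conclusion false.
        for values in product([True, False], repeat=n_atoms):
            if all(p(*values) for p in premisses) and not conclusion(*values):
                return False   # counterexample: premisses true, conclusion false
        return True

    # Modus ponens: from 'if p then q' and 'p', infer 'q'.
    premisses = [lambda p, q: (not p) or q, lambda p, q: p]
    conclusion = lambda p, q: q
    print(valid(premisses, conclusion, 2))   # True: the form is valid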

Finally, a proof is a collection of considerations and reasons that instill and sustain conviction that some proposed theorem - the theorem proved - is not only true, but could not possibly be false. A perceptual observation may instill the conviction that water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.
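
In a proof assistant the point can be put literally. A minimal sketch in Lean: the equation is closed by computation alone, so it could not have come out otherwise.

    -- 2 + 3 = 5 holds by computation on the natural numbers:
    -- `rfl` (reflexivity) succeeds because both sides reduce to 5.
    example : 2 + 3 = 5 := rfl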

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as 'folk psychology') are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do. We have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)

Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the 'intentional stance' toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational, i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a 'moderate' realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic 'structural' or 'syntactic' properties. The semantic properties of a mental state, however, are determined by its extrinsic properties, e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ('what-it's-like') features ('Qualia'), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that non-conceptual representations - percepts ('impressions'), images ('ideas') and the like - are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only non-conceptual representations construed in this way.

Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as Qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether Qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties, i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained exclusively in terms of discursive, or propositional, representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation 'pictorial'; though of course there may be imagery in other modalities - auditory, olfactory, etc. - as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., representation in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., representation in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not), would be digital.
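
The contrast can be made concrete with a short sketch of our own (not drawn from any of the authors cited): the imagistic representation below has properties that vary continuously, while the conceptual one has a property it simply has or lacks.

```python
# A minimal illustrative sketch (our construction, not any cited author's
# formalism) of the analog/digital contrast described above.

from dataclasses import dataclass

@dataclass
class ImagisticRepresentation:
    # Analog: representational properties vary continuously.
    brightness: float  # any value in [0.0, 1.0]
    vividness: float   # any value in [0.0, 1.0]

@dataclass
class ConceptualRepresentation:
    # Digital: the property is all-or-nothing.
    about_elvis: bool

image = ImagisticRepresentation(brightness=0.73, vividness=0.20)
thought = ConceptualRepresentation(about_elvis=True)

# An image can become slightly brighter or more vivid ...
image.brightness += 0.01
# ... but a thought cannot be slightly more about Elvis: it either is or isn't.
print(image, thought)
```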

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is 'quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
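
Rey's suggestion can be illustrated with a small sketch (the parts and links are invented for the example): 'distance' between parts of a representation is measured by counting retrieval steps rather than spatial units.

```python
# A hypothetical sketch of the idea that distance between parts of a
# quasi-pictorial representation can be functional rather than spatial:
# the number of discrete computational steps needed to combine stored
# information about two parts. The graph and part names are invented.

links = {
    "nose": ["eyes"],
    "eyes": ["nose", "forehead"],
    "forehead": ["eyes", "hairline"],
    "hairline": ["forehead"],
}

def functional_distance(a, b):
    """Breadth-first count of retrieval steps between parts a and b."""
    frontier, seen, steps = {a}, {a}, 0
    while frontier:
        if b in frontier:
            return steps
        frontier = {n for part in frontier for n in links[part]} - seen
        seen |= frontier
        steps += 1
    return None  # b is not reachable from a

print(functional_distance("nose", "hairline"))  # -> 3 retrieval steps
```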

Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye's account, images are '(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance: the location of each 'cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object.
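
A rough rendering of such an array (our sketch; the labels and dimensions are invented, and Tye's own arrays are considerably richer):

```python
# An illustrative sketch, not Tye's own formalism: cell position carries
# pictorial significance (a viewer-centred 2-D location), while the symbol
# stored in a cell represents discursively (a surface-feature label).

rows, cols = 3, 4
array = [[None] * cols for _ in range(rows)]

# The location is represented by position in the array; the feature, by
# the discursive label stored there.
array[0][1] = "edge"
array[2][3] = "red-surface"

for r in range(rows):
    for c in range(cols):
        if array[r][c]:
            print(f"viewer-centred location ({r}, {c}): {array[r][c]}")
```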

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories and Teleological Theories. The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories hold that the content of a mental representation is grounded in its causal, computational, or inferential relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists about phenomenal as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone are internalists (or individualists).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both 'narrow' content (determined by intrinsic factors) and 'wide' or 'broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982), and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role or its phenomenology.

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that narrow content may be unnecessary for naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frege cases, are nomologically either impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind, claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind develops the representational theory of mind by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some - so-called 'subpersonal' or 'sub-doxastic' representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental. That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the 'mental models' of Johnson-Laird 1983, the 'retinal arrays,' 'primal sketches' and '2½-D sketches' of Marr 1982, the 'frames' of Minsky 1974, the 'sub-symbolic' structures of Smolensky 1989, the 'quasi-pictures' of Kosslyn 1980, and the 'interpreted symbol-filled arrays' of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief.

Classicists hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. Connectionists hold that mental representations are realized by patterns of activation in a network of simple processors ('nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are typically not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of connectionism - 'localist' versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program.

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
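
The combinatorial picture can be illustrated with a toy sketch (our invention; the 'language' has two primitives and a single formation rule):

```python
# A toy sketch of the combinatorial structure the language of thought
# hypothesis describes: a finite stock of primitive symbols plus recursive
# formation rules yields complex representations whose content is fixed
# compositionally. The primitives and rule are invented for illustration.

primitives = {
    "Elvis": "the individual Elvis",
    "sings": "the property of singing",
}

def combine(pred, subj):
    """Recursive formation rule: build a complex representation."""
    return ("PRED", pred, subj)

def content(rep):
    """Compositional semantics: the content of a complex is determined by
    the contents of its constituents and their configuration."""
    if isinstance(rep, str):
        return primitives[rep]
    _, pred, subj = rep
    return f"that {content(subj)} has {content(pred)}"

thought = combine("sings", "Elvis")
print(content(thought))
# -> that the individual Elvis has the property of singing
```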

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of an evolving distribution of 'weights' (strengths) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
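
A standard perceptron-style learning rule (a generic textbook sketch, not any particular model from the connectionist literature) illustrates the point: learning is nothing more than the gradual adjustment of connection weights under repeated exposure.

```python
# A minimal connectionist sketch: the network is 'trained up' by repeated
# exposure, adjusting connection strengths rather than formulating
# hypotheses. The toy data and learning rate are invented.

examples = [  # (input features, target class)
    ((1.0, 0.0), 1),
    ((0.9, 0.2), 1),
    ((0.1, 1.0), 0),
    ((0.0, 0.8), 0),
]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(50):  # many exposures to the objects to be distinguished
    for features, target in examples:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        output = 1 if activation > 0 else 0
        error = target - output
        # Adjust the distribution of weights; no hypotheses are formulated.
        weights = [w + rate * error * x for w, x in zip(weights, features)]
        bias += rate * error

print(weights, bias)  # the learned distribution of connection strengths
```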

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively 'brittle' or 'fragile.'

Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures.

Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols but state variables or parameters.
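
The contrast with discrete symbolic transitions can be pictured with a toy dynamical system (the coefficients are invented, for illustration only): the total state evolves continuously, each variable's change determined simultaneously by the others.

```python
# A schematic sketch of the dynamical-systems picture: two coupled state
# variables evolving continuously in time, rather than rule-governed
# transitions between discrete symbols. Values are arbitrary.

x, y = 1.0, 0.0
dt = 0.01

for step in range(1000):
    dx = -0.5 * x + 1.2 * y  # each rate of change depends on the other
    dy = -1.2 * x - 0.5 * y  # variable: mutual, simultaneous determination
    x, y = x + dt * dx, y + dt * dy

print(round(x, 4), round(y, 4))  # the total state has evolved continuously
```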

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. Computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you take snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express. On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can be understood only in terms of its relations to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, similarly, a mental state may be identified only in terms of its relations with others. Moderate holism allows that other things besides these relations also count; extreme holism holds that the network of relations is all there is. A holistic view of science holds that experience confirms or disconfirms only large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustments required.

Externalism, in the philosophy of mind and language, is the view that what is thought, or said, or experienced is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors; it insists that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, which holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.

Atomistic theories, by contrast, take a representation's content to be something that can be specified independently of that representation's relations to other representations. What Fodor calls the crude causal theory, for example, takes a representation to be a 'cow' - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow; and this is a condition that places no explicit constraints on how 'cow's must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a 'cow' if it behaves as a 'cow' should behave in inference.

Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke an historical theory of functions, take content to be determined by 'external' factors. Crossing the atomistic-holistic distinction with the internalist-externalist distinction thus yields four possible classifications of theories of content.

Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
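
Fodor's proposal can be pictured with a deliberately bare sketch (our gloss, not Fodor's formalism), using the familiar Twin-Earth contrast: internally identical thinkers share the narrow content, which yields different wide contents in different environments.

```python
# An illustrative rendering of the idea that narrow content is a function
# from contexts to wide contents. The context labels and values are the
# standard Twin-Earth example, encoded by us for illustration.

def narrow_content_water(context):
    """The shared narrow content: maps an external context to a wide content."""
    return {"Earth": "H2O", "Twin Earth": "XYZ"}[context]

# Molecule-for-molecule twins, differently embedded, get different
# wide contents from the same narrow content:
print(narrow_content_water("Earth"))       # -> H2O
print(narrow_content_water("Twin Earth"))  # -> XYZ
```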

All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situations surrounding them are appropriately different: 'situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains the same, given the identity of the way things appear to them, regardless of these differences of surroundings. Partisans of wide content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.

Even so, the distinction between facts and values has outgrown its name: it applies not only to matters of fact vs. matters of value, but also to statements that something is vs. statements that something ought to be. Roughly, factual statements - 'is statements' in the relevant sense - represent some state of affairs as obtaining, whereas normative statements - evaluative and deontic ones - attribute goodness to something, or ascribe, to an agent, an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement, though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially expresses a statement of fact, and 'by all indications it ought to rain' makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.

Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute - in a factually analysable way - to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating or attributing an obligation, on the one hand, and saying how the world is, on the other.

Fact/value distinctions may be defended by appeal to the notion of intrinsic value: value a thing has in itself, and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements; but even if they imply that something has intrinsic value - e.g., moral value - they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.

Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulations - a belief is justified, or constitutes knowledge. Its standards of justification, then, would be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in terms of factual statements.

Thus far, belief has been depicted as all-or-nothing. The notion of acceptance extends this picture: one may accept a proposition one has grounds for thinking true, and such acceptance is governed by epistemic norms, is partially subject to voluntary control, and has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture; it does not replace it.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent 'S' exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true; and your belief in free markets or God, a matter of your believing that free-market economies are desirable or that God exists.

Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this pro-attitude is united with his belief that God exists, and reasonably so, his belief may resist the evidence in a way that an ordinary propositional belief would not.

A correlative way of elaborating on the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point once again to reliabilism, the claim is that a believer who has no reason to think that he has such a cognitive power, and, perhaps, even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to 'bite the bullet' and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains doubtful whether they can handle all of the problematic cases, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that the externalist is committed to rejecting.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, that further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction does exist) that such individuals are epistemically justified in their beliefs. It is, at least, less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors. (Views that appeal to both internal and external elements are standardly classified as externalist.)

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the status of that content as justifying further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can stand in such relations; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Except for alleged cases of self-evident truths - things that are evident for one just by being true - it has often been thought that anything that is known must satisfy certain criteria as well as being true. These criteria are general principles that will make a proposition evident or just make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that status can in turn be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Traditional suggestions for such criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, more simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evident nor already criterially evident.

The resulting difficulty is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences and necessary truths, to which deductive or inductive criteria may be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. And transmission criteria might not simply 'pass' evidence on linearly from a foundation of highly evident 'premisses' to 'conclusions' that are never more evident.

An argument is a group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case, if all its premisses are true, its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
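
The truth-functional core of the validity test can be rendered as a small sketch (our illustration): an argument form is valid just in case no assignment of truth values makes all premisses true and the conclusion false.

```python
# An illustrative validity checker for argument forms built from two
# atomic propositions. Premisses and conclusion are encoded (by us) as
# truth functions of the atoms p and q.

from itertools import product

def valid(premisses, conclusion):
    """Valid iff no assignment makes all premisses true and conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premisses) and not conclusion(p, q):
            return False  # a counterexample assignment exists
    return True

# Modus ponens: from "p" and "if p then q", infer "q".
premisses = [lambda p, q: p, lambda p, q: (not p) or q]
conclusion = lambda p, q: q
print(valid(premisses, conclusion))  # -> True
```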

Finally, a proof is a collection of considerations and reasonings that instill and sustain conviction that some proposed theorem - the theorem proved - is not only true, but could not possibly be false. A perceptual observation may instill the conviction that water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that 2 + 3 = 5 is true, but also that 2 + 3 could not be anything but 5.

No one has succeeded in replacing this largely psychological characterization of proofs by a more objective characterization. The representations or reconstructions of proofs as mechanical derivations in formal-logical systems all but completely fail to capture 'proofs' as mathematicians are quite content to give them. For example, formal-logical derivations depend solely on the logical form of the propositions considered, whereas proofs usually depend in large measure on the content of propositions, not just their logical form.
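
For contrast, here is what a fully formal derivation of the arithmetic fact looks like in a modern proof assistant (a sketch in Lean; the choice of system is ours). Its brevity and purely computational character illustrate exactly what the psychological characterization above says such derivations leave out.

```lean
-- A formal-logical derivation of the fact discussed above: the proof
-- term 'rfl' certifies that 2 + 3 computes to 5 by definitional
-- unfolding. Whether this captures what mathematicians mean by 'proof'
-- is the question at issue in the surrounding text.
example : 2 + 3 = 5 := rfl
```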


~

Can we go beyond the scientific mode of exploration and come to know the real nature of the smallest constituent units of the universe? We learn from Plato that there are different levels of knowledge, one of which is contemplation. Unlike the conclusions of discursive reasoning, which functions in the subject/object mode, the insight contemplation brings comes with 'utter certainty'. When an insight is formulated, however, the certainty is lost. Nevertheless, the combined results of contemplation and discursive reasoning can lead to the creation of magnificent conceptual structures.

~

The function of insight gives a transcendental content that, when reduced to an interpretative system, becomes subject to the relativity of all subject/object consciousness; therefore, there can be no such thing as an infallible interpretation. Thus we must distinguish between an insight and its formulation.

In recent decades, another branch of evolutionary theory has appeared, as researchers have explored the possibility that not only physical traits, but behaviour itself, might be inherited. Behavioural geneticists have studied how genes influence behaviour, and more recently, the role of biology in social behaviour has been explored. This field of investigation, known as sociobiology, was inaugurated in 1975 with the publication of the book Sociobiology: The New Synthesis by American evolutionary biologist Edward O. Wilson. In this book, Wilson proposed that genes influence much of animal and human behaviour, and that these characteristics are subject to natural selection.

Sociobiologists examine animal behaviours called altruistic - that is, unselfish, or demonstrating concern for the welfare of others. When birds feed on the ground, for example, one individual may notice a predator and sound an alarm. In so doing, the bird also calls the predator's attention to itself. What can account for the behaviour of such a sentry, who seems to derive no evolutionary benefit from its unselfish behaviour and so seems to defy the laws of natural selection?

Darwin was aware of altruistic social behaviour in animals, and of how this phenomenon challenged his theory of natural selection. Among the different types of bees in a colony, for example, worker bees are responsible for collecting food, defending the colony, and caring for the nest and the young, but they are sterile and create no offspring; only the queen bee reproduces. If natural selection rewards those who have the highest reproductive success, how could sterile worker bees come about by natural selection, when worker bees devote themselves to others and do not reproduce?

Scientists now recognize that among social insects, such as bees, wasps, and ants, the sterile workers are more closely related genetically to one another and to their fertile sisters, the queens, than brothers and sisters are among other organisms. By helping to protect or nurture their sisters, the sterile worker bees preserve their own genes more effectively than if they reproduced themselves. Thus, the altruistic behaviour evolved by natural selection.

Evolutionary theory has undergone many further refinements in recent years. One such theory challenges the central idea that evolution goes on by gradual change. In 1972 the American paleontologists Stephen Jay Gould and Niles Eldredge proposed the theory of punctuated equilibria. According to this theory, trends in the fossil record cannot be attributed to gradual transformation within a lineage, but result from quick bursts of rapid evolutionary change. In Darwinian theory, new species arise by gradual, but not necessarily uniform, accumulation of many small genetic changes over long periods of geologic time. In the fossil record, however, new species generally appear suddenly after long periods of stasis - that is, no change. Gould and Eldredge recognized that speciation more likely occurs in small, isolated, peripheral populations than in the main population of the species, and that the unchanging nature of large populations contributes to the stasis of most fossil species over millions of years. Occasionally, when conditions are right, the equilibrium state becomes 'punctuated' by one or more speciation events. While these events probably require thousands or tens of thousands of years to establish effective reproductive isolation and distinctive characteristics, this is but an instant in geologic time compared with an average life span of more than ten million years for most fossil species. Proponents of this theory envision evolutionary development as more like climbing a flight of stairs (punctuations followed by stasis) than rolling up an inclined plane.

In the last several decades, scientists have questioned the role of extinction in evolution. Of the millions of species that have existed on this planet, more than 99 percent are extinct. Historically, biologists regarded extinction as a natural outcome of competition between newly evolved, adaptively superior species and their older, more primitive ancestors. Recently, however, paleontologists have discovered that many different, unrelated species living in large ecosystems tend to become extinct at nearly the same time. The cause is usually some sort of climatic change or catastrophic event that produces conditions too severe for most organisms to endure. Moreover, new species evolve after the wave of extinction removes many species that previously occupied a region for millions of years. Thus extinction does not result from evolution, but causes it.

Scientists have identified several instances of mass extinction, when species apparently died out on a huge scale. The greatest of these episodes occurred at the end of the Permian Period, some 245 million years ago. Then, according to estimates, more than 95 percent of species - nearly all life on the planet - died out. Another extensively studied extinction took place at the boundary of the Cretaceous Period and the Tertiary Period, roughly sixty-five million years ago, when the dinosaurs disappeared. In all, more than twenty global mass extinctions have been identified. Some scientists theorize that such events may even be cyclical, occurring at regular intervals.

Some proposed explanations implicate damage to the genetic material itself - the chromosomal DNA that carries the hereditary information necessary for cell growth and reproduction.

Other theories have centered on abrupt changes in the levels of the world's oceans, for example, or on the effect of changing salinity on early sea life. Another theory blames catastrophic events for mass extinction. Strong evidence, for example, supports the theory that a meteorite some 10 km. (6 mi.) in diameter struck the Earth 65 million years ago. The dust cloud from the collision, according to this impact theory, shrouded the Earth for months, blocking the sunlight that plants need to survive. Without plants to eat, the dinosaurs and many other species of land animals were wiped out.

Extinction as a cause of evolution rather than a result of it is perhaps best illustrated by our own ancestors, the ancient mammals. During the time of the dinosaurs, mammals made up only a small fraction of the animals that roamed the planet. The demise of the dinosaurs provided an opportunity for mammals to expand their numbers and ultimately to become the dominant land animals. Without the catastrophe that took place sixty-five million years ago, mammals might have remained in the shadow of the dinosaurs. Extinction, moreover, is not exclusively a natural phenomenon. For thousands of years, as the human species has grown in number and technological sophistication, we have shown our power to cause extinction and to upset the world's ecological balance. In North America alone, for example, about forty species of birds and more than thirty-five species of mammals have become extinct in the last few hundred years, mostly from human activity. Humans have exterminated plants and animals by relentlessly hunting or harvesting them, by destroying their habitats and replacing them with farms and other forms of development, by introducing foreign species that hunt or compete with local species, and by poisoning them with chemicals and other pollutants.

The rain forests of South America and other tropical regions offer a particularly troubling scenario. Upwards of fifty million acres of rain forest disappear every year as humans raze trees to make room for agriculture and livestock. Given that a single acre of rain forest may contain thousands of irreplaceable species of plant and animal life, the threat to biodiversity is severe. The conservation of wildlife is now an international concern, as evidenced by treaties and agreements enacted at the 1992 Earth Summit in Rio de Janeiro, Brazil. In the United States, federal laws protect endangered species. Nonetheless, the problem of dwindling biodiversity seems certain to worsen as the human population continues to expand, and no one knows for sure how it will affect evolution.

Advances in medical technology may also affect natural selection. The study from the mid-20th century showing that babies of medium birth weight were more likely to survive than their heavier or lighter counterparts would be difficult to reproduce today. Advances in neonatal medical technology have made it possible for small or premature babies to survive in much greater numbers.

Recent genetic analysis shows that the human population carries harmful mutations at unprecedented levels. Researchers attribute this to genetic drift acting on small human populations throughout history. They also expect that improved medical technology may exacerbate the problem. Better medicine enables more people to survive to reproductive age, even if they carry mutations that in past generations would have caused their early death. The genetic repercussions of this are still unknown, but biologists speculate that many minor problems, such as poor eyesight, headaches, and stomach upsets, may be attributable to our collection of harmful mutations.

Humans have also developed the potential to affect evolution at the most basic level: the genes. The techniques of genetic engineering have become commonplace. Scientists can extract genes from living things, alter them by combining them with another segment of DNA, and then place this recombinant DNA back inside the organism. Genetic engineering has produced pest-resistant crops and larger cows and other livestock. To an increasing extent, genetic engineers also fight human diseases such as cancer and heart disease. Gene therapy, in which scientists substitute functioning copies of a gene for a defective one, is an active field of medical research. Whether such tinkering with genetic material will affect evolution remains to be determined.

The most contentious debates over evolution have involved religion. From Darwin's day to the present, members of some religious faiths have perceived the scientific theory of evolution to be in direct and objectionable conflict with religious doctrine regarding the creation of the world. Most religious denominations, however, see no conflict between the scientific study of evolution and religious teachings about creation. Christian Fundamentalists and others who believe literally in the biblical story of creation reject evolutionary theory because it contradicts the book of Genesis, which describes how God created the world and all its plant and animal life in six days. Many such people maintain that the Earth is comparatively young, perhaps 6,000 to 8,000 years old, and that humans and all the world's species have remained unchanged since their recent creation by a divine hand.

Opponents of evolution argue that only a divine intelligence, and not some comparatively random, undirected process, could have created the variety of the world's species, not to mention an organism as complex as a human being. Some people are upset by the oversimplification that humans evolved from monkeys. In the eyes of some, a divine being placed humans apart from the animal world. Proponents of this view find any attempt to place humans within the context of natural history deeply insulting.

For decades, the teaching of evolution in schools has been a flash point in the conflict between religious fundamentalism and science. During the 1920s, Fundamentalists lobbied against the teaching of evolution in public schools. Four states, Arkansas, Mississippi, Oklahoma, and Tennessee, passed laws outlawing public-school instruction in the principles of Darwinian evolution. In 1925 John Scopes, a biology teacher in Dayton, Tennessee, assigned his students readings about Darwinism, in direct violation of state law. Scopes was arrested and placed on trial. In what was the major trial of its time, American defence attorney Clarence Darrow represented Scopes, while American politician William Jennings Bryan argued for the prosecution. Ultimately, Scopes was convicted and received a small fine. However, the ‘Monkey Trial,' as it came to be called, was seen as a victory for evolution, since Darrow, in cross-examining Bryan, succeeded in pointing out several serious inconsistencies in Fundamentalist belief.

Laws against the teaching of evolution remained on the books for another forty years, until the Supreme Court of the United States, in its 1968 decision in Epperson v. Arkansas, ruled that such laws were an unconstitutional violation of the legally required separation of church and state. Over the next few years, Fundamentalists responded by de-emphasizing the religious content of their doctrine and instead casting their arguments as a scientific alternative to evolution called creation science, now also called intelligent design theory. In response to Fundamentalist pressure, twenty-six states debated laws that would require teachers to spend equal amounts of time teaching creation science and evolution. Only two states, Arkansas and Louisiana, passed such laws. The Arkansas law was struck down in federal district court, while proponents of the Louisiana law appealed all the way to the Supreme Court. In its 1987 decision in Edwards v. Aguillard, the Court struck down such equal-time laws, ruling that creation science is a religious idea and that mandating its teaching violates the church-state separation. Despite these rulings, school board members and other government officials continue to grapple with the long-standing debate between creationism and evolutionary science. Even so, efforts to permit the teaching of intelligent design theory in public schools have been unsuccessful, while scientists have sought, and found, evidence for evolution. The fossil record demonstrates that life on this planet was vastly different millions of years ago. Fossils, furthermore, provide evidence of how species change over time. The study of comparative anatomy has highlighted physical similarities in the features of widely different species, evidence of common ancestry. Bacteria that mutate and develop resistance to antibiotics, along with other observable instances of adaptation, demonstrate evolutionary principles at work. The study of genes, proteins, and other molecular evidence has added to the understanding of evolutionary descent and the relationships among all living things. Research in all these areas has led to overwhelming support for evolution among scientists.

Nevertheless, evolutionary theory is still, in some cases, the cause of misconception or misunderstanding. People often misconstrue the phrase ‘survival of the fittest', interpreting it to mean that survival is the reward for the strongest, the most vigorous, or the most dominant. In the Darwinian sense, however, fitness does not mean strength so much as the capacity to adapt successfully. This might mean developing adaptations for more efficiently obtaining food, for escaping predators, or for enduring climate change, in short, for thriving in a given set of circumstances.

Yet it bears repeating that organisms do not change their characteristics in direct response to the environment. The key is genetic variation within a population, and the potential for new combinations of traits. Nature selects those individuals whose characteristics best equip them to flourish in a given environment or niche. These individuals have the greatest degree of reproductive success, passing their successful traits on to their descendants.

Another misconception is that evolution always progresses to better creatures. In fact, if species become too narrowly adapted to a given environment, they may ultimately lose the genetic variation necessary to survive sudden changes. Evolution, in such cases, will lead to extinction.

Human evolution is the lengthy process of change by which people originated from apelike ancestors. Scientific evidence shows that the physical and behavioural traits shared by all people evolved over a period of at least six million years.

One of the earliest defining human traits, bipedalism, walking on two legs as the primary form of locomotion, evolved more than four million years ago. Other important human characteristics, such as a large and complex brain, the ability to make and use tools, and the capacity for language, developed more recently. Many advanced traits, including complex symbolic expression, such as art, and elaborate cultural diversity, emerged mainly during the past 100,000 years.

Our closest living relatives are three surviving species of great apes: the gorilla, the common chimpanzee, and the pygmy chimpanzee (also known as the bonobo). Their confinement to Africa, along with abundant fossil evidence, suggests that the earliest stages of human evolution were also played out in Africa. Human history, as something separate from the history of animals, began there about seven million years ago (estimates range from five to nine million years ago). Around that time, a population of African apes split into several populations, of which one went on to evolve into modern gorillas, a second into the two modern chimps, and the third into humans. The gorilla line apparently split off before the split between the chimp and the human lines.

Fossils indicate that the evolutionary line leading to us had achieved an upright posture by around four million years ago, then began to increase in body size and in relative brain size around 2.5 million years ago. Those protohumans are generally known as Australopithecus africanus, Homo habilis, and Homo erectus, which apparently evolved into each other in that sequence. Although Homo erectus, the stage reached around 1.7 million years ago, was close to us modern humans in body size, its brain size was still barely half of ours. Stone tools became common around 2.5 million years ago, but they were merely the crudest of flaked or battered stones. In zoological significance and distinctiveness, Homo erectus was more than an ape, but still much less than a modern human.

All of that human history, for the first five or six million years after our origins about seven million years ago, remained confined to Africa. The first human ancestor to spread beyond Africa was Homo erectus, as attested by fossils discovered on the Southeast Asian island of Java and conventionally known as Java man. (Of course, the remains may actually have belonged to a Java woman.) The oldest Java ‘man' fossils have usually been dated to about a million years ago. However, it has recently been argued that they date from 1.8 million years ago. (Strictly speaking, the name Homo erectus belongs to these Javan fossils, and the African fossils classified as Homo erectus may warrant a different name.) At present, the earliest unquestioned evidence for humans in Europe stems from around half a million years ago, but there are claims of an earlier presence. One would assume that the colonization of Asia also permitted the simultaneous colonization of Europe, since Eurasia is a single landmass not bisected by major barriers.

By about half a million years ago, human fossils had diverged from older Homo erectus skeletons in their enlarged, rounder, and less angular skulls. African and European skulls of half a million years ago were sufficiently similar to skulls of modern humans that they are classified in our species, Homo sapiens, instead of in Homo erectus. This distinction is arbitrary, since Homo erectus evolved into Homo sapiens. However, these early Homo sapiens still differed from us in skeletal details, had brains significantly smaller than ours, and were grossly different from us in their artifacts and behaviour. Modern stone-tool-making peoples, such as Yali's great-grandparents, would have scorned the stone tools of half a million years ago as very crude. The only significant addition to our ancestors' cultural repertoire that can be documented with confidence around that time was the use of fire.

No art, bone tools, or anything else has come down to us from early Homo sapiens except their skeletal remains and those crude stone tools. There were still no humans in Australia, because it would have taken boats to get there from Southeast Asia. There were also no humans anywhere in the Americas, because that would have required the occupation of the nearest part of the Eurasian continent (Siberia), and possibly boat-building skills as well. (The present shallow Bering Strait, separating Siberia from Alaska, alternated between a strait and a broad intercontinental bridge of dry land as sea level repeatedly rose and fell during the Ice Ages.) Boat building and survival in cold Siberia, however, were both far beyond the capabilities of early Homo sapiens. After half a million years ago, the human populations of Africa and western Eurasia proceeded to diverge from each other and from East Asian populations in skeletal details. The population of Europe and western Asia between 130,000 and 40,000 years ago is represented by especially many skeletons, known as Neanderthals and sometimes classified as a separate species, Homo neanderthalensis.

Yet their stone tools were still crude by comparison with modern New Guineans' polished stone axes and were usually not yet made in standardized diverse shapes, each with a clearly recognizable function.

The few preserved African skeletal fragments contemporary with the Neanderthals are more similar to our modern skeletons than are Neanderthal skeletons. Even fewer preserved East Asian skeletal fragments are known, but they appear different again from both Africans and Neanderthals. As for the lifestyle at that time, the best-preserved evidence comes from stone artifacts and animal bones accumulated at southern African sites. Although those Africans of 100,000 years ago had more modern skeletons than did their Neanderthal contemporaries, they made essentially the same crude stone tools as Neanderthals, still lacking standardized shapes. They had no preserved art. To judge from the bone evidence of the animal species on which they preyed, their hunting skills were unimpressive and mainly directed at easy-to-kill, not-at-all-dangerous animals. They were not yet in the business of slaughtering buffalo, pigs, and other dangerous prey. They could not even catch fish: their sites immediately on the seacoast lack fish bones and fishhooks. They and their Neanderthal contemporaries still rank as less than fully human.

Although Neanderthals lived in glacial times and were adapted to the cold, they penetrated no farther north than northern Germany and Kiev. That is not surprising, since Neanderthals apparently lacked needles, sewn clothing, warm houses, and other technology essential to survival in the coldest climates. Anatomically modern peoples who did possess such technology had expanded into Siberia by around 20,000 years ago (there are the usual much older disputed claims). That expansion may have been responsible for the extinction of Eurasia's woolly mammoth and woolly rhinoceros. With the settlement of Australia/New Guinea, humans now occupied three of the five habitable continents. (Antarctica is omitted because it was not reached by humans until the 19th century and has never had any self-supporting human population.) That left only two continents, North America and South America. They were the last to be settled, for the obvious reason that reaching the Americas from the Old World required either boats (for which there is no evidence even in Indonesia until 40,000 years ago and none in Europe until much later) to cross by sea, or else the occupation of Siberia (unoccupied until about 20,000 years ago) to cross the Bering land bridge. However, it is uncertain when, between about 14,000 and 35,000 years ago, the Americas were first colonized.

Meanwhile, human history at last took off around 50,000 years ago. The earliest definite signs come from East African sites with standardized stone tools and the first preserved jewellery (ostrich-shell beads). Similar developments soon appear in the Near East and in southeastern Europe, then (some 40,000 years ago) in southwestern Europe, where abundant artefacts are associated with fully modern skeletons of people termed Cro-Magnons. Thereafter, the garbage preserved at archaeological sites rapidly becomes ever more interesting and leaves no doubt that we are dealing with biologically and behaviourally modern humans.

Cro-Magnons' garbage heaps yield not only stone tools but also tools of bone, whose suitability for shaping (for instance, into fish hooks) had apparently gone unrecognized by previous humans. Tools were produced in such diverse and distinctive shapes that their functions as needles, awls, engraving tools, and so on are obvious to us. Instead of only single-piece tools such as hand-held scrapers, multi-piece tools made their appearance. Recognizable multi-piece weapons at Cro-Magnon sites include harpoons, spear-throwers, and eventually bows and arrows, the precursors of rifles and other multi-piece modern weapons. Those efficient means of killing at a safe distance permitted the hunting of such dangerous prey as rhinos and elephants, while the invention of rope for nets, lines, and snares allowed the addition of fish and birds to our diet. Remains of houses and sewn clothing testify to a greatly improved ability to survive in cold climates, and remains of jewellery and carefully buried skeletons indicate revolutionary aesthetic and spiritual developments.

Of the Cro-Magnons' preserved products, the best known are their artworks: their magnificent cave paintings, statues, and musical instruments, which we still appreciate as art today. Anyone who has experienced firsthand the overwhelming power of the life-sized painted bulls and horses in the Lascaux Cave of southern France will understand at once that their creators must have been as modern in their minds as they were in their skeletons.

Obviously, some momentous change took place in our ancestors' capabilities between about 100,000 and 50,000 years ago, presenting us with two major unresolved questions: its triggering cause and its geographic location. As for its cause, some argue for the perfection of the voice box, and hence for the anatomical basis of modern language, on which the exercise of human creativity is so dependent. Others have suggested instead that a change in brain organization around that time, without a change in brain size, made modern language possible.

As for the location of this leap, did it take place primarily in one geographic area, in one group of humans, who were thereby enabled to expand and replace the former human populations of other parts of the world? Or did it occur in parallel in different regions, in each of which the human populations living today would be the descendants of the populations living there before the leap? The rather modern-looking human skulls from Africa of around 100,000 years ago have been taken to support the former view, with the leap occurring specifically in Africa. Molecular studies (of so-called mitochondrial DNA) were initially also interpreted in terms of an African origin of modern humans, though the meaning of those molecular findings is currently in doubt. On the other hand, skulls of humans living in China and Indonesia hundreds of thousands of years ago are considered by some physical anthropologists to exhibit features still found in modern Chinese and in Aboriginal Australians, respectively. If true, that finding would suggest parallel evolution and multiregional origins of modern humans, rather than origins in a single Garden of Eden. The issue remains unresolved.

The evidence for a localized origin of modern humans, followed by their spread and then their replacement of other types of humans elsewhere, is strongest for Europe. Some 40,000 years ago, into Europe came the Cro-Magnons, with their modern skeletons, superior weapons, and other advanced cultural traits. Within a few thousand years there were no more Neanderthals, who had been evolving as the sole occupants of Europe for hundreds of thousands of years. The sequence strongly suggests that the modern Cro-Magnons somehow used their far superior technology, and their language skills or brains, to infect, kill, or displace the Neanderthals, leaving behind no evidence of hybridization between Neanderthals and Cro-Magnons.

Physical and genetic similarities show that the modern human species, Homo sapiens, has a very close relationship to another group of primate species, the apes. Humans and the so-called great apes of Africa, chimpanzees (including bonobos, or so-called pygmy chimpanzees) and gorillas, share a common ancestor that lived sometime between eight million and six million years ago. The earliest humans evolved in Africa, and much of human evolution occurred on that continent. The fossils of early humans who lived between six million and two million years ago come entirely from Africa.

We should be reminded of the ways in which big domestic mammals were crucial to those human societies possessing them. Most notably, they provided meat, milk products, fertilizer, land transportation, leather, means of military assault, plow traction, and wool, as well as germs that killed previously unexposed peoples.

In addition, of course, small domestic mammals and domestic birds and insects have also been useful to humans. Many birds were domesticated for meat, eggs, and feathers: the chicken in China, various duck and goose species in parts of Eurasia, turkeys in Mesoamerica, guinea fowl in Africa, and the Muscovy duck in South America. Wolves were domesticated in Eurasia and North America to become our dogs, used as hunting companions, sentinels, pets, and, in some societies, food. Rodents and other small mammals domesticated for food include the rabbit in Europe, the guinea pig in the Andes, a giant rat in West Africa, and possibly a rodent called the hutia on Caribbean islands. Ferrets were domesticated in Europe to hunt rabbits, and cats were domesticated in North Africa and Southwest Asia to hunt rodent pests. Small mammals domesticated as recently as the 19th and 20th centuries include foxes, mink, and chinchillas raised for fur, and hamsters kept as pets. Even some insects have been domesticated, notably Europe's honeybee and China's silkworm moth, kept for honey and silk, respectively.

Many of these small animals thus yielded food, clothing, or warmth, but none of them pulled plows or wagons, none bore riders, none except dogs pulled sleds or served as war machines, and none of them has been as important for food as have big domesticated mammals.

Most scientists distinguish among twelve to nineteen different species of early humans. Scientists do not all agree, however, about how the species are related or which ones simply died out. Many early human species, probably most of them, left no descendants. Scientists also debate how to identify and classify particular species of early humans, and what factors influenced the evolution and extinction of each species.

Early humans first migrated out of Africa into Asia probably between two million and 1.7 million years ago. They entered Europe later, generally within the past one million years. Species of modern humans populated many parts of the world much later. For instance, people first came to Australia probably within the past 60,000 years, and to the Americas within the past 35,000 years. The beginnings of agriculture and the rise of the first civilizations occurred within the past 10,000 years.

The scientific study of human evolution is called paleoanthropology, a subfield of anthropology, the study of human culture, society, and biology. Paleoanthropologists search for the roots of human physical traits and behaviour. They seek to discover how evolution has shaped the potentials, tendencies, and limitations of all people. For many people, paleoanthropology is an exciting scientific field because it illuminates the origins of the defining traits of the human species, as well as the fundamental connections between humans and other living organisms on Earth. Scientists have abundant evidence of human evolution from fossils, artifacts, and genetic studies. However, some people find the concept of human evolution troubling because it can seem to conflict with religious and other traditional beliefs about how people, other living things, and the world came to be. Yet many people have come to reconcile such beliefs with the scientific evidence.

All species of organisms originate through the process of biological evolution. In this process, new species arise from a series of natural changes. In animals that reproduce sexually, including humans, the term species refers to a group whose adult members regularly interbreed, resulting in fertile offspring, that is, offspring themselves capable of reproducing. Scientists classify each species with a unique, two-part scientific name. In this system, modern humans are classified as Homo sapiens.

The mechanism for evolutionary change resides in genes, the basic units of heredity. Genes affect how the body and behaviour of an organism develop during its life. The information contained within genes can change through a process known as mutation. The way particular genes are expressed, that is, how they affect the body or behaviour of an organism, can also change. Over time, genetic change can alter a species' overall way of life, such as what it eats, how it grows, and where it can live.

Genetic changes can improve the ability of organisms to survive, reproduce, and, in animals, raise offspring. This process is called adaptation. Parents pass adaptive genetic changes to their offspring, and ultimately these changes become common throughout a population, a group of organisms of the same species that share a particular local habitat. Many factors can favour new adaptations, but changes in the environment often play a role. Ancestral human species adapted to new environments as their genes changed, altering their anatomy (physical body structure), physiology (bodily functions, such as digestion), and behaviour. Over long periods, evolution dramatically transformed humans and their ways of life.

Geneticists estimate that the human line began to diverge from that of the African apes between eight million and five million years ago (paleontologists have dated the earliest human fossils to at least six million years ago). This figure comes from comparing differences in the genetic makeup of humans and apes, and then calculating how long it probably took for those differences to develop. Using similar techniques and comparing the genetic variations among human populations around the world, scientists have calculated that all people may share common genetic ancestors that lived sometime between 290,000 and 130,000 years ago.
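The logic of such molecular-clock estimates can be sketched in a few lines. The sketch below is illustrative only: the function name, the substitution rate, and the 1.5 percent sequence difference are assumptions for the example, not figures from the text. The key idea is that differences accumulate independently along both diverging lineages, hence the factor of two in the denominator.

    # Molecular-clock sketch: estimate the time since two lineages
    # diverged from the fraction of differing sites and an assumed,
    # roughly constant substitution rate per site per year.
    def divergence_time_years(genetic_difference, rate_per_site_per_year):
        # Both lineages accumulate changes, so the total separation
        # grows at twice the per-lineage rate.
        return genetic_difference / (2.0 * rate_per_site_per_year)

    # Illustrative numbers: a 1.5 percent sequence difference and a rate
    # of one substitution per site per billion years yield an estimated
    # divergence time of 7.5 million years.
    print(divergence_time_years(0.015, 1e-9))  # 7500000.0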

Humans belong to the scientific order named Primates, a group of more than 230 species of mammals that also includes lemurs, lorises, tarsiers, monkeys, and apes. Modern humans, early humans, and other species of primates all have many similarities and some important differences. Knowledge of these similarities and differences helps scientists to understand the roots of many human traits, and the significance of each step in human evolution.

All primates, including humans, share at least part of a set of common characteristics that distinguish them from other mammals. Many of these characteristics evolved as adaptations for life in the trees, the environment in which earlier primates evolved. These include greater reliance on sight than on smell; overlapping fields of vision, allowing stereoscopic (three-dimensional) vision; limbs and hands adapted for clinging to, leaping from, and swinging on tree trunks and branches; the ability to grasp and manipulate small objects (using fingers with nails instead of claws); large brains in relation to body size; and complex social lives.

The scientific classification of primates reflects evolutionary relationships between individual species and groups of species. Strepsirhine (meaning ‘turned-nosed') primates, of which the living representatives include lemurs, lorises, and other groups of species commonly known as prosimians, evolved earliest and are the most primitive forms of primates. The earliest monkeys and apes evolved from ancestral haplorhine (meaning ‘simple-nosed') primates, of which the most primitive living representative is the tarsier. Humans evolved from ape ancestors.

Tarsiers have traditionally been grouped with prosimians, but many scientists now recognize that tarsiers, monkeys, and apes share some distinct traits, and group the three together. Monkeys, apes, and humans, who share many traits not found in other primates, together make up the suborder Anthropoidea. Apes and humans together make up the superfamily Hominoidea, a grouping that emphasizes the close relationship among the species of these two groups.

Strepsirhines are the most primitive types of living primates. The last common ancestors of strepsirhines and other mammals, creatures similar to tree shrews and classified as Plesiadapiformes, evolved at least sixty-five million years ago. The earliest primates evolved about fifty-five million years ago, and fossil species similar to lemurs evolved during the Eocene Epoch (about fifty-five million to thirty-eight million years ago). Strepsirhines share all of the basic characteristics of primates, although their brains are not particularly large or complex and they have a more elaborate and sensitive olfactory system (sense of smell) than do other primates.

Tarsiers are the only living representatives of a primitive group of primates that ultimately led to monkeys, apes, and humans. Fossil species called omomyids, with some traits similar to those of tarsiers, evolved near the beginning of the Eocene, followed by early tarsier-like primates. While the omomyids and tarsiers are separate evolutionary branches (and there are no living omomyids), they share features including a reduction of the olfactory system, a trait shared by all haplorhine primates, including humans.

The anthropoid primates are divided into New World (South America, Central America, and the Caribbean Islands) and Old World (Africa and Asia) groups. New World monkeys, such as marmosets, capuchins, and spider monkeys, belong to the infra-order of platyrrhine (broad-nosed) anthropoids. Old World monkeys and apes belong to the infra-order of catarrhine (downward-nosed) anthropoids. Since humans and apes together make up the hominoids, humans are also catarrhine anthropoids.

The first catarrhine primates evolved between fifty million and thirty-three million years ago. Most primate fossils from this period have been found in a region of northern Egypt known as Al Fayyūm (or the Fayum). A primate group known as Propliopithecus, one lineage of which is sometimes called Aegyptopithecus, had primitive catarrhine features, that is, it had many basic features that Old World monkeys, apes, and humans share today. Scientists believe, therefore, that Propliopithecus resembles the common ancestor of all later Old World monkeys and apes. Thus, Propliopithecus may also be considered an ancestor, or a close relative of an ancestor, of humans. The first hominoids, the group containing apes and humans, evolved during the Miocene Epoch (twenty-four million to five million years ago). Among the oldest known hominoids is a group of primates known by its genus name, Proconsul. Species of Proconsul had features that suggest a close link to the common ancestor of apes and humans, for example, the lack of a tail. The species Proconsul heseloni lived in the trees of dense forests in eastern Africa about twenty million years ago. An agile climber, it had the flexible backbone and narrow chest characteristic of monkeys, but also a wide range of movement in the hip and thumb, traits characteristic of apes and humans.

Large ape species had originated in Africa by twenty-three million or twenty-two million years ago. By fifteen million years ago, some of these species had migrated to Asia and Europe over a land bridge formed between the African-Arabian and Eurasian landmasses, which had previously been separated.

Early in their evolution, the large apes underwent several radiations, periods when new and diverse species branched off from common ancestors. Following Proconsul, the ape genus Afropithecus evolved about eighteen million years ago in Arabia and Africa and diversified into several species. Soon afterward, three other ape genera evolved: Griphopithecus of western Asia about 16.5 million years ago, the earliest ape to have spread from Africa; Kenyapithecus of Africa about fifteen million years ago; and Dryopithecus of Europe about twelve million years ago. Scientists have not yet determined which of these groups of apes may have given rise to the common ancestor of modern African apes and humans.

Scientists do not all agree about the appropriate classification of hominoids. They group the living hominoids into either two or three families: Hylobatidae, Hominidae, and sometimes Pongidae. Hylobatidae consists of the small or so-called lesser apes of Southeast Asia, commonly known as gibbons and siamangs. The Hominidae (hominids) includes humans and, according to some scientists, the great apes. For those who include only humans in the Hominidae, all of the great apes, including the orangutans of Southeast Asia, belong to the family Pongidae.

In the past only humans were considered to belong to the family Hominidae, and the term hominid referred only to species of humans. Today, however, genetic studies support placing all of the great apes and humans together in this family, and placing the African apes, chimpanzees and gorillas, together with humans at an even lower level, or subfamily.

According to this reasoning, the evolutionary branch of Asian apes leading to orangutans, which separated from the other hominid branches nearly thirteen million years ago, belongs to the subfamily Ponginae. The ancestral and living representatives of the African ape and human branches together belong to the subfamily Homininae (sometimes called Hominines). Lastly, the line of early and modern humans belongs to the tribe (classificatory level above genus) Hominini, or hominins.
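The hierarchy described in the preceding paragraphs can be summarized in a small data structure. This is only a sketch of one common arrangement, using the names given in the text (with the great apes placed inside Hominidae, as the genetic evidence favours); the variable name and the leaf labels are illustrative.

    # Hominoid classification as described above, from superfamily down
    # to tribe; leaf values list representative members.
    HOMINOIDEA = {
        "Hylobatidae": ["gibbons", "siamangs"],            # lesser apes
        "Hominidae": {
            "Ponginae": ["orangutans"],                    # Asian ape branch
            "Homininae": {
                "African apes": ["chimpanzees", "gorillas"],
                "Hominini": ["early and modern humans"],   # the hominins
            },
        },
    }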

This order of classification corresponds with the genetic relationships between ape and human species. It groups humans and the African apes together at the same level at which scientists group together, for example, all types of foxes, all buffalo, or all flying squirrels. Within each of these groups, the species are very closely related. However, in the classification of apes and humans, the similarity of the terms hominoid, hominid, hominine, and hominin can be confusing. In this context, the term early human refers to all species of the human family tree since the divergence from a common ancestor with the African apes. Popular writing often still uses the term hominid to mean the same thing.

About 98.5 percent of the genes in people and chimpanzees are identical, making chimps the closest living biological relatives of humans. This does not mean that humans evolved from chimpanzees, but it does indicate that both species evolved from a common ape ancestor. Orangutans, the great apes of Southeast Asia, differ much more from humans genetically, indicating a more distant evolutionary relationship.

Modern humans have several physical characteristics reflective of an ape ancestry. For instance, people have shoulders with a wide range of movement and fingers capable of strong grasping. In apes, these characteristics are highly developed as adaptations for brachiation, swinging from branch to branch in trees. Although humans do not brachiate, the general anatomy from that earlier adaptation remains. Both people and apes also have larger brains and greater cognitive abilities than do most other mammals.

Human social life, too, shares similarities with that of African apes and other primates, such as baboons and rhesus monkeys, that live in large and complex social groups. Group behaviour among chimpanzees, in particular, strongly resembles that of humans. For instance, chimps form long-lasting attachments with each other; participate in social bonding activities, such as grooming, feeding, and hunting; and form strategic coalitions with each other in order to increase their status and power. Early humans also probably had this kind of elaborate social life.

Nevertheless, modern humans fundamentally differ from apes in many significant ways. For example, as intelligent as apes are, people's brains are much larger and more complex, and people have a unique intellectual capacity and elaborate forms of culture and communication. In addition, only people habitually walk upright, can precisely manipulate very small objects, and have a throat structure that makes speech possible.

By around six million years ago in Africa, an apelike species had evolved with two important traits that distinguished it from apes: (1) small canine, or eye, teeth (teeth next to the four incisors, or front teeth) and (2) bipedalism, that is, walking on two legs as the primary form of locomotion. Scientists refer to these earliest human species as australopithecines, or Australopiths for short. The earliest Australopiths species known today belong to three genera: Sahelanthropus, Orrorin, and Ardipithecus. Other species belong to the genus Australopithecus and, by some classifications, Paranthropus. The name australopithecine translates literally as ‘southern ape', in reference to South Africa, where the first known Australopiths fossils were found.

The Great Rift Valley, a region in eastern Africa in which past movements in Earth's crust have exposed ancient deposits of fossils, has become famous for its Australopiths finds. Countries in which scientists have found Australopiths fossils include Ethiopia, Tanzania, Kenya, South Africa, and Chad. Thus, Australopiths ranged widely over the African continent.

Fossils from several different early Australopiths species that lived between four million and two million years ago clearly show a variety of adaptations that mark the transition from ape to human. The very early period of this transition, before four million years ago, remains poorly documented in the fossil record, but those fossils that do exist show the most primitive combinations of ape and human features.

Fossils reveal much about the physical build and activities of early Australopiths, but not everything about outward physical features such as the colour and texture of skin and hair, or about certain behaviours, such as methods of obtaining food or patterns of social interaction. For these reasons, scientists study the living great apes, particularly the African apes, to understand better how early Australopiths might have looked and behaved, and how the transition from ape to human might have occurred. For example, Australopiths probably resembled the great apes in characteristics such as the shape of the face and the hair on the body. Australopiths also had brains roughly equal in size to those of the great apes, so they probably had apelike mental abilities. Their social life probably resembled that of chimpanzees.

Most of the distinctly human physical qualities in Australopiths related to their bipedal stance. Before Australopiths, no mammal had ever evolved an anatomy for habitual upright walking. Australopiths also had small canine teeth, as compared with long canines found in most other catarrhine primates.

Other characteristics of Australopiths reflected their ape ancestry. They had a low cranium behind a projecting face, and a brain size of 390 to 550 cu. cm. (24 to thirty-four cu. in.), within the range of ape brains. The body weight of Australopiths, as estimated from their bones, ranged from twenty-seven to 49 kg. (sixty to 108 lb.), and they stood 1.1 to 1.5 m. (3.5 to 5 ft.) tall. Their weight and height compare closely to those of chimpanzees (chimp height measured standing). Some Australopiths species had a large degree of sexual dimorphism (males were much larger than females), a trait also found in gorillas, orangutans, and other primates.

Australopiths also had curved fingers and long thumbs with a wide range of movement. In comparison, the fingers of apes are longer, more powerful, and more curved, making them extremely well adapted for hanging and swinging from branches. Apes also have very short thumbs, which limits their ability to manipulate small objects. Paleoanthropologists speculate about whether the long and dexterous thumbs of Australopiths allowed them to use tools more efficiently than do apes.

The anatomy of Australopiths shows several adaptations for bipedalism, in both the upper and lower body. Adaptations in the lower body included the following: The Australopiths ilium, or pelvic bone, which rises above the hip joint, was much shorter and broader than it is in apes. This shape enabled the hip muscles to steady the body during each step. The Australopiths pelvis also had a bowl-like shape, which supported the internal organs in an upright stance. The upper legs angled inward from the hip joints, which positioned the knees better to support the body during upright walking. The legs of apes, on the other hand, are positioned almost straight down from the hip, so that when an ape walks upright for a short distance, its body sways from side to side. Australopiths also had shorter and less flexible toes than do apes. The toes worked as rigid levers for pushing off the ground during each bipedal step.

Other adaptations occurred above the pelvis. The Australopiths spine had an S-shaped curve, which shortened the overall length of the torso and gave it rigidity and balance when standing. By contrast, apes have a straight spine. The Australopiths skull also had an important adaptation related to bipedalism. The opening at the bottom of the skull through which the spinal cord attaches to the brain, called the foramen magnum, was positioned more forward than it is in apes. This position set the head in balance over the upright spine.

Australopiths clearly walked upright on the ground, but paleoanthropologists debate whether the earliest humans also spent a significant amount of time in the trees. Certain physical features indicate that they spent at least some of their time climbing in trees. Such features included their curved and elongated fingers and elongated arms. However, their fingers, unlike those of apes, may not have been long enough to allow them to brachiate through the treetops. Study of fossil wrist bones suggests that early Australopiths could lock their wrists, preventing backward bending at the wrist when the body weight was placed on the knuckles of the hand. This could mean that the earliest bipeds had an ancestor that walked on its knuckles, as African apes do.

Compared with apes, humans have very small canine teeth. Apes-particularly males-have thick, projecting, sharp canines that they use for displays of aggression and as weapons to defend themselves. The oldest known bipeds, who lived at least six million years ago, still had large canines by human standards, though not as large as in apes. By four million years ago Australopiths had developed the human characteristic of having smaller, flatter canines. Canine reduction might have related to an increase in social cooperation between humans and an accompanying decrease in the need for males to make aggressive displays.

The Australopiths can be divided into an early group of species, known as gracile Australopiths, which arose before three million years ago, and a later group, known as robust Australopiths, which evolved after three million years ago. The gracile Australopiths, of which several species evolved between 4.5 million and three million years ago, generally had smaller teeth and jaws. The later-evolving robusts had larger faces with large jaws and molars (cheek teeth). These traits indicate powerful and prolonged chewing of food, and analyses of wear on the chewing surfaces of robust Australopiths molar teeth support this idea. Some fossils of early Australopiths have features resembling those of the later species, suggesting that the robusts evolved from one or more gracile ancestors.

Paleoanthropologists recognize at least eight species of early Australopiths. These include the three earliest established species, which belong to the genera Sahelanthropus, Orrorin, and Ardipithecus; a species of the genus Kenyanthropus; and four species of the genus Australopithecus.

The oldest known Australopiths species is Sahelanthropus tchadensis. Fossils of this species were first discovered in 2001 in northern Chad, Central Africa, by a research team led by French paleontologist Michel Brunet. The researchers estimated the fossils to be between seven million and six million years old. One of the fossils is a fractured but nearly complete cranium that shows a combination of apelike and humanlike features. Apelike features include small brain size, an elongated braincase, and areas of bone where strong neck muscles would have attached. Humanlike features include small, flat canine teeth, a short middle part of the face, and a massive brow ridge (a bony, protruding ridge above the eyes) similar to that of later human fossils. The opening where the spinal cord attaches to the brain is tucked under the braincase, which suggests that the head was balanced on an upright body. It is not certain that Sahelanthropus walked bipedally, however, because bones from the rest of its skeleton have yet to be discovered. Nonetheless, its age and humanlike characteristics suggest that the human and African ape lineages had diverged from one another by at least six million years ago.

In addition to reigniting debate about human origins, the discovery of Sahelanthropus in Chad significantly expanded the known geographic range of the earliest humans. The Great Rift Valley and South Africa, from which most other discoveries of early human fossils came, are apparently not the only regions of the continent that preserve the oldest clues of human evolution.

Orrorin tugenensis lived about six million years ago. This species was discovered in 2000 by a research team led by French paleontologist Brigitte Senut and French geologist Martin Pickford in the Tugen Hills region of central Kenya. The researchers found more than a dozen early human fossils dating between 6.2 million and six million years old. Among the finds were two thighbones that possess a groove indicative of an upright stance and bipedal walking. Although the finds are still being studied, the researchers consider these thighbones to be the oldest evidence of habitual two-legged walking. Fossilized bones from other parts of the skeleton show apelike features, including long, curved finger bones useful for strong grasping and movement through trees, and apelike canine and premolar teeth. Because of this distinctive combination of ape and human traits, the researchers gave a new genus and species name to these fossils, Orrorin tugenensis, which in the local language means ‘original man in the Tugen region'. The age of these fossils suggests that the divergence of humans from our common ancestor with chimpanzees occurred before six million years ago.

In 1994 an Ethiopian member of a research team led by American paleoanthropologist Tim White discovered human fossils estimated to be about 4.4 million years old. White and his colleagues gave their discovery the name Ardipithecus ramidus. Ramid means ‘root' in the Afar language of Ethiopia and refers to the closeness of this new species to the roots of humanity. At the time of the discovery, the genus Australopithecus was scientifically well established. White devised the genus name Ardipithecus to distinguish this new species from other Australopiths because its fossils had a very ancient combination of apelike and humanlike traits. More recent finds indicate that this species may have lived as early as 5.8 million to 5.2 million years ago.

The teeth of Ardipithecus ramidus had a thin outer layer of enamel, a trait also seen in the African apes but not in other Australopiths species or older fossil apes. This trait suggests a close relationship with an ancestor of the African apes. In addition, the skeleton shows strong similarities to that of a chimpanzee but has slightly reduced canine teeth and adaptations for bipedalism.

In 1965 a research team from Harvard University discovered a single arm bone of an early human at the site of Kanapoi in northern Kenya. The researchers estimated this bone to be four million years old, but could not identify the species to which it belonged, and they did not return at the time to look for related fossils. It was not until 1994 that a research team, led by British-born Kenyan paleoanthropologist Meave Leakey, found numerous teeth and fragments of bone at the site that could be linked to the previously discovered fossil. Leakey and her colleagues determined that the fossils were those of a very primitive species of Australopiths, which was given the name Australopithecus anamensis. Researchers have since found other A. anamensis fossils at nearby sites, dating between about 4.2 million and 3.9 million years old. The skull of this species appears apelike, while its enlarged tibia (lower leg bone) indicates that it supported its full body weight on one leg at a time, as in regular bipedal walking.

Australopithecus anamensis was quite similar to another, much better-known species, A. afarensis, a gracile Australopiths that thrived in eastern Africa between about 3.9 million and three million years ago. The most celebrated fossil of this species, known as Lucy, is a partial skeleton of a female discovered by American paleoanthropologist Donald Johanson in 1974 at Hadar, Ethiopia. Lucy lived 3.2 million years ago. Scientists have identified several hundred fossils of A. afarensis from Hadar, including a collection representing at least thirteen individuals of both sexes and various ages, all from a single site.

Researchers working in northern Tanzania have also found fossilized bones of A. afarensis at Laetoli. This site, dated at 3.6 million years old, is best known for its spectacular trails of bipedal human footprints. Preserved in hardened volcanic ash, these footprints were discovered in 1978 by a research team led by British paleoanthropologist Mary Leakey. They provide irrefutable evidence that Australopiths regularly walked bipedally.

Paleoanthropologists have debated interpretations of the characteristics of A. afarensis and its place in the human family tree. One controversy centres on the Laetoli footprints, which some scientists believe show that the foot anatomy and gait of A. afarensis did not exactly match those of modern humans. This observation may indicate that early Australopiths did not live primarily on the ground, or at least spent a significant amount of time in the trees. The skeleton of Lucy also indicates that A. afarensis had longer, more powerful arms than most later human species, suggesting that this species was adept at climbing trees. Another controversy relates to the scientific classification of the A. afarensis fossils. Compared with Lucy, who stood only 1.1 m. (3.5 ft.) tall, other fossils identified as A. afarensis from Hadar and Laetoli came from individuals who stood up to 1.5 m. (5 ft.) tall. This great difference in size leads some scientists to suggest that the set of fossils now classified as A. afarensis represents two species. Most scientists, however, believe the fossils represent one highly dimorphic species, that is, a species that has two distinct forms (in this case, two sizes). Supporters of this view note that large (presumably male) and small (presumably female) adults occur together at a single site at Hadar.

A third controversy arises from the claim that A. afarensis was the common ancestor of both later Australopiths and the modern human genus, Homo. While this idea remains a strong possibility, the similarity between this and another Australopiths species-one from southern Africa, named Australopithecus africanus-makes it difficult to decide which of the two species led to the genus Homo.

Australopithecus africanus thrived in the Transvaal region of what is now South Africa between about 3.3 million and 2.5 million years ago. Australian-born anatomist Raymond Dart discovered this species, the first known Australopiths, in 1924 at Taung, South Africa. The specimen, that of a young child, became known as the Taung Child. For decades after this discovery, almost no one in the scientific community believed Dart's claim that the skull came from an ancestral human. In the late 1930s teams led by Scottish-born South African paleontologist Robert Broom unearthed many more A. africanus skulls and other bones from the Transvaal site of Sterkfontein.

A. africanus generally had a more globular braincase and less primitive-looking face and teeth than did A. afarensis. Thus, some scientists consider this southern species of early Australopiths to be a likely ancestor of the genus Homo. According to other scientists, however, certain heavily built facial and cranial features of A. africanus from Sterkfontein identify it as an ancestor of the robust Australopiths that lived later in the same region. In 1998 a research team led by South African paleoanthropologist Ronald Clarke discovered an almost complete early Australopiths skeleton at Sterkfontein. This important find may resolve some of the questions about where A. africanus fits in the story of human evolution.

Working in the Lake Turkana region of northern Kenya, a research team led by paleontologist Meave Leakey uncovered in 1999 a cranium and other bone remains of an early human that showed a mixture of features unseen in previous discoveries of early human fossils. The remains were estimated to be 3.5 million years old, and the cranium's small braincase and earhole were similar to those of the earliest humans. Its cheekbone, however, joined the rest of the face in a forward position, and the region beneath the nose opening was flat. These are traits found in later human fossils from around two million years ago, typically those classified in the genus Homo. Noting this unusual combination of traits, the researchers named a new genus and species, Kenyanthropus platyops, or ‘flat-faced human from Kenya.' Before this discovery, it seemed that only a single early human species, Australopithecus afarensis, lived in East Africa between four million and three million years ago. Yet Kenyanthropus indicates that a diversity of species, including a more humanlike lineage than A. afarensis, lived in this period, just as in most other eras in human prehistory.

The human fossil record is poorly known between three million and two million years ago, which makes recent finds from the site of Bouri, Ethiopia, particularly important. From 1996 to 1998, a research team led by Ethiopian paleontologist Berhane Asfaw and American paleontologist Tim White found the skull and other skeletal remains of an early human specimen about 2.5 million years old. The researchers named it Australopithecus garhi; the word garhi means ‘surprise' in the Afar language. The specimen is unique in having large incisors and molars in combination with an elongated forearm and thighbone. Its powerful arm bones suggest a tree-living ancestry, but its longer legs indicate the ability to walk upright on the ground. Fossils of A. garhi are associated with some of the oldest known stone tools, along with animal bones that were cut and cracked with tools. It is possible, then, that this species was among the first to make the transition to stone toolmaking and to eating meat and bone marrow from large animals.

By 2.7 million years ago the later, robust Australopiths had evolved. These species had what scientists refer to as megadont cheek teeth: wide molars and premolars coated with thick enamel. Their incisors, by contrast, were small. The robusts also had an expanded, flattened, and more vertical face than did the gracile Australopiths. This face shape helped to absorb the stresses of strong chewing. On the top of the head, robust Australopiths had a sagittal crest (a ridge of bone along the top of the skull from front to back) to which thick jaw muscles attached. The zygomatic arches (which extend back from the cheekbones to the ears) curved out wide from the side of the face and cranium, forming very large openings for the massive chewing muscles to pass through near their attachment to the lower jaw. Together, these traits indicate that the robust Australopiths chewed their food powerfully and for long periods.

Other ancient animal species that specialized in eating plants, such as some types of wild pigs, had similar adaptations in their facial, dental, and cranial anatomy. Thus, scientists think that the robust Australopiths had a diet consisting partly of tough, fibrous plant foods, such as seed pods and underground tubers. Analyses of microscopic wear on the teeth of some robust Australopith specimens appear to support the idea of a vegetarian diet, although chemical studies of fossils suggest that the southern robust species may also have eaten meat.

Scientists originally used the word robust to refer to the late Australopiths out of the belief that they had much larger bodies than did the early, gracile Australopiths. However, further research has revealed that the robust Australopiths stood about the same height and weighed roughly the same amount as Australopithecus afarensis and A. africanus.

The earliest known robust species, Australopithecus aethiopicus, lived in eastern Africa by 2.7 million years ago. In 1985 at West Turkana, Kenya, American paleoanthropologist Alan Walker discovered a 2.5-million-year-old fossil skull that helped to define this species. It became known as the ‘black skull' because of the colour it had absorbed from minerals in the ground. The skull had a tall sagittal crest toward the back of its cranium and a face that projected far outward from the forehead. A. aethiopicus shared some primitive features with A. afarensis, that is, features that originated in the earlier East African Australopiths. This may indicate that A. aethiopicus evolved from A. afarensis.

Australopithecus boisei, the other well-known East African robust Australopith, lived over a long period, between about 2.3 million and 1.2 million years ago. In 1959 Mary Leakey discovered the original fossil of this species, a nearly complete skull, at the site of Olduvai Gorge in Tanzania. Kenyan-born paleoanthropologist Louis Leakey, husband of Mary, originally named the new species Zinjanthropus boisei (Zinjanthropus translates as ‘East African man'). This skull, dating from 1.8 million years ago, has the most specialized features of all the robust species: it could withstand extreme chewing forces, and its molars were four times the size of those in modern humans. Since the discovery of Zinjanthropus, now recognized as an Australopith, scientists have found many A. boisei fossils in Tanzania, Kenya, and Ethiopia.

The southern robust species, called Australopithecus robustus, lived between about 1.8 million and 1.3 million years ago in the Transvaal, the same region that was home to A. africanus. In 1938 Robert Broom, who had found many A. africanus fossils, bought a fossil jaw and molar that looked distinctly different from those of A. africanus. After finding the site of Kromdraai, from which the fossil had come, Broom collected many more bones and teeth that together convinced him to name a new species, which he called Paranthropus robustus (Paranthropus meaning ‘beside man'). Later scientists dated these fossils at about 1.5 million years old. In the late 1940's and 1950's, Broom discovered many more fossils of this species at the Transvaal site of Swartkrans.

Many scientists believe that the robust Australopiths represent a distinct evolutionary group of early humans because these species share features associated with heavy chewing. According to this view, Australopithecus aethiopicus diverged from other Australopiths and later gave rise to A. boisei and A. robustus. Paleoanthropologists who strongly support this view think that the robusts should be classified in the genus Paranthropus, the original name given to the southern species. Thus, these three species are sometimes called P. aethiopicus, P. boisei, and P. robustus.

Other paleoanthropologists believe that the eastern robust species, A. aethiopicus and A. boisei, may have evolved from an early Australopith of the same region, perhaps A. afarensis. According to this view, A. africanus gave rise only to the southern species, A. robustus. Scientists refer to such a case, in which two or more independent species evolve similar characteristics in different places or at different times, as parallel evolution. If parallel evolution occurred in the Australopiths, the robust species would make up two separate branches of the human family tree.

The last robust Australopiths died out about 1.2 million years ago. At about this time, climate patterns around the world entered a period of fluctuation, and these changes may have reduced the food supply on which the robusts depended. Interaction with larger-brained members of the genus Homo, such as Homo erectus, may also have contributed to the decline of the late Australopiths, although no compelling evidence exists of such direct contact. Competition with several other species of plant-eating monkeys and pigs, which thrived in Africa at the time, may have been an even more important factor. Nevertheless, the reasons why the robust Australopiths became extinct after flourishing for such a long time are not yet known for certain.

Scientists have several ideas about why Australopiths first split from the apes, initiating the course of human evolution. Nearly all hypotheses suggest that environmental change was an important factor, specifically in influencing the evolution of bipedalism. Well-established ideas about why humans first evolved include (1) the savanna hypothesis, (2) the woodland-mosaic hypothesis, and (3) the variability hypothesis.

The global climate cooled and became drier between eight million and five million years ago, near the end of the Miocene Epoch. According to the savanna hypothesis, this climate change broke up and reduced the area of African forests. As the forests shrank, an ape population in eastern Africa became separated from other populations of apes in the more heavily forested areas of western Africa. The eastern population had to adapt to its drier environment, which contained larger areas of grassy savanna.

The expansion of dry terrain favoured the evolution of terrestrial living and made it more difficult to survive by living in trees. Terrestrial apes might have formed large social groups in order to improve their ability to find and collect food and to fend off predators, activities that also may have required the ability to communicate well. The challenges of savanna life might also have promoted the rise of tool use, for purposes such as scavenging meat from the kills of predators. These important evolutionary changes would have depended on increased mental abilities and, therefore, may have correlated with the development of larger brains in early humans.

Critics of the savanna hypothesis argue against it on several grounds, but particularly for two reasons. First, discoveries by a French scientific team of Australopith fossils in Chad, in Central Africa, suggest that the environments of East Africa may not have been fully separated from those farther west. Second, recent research suggests that open savannas were not prominent in Africa until sometime after two million years ago.

Criticism of the savanna hypothesis has spawned alternative ideas about early human evolution. The woodland-mosaic hypothesis proposes that the early Australopiths evolved in patchily wooded areas, a mosaic of woodland and grassland, that offered opportunities for feeding both on the ground and in the trees, and that ground feeding favoured bipedalism.

The variability hypothesis suggests that early Australopiths experienced many changes in environment and ended up living in a range of habitats, including forests, open-canopy woodlands, and savannas. In response, their populations became adapted to a variety of surroundings. Scientists have found that this range of habitats existed at the time when the early Australopiths evolved. So the development of new anatomical characteristics, particularly bipedalism, combined with an ability to climb trees, may have given early humans the versatility to live in a variety of habitats.

Scientists also have many ideas about which benefits of bipedalism may have influenced its evolution. Suggested benefits of regular bipedalism include that it freed the hands, making it easier to carry food and tools; allowed early humans to see over tall grass to watch for predators; reduced the body's exposure to the hot sun and increased its exposure to cooling winds; improved the ability to hunt or use weapons, which became easier with an upright posture; and made extensive feeding from bushes and low branches easier than it would have been for a quadruped. Scientists do not overwhelmingly support any one of these ideas. Recent studies of chimpanzees suggest, though, that the ability to feed more easily might have particular relevance. Chimps stand on two legs most often when they feed from the ground on the leaves and fruits of bushes and low branches. Chimps cannot, however, walk in this way over long distances.

Bipedalism in early humans would have enabled them to travel efficiently over long distances, giving them an advantage over quadrupedal apes in moving across barren open terrain between groves of trees. In addition, the earliest humans retained from their ape ancestry the ability to escape into the trees to avoid predators. The benefits of both bipedalism and agility in the trees may explain the unique anatomy of the Australopiths: their long, powerful arms and curved fingers probably made them good climbers, while their pelvis and lower limb structure were reshaped for upright walking. Modern people belong to the genus Homo, which first evolved at least 2.3 million to 2.5 million years ago. The earliest members of this genus differed from the Australopiths in at least one important respect: they had larger brains than did their predecessors.

The evolution of the modern human genus can be divided roughly into three periods: early, middle, and late. Species of early Homo resembled gracile Australopiths in many ways, and some lived until possibly 1.6 million years ago. The period of middle Homo began perhaps between two million and 1.8 million years ago, overlapping with the end of early Homo. Species of middle Homo evolved an anatomy much more similar to that of modern humans but had comparatively small brains. The transition from middle to late Homo probably occurred sometime around 200,000 years ago. Species of late Homo evolved large and complex brains and eventually language. Culture also became an increasingly important part of human life during the most recent period of evolution.

The origin of the genus Homo has long intrigued paleoanthropologists and prompted much debate. One of several known species of Australopith, or one not yet discovered, could have given rise to the first species of Homo. Scientists also do not know exactly what factors favoured the evolution of a larger and more complex brain, the defining physical trait of modern humans.

Louis Leakey originally argued that the origin of Homo related directly to the development of toolmaking, specifically, the making of stone tools. Toolmaking requires certain mental skills and fine hand manipulation that may exist only in members of our own genus. The name Homo habilis (meaning ‘handy man') refers directly to the making and use of tools.

However, several species of Australopith lived at the same time as early Homo, making it unclear which species produced the earliest stone tools. Recent studies of Australopith hand bones have suggested that at least one robust species, Australopithecus robustus, could have made tools. In addition, during the 1960's and 1970's researchers first observed that some nonhuman primates, such as chimpanzees, make and use tools, suggesting that Australopiths and the apes that preceded them probably also made some kinds of tools.

Scientists began to notice a high degree of variability in body size as they discovered more early Homo fossils. This could have indicated that H. habilis had a large amount of sexual dimorphism. For instance, a female skeleton from Olduvai was dwarfed in comparison with other fossils, such as a sizable early Homo cranium from East Turkana in northern Kenya. However, the differences in size exceeded those expected between males and females of the same species, and this finding later helped convince scientists that another species of early Homo had lived in eastern Africa.

This second species of early Homo was given the name Homo rudolfensis, after Lake Rudolf (now Lake Turkana). The best-known fossils of H. rudolfensis come from the area surrounding this lake and date from about 1.9 million years ago. Paleoanthropologists have not determined the entire time range during which H. rudolfensis may have lived.

This species had a larger face and body than did H. habilis. The cranial capacity of H. rudolfensis averaged about 750 cu. cm. (46 cu. in.). Scientists need more evidence to know whether the brain of H. rudolfensis in relation to its body size was larger than that proportion in H. habilis. A larger brain-to-body-size ratio can indicate increased mental abilities. H. rudolfensis also had large teeth, approaching the size of those in robust Australopiths. The discovery of even a partial fossil skeleton would reveal whether this larger form of early Homo had apelike or more modern body proportions. Scientists have found several modern-looking thighbones that date from between two million and 1.8 million years ago and may belong to H. rudolfensis. These bones suggest a body size of 1.5 m. (5 ft.) and 52 kg. (114 lb.).
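
To make the ratio comparison concrete, the brain-to-body-size figure is simply cranial capacity divided by body mass. The following is a minimal sketch in which the H. rudolfensis numbers come from this article, while the H. habilis numbers are assumed placeholders (no such figures are given here):

```python
# Minimal sketch of a brain-to-body-size comparison.
# The H. rudolfensis values (750 cu. cm., 52 kg.) come from the text;
# the H. habilis values are ASSUMED for illustration only.

def brain_body_ratio(brain_cc: float, body_kg: float) -> float:
    """Cranial capacity (cu. cm.) per kilogram of body mass."""
    return brain_cc / body_kg

print(f"H. rudolfensis: {brain_body_ratio(750, 52):.1f} cc/kg")        # ~14.4
print(f"H. habilis (assumed): {brain_body_ratio(600, 37):.1f} cc/kg")  # ~16.2
```

As the placeholder numbers illustrate, a larger absolute brain does not guarantee a larger ratio, which is why a fossil skeleton that reveals body size matters so much here.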

By about 1.9 million years ago, the period of middle Homo had begun in Africa. Until recently, paleoanthropologists recognized one species in this period, Homo erectus. Many now recognize three species of middle Homo: Homo ergaster, Homo erectus, and Homo heidelbergensis. However, some still think that H. ergaster is an early African form of H. erectus, or that H. heidelbergensis is a late form of H. erectus.

The skulls and teeth of early African populations of middle Homo differed subtly from those of later H. erectus populations from China and the island of Java in Indonesia. H. ergaster makes a better candidate for an ancestor of the modern human line because Asian H. erectus has some specialized features not seen in later humans, including our own species. H. heidelbergensis has similarities to both H. erectus and the later species H. neanderthalensis, and it may have been a transitional species between middle Homo and the line to which modern humans belong.

Homo ergaster probably first evolved in Africa around two million years ago. This species had a rounded cranium with a brain size of between 700 and 850 cu. cm. (43 to 52 cu. in.), a prominent brow ridge, small teeth, and many other features that it shared with the later H. erectus. Many paleoanthropologists consider H. ergaster a good candidate for an ancestor of modern humans because it had several modern skull features, including thin cranial bones. Most H. ergaster fossils come from the time range of 1.8 million to 1.5 million years ago.

The most important fossil of this species yet found is a nearly complete skeleton of a young male from West Turkana, Kenya, which dates from about 1.55 million years ago. Scientists determined the sex of the skeleton from the shape of its pelvis. They also determined from patterns of tooth eruption and bone growth that the boy had died when he was between nine and twelve years old. The oldest humanlike fossils found outside Africa have also been classified as H. ergaster and date from about 1.75 million years ago. These finds, from the Dmanisi site in the southern Caucasus Mountains of Georgia, consist of several crania, jaws, and other fossilized bones. Some of these are strikingly like East African H. ergaster, but others are smaller or larger, suggesting a high degree of variation within a single population.

H. ergaster, H. rudolfensis, and H. habilis, in addition to possibly two robust Australopith species, all might have coexisted in Africa around 1.9 million years ago. This finding goes against a traditional paleoanthropological view that human evolution consisted of a single line that evolved progressively over time: an Australopith species followed by early Homo, then middle Homo, and finally H. sapiens. It appears instead that periods of species diversity and extinction have been common during human evolution, and that modern H. sapiens has the rare distinction of being the only human species alive today.

Although H. ergaster appears to have coexisted with several other human species, the species probably did not interbreed. Mating rarely succeeds between two species with significant skeletal differences, such as H. ergaster and H. habilis. Many paleoanthropologists now believe that H. ergaster descended from an earlier population of Homo, perhaps one of the two known species of early Homo, and that the modern human line descended from H. ergaster.

Paleoanthropologists now know that humans first evolved in Africa and lived only on that continent for a few million years. The earliest human species known to have spread in large numbers beyond the African continent was first discovered in Southeast Asia. In 1891 Dutch physician Eugene Dubois found the cranium of an early human on the Indonesian island of Java. He named this early human Pithecanthropus erectus, or ‘erect ape-man.' Today paleoanthropologists call this species Homo erectus.

H. erectus appears to have evolved in Africa from earlier populations of H. ergaster, and then spread to Asia sometime between 1.8 million and 1.5 million years ago. The youngest known fossils of this species, from the Solo River in Java, may date from about 50,000 years ago (although that dating is controversial). H. erectus was thus a very successful species: both widespread, having lived in Africa and much of Asia, and long-lived, having survived for possibly more than 1.5 million years.

H. erectus had a low and rounded braincase that was elongated from front to back, a prominent brow ridge, and an adult cranial capacity of 800 to 1,250 cu. cm. (49 to 76 cu. in.), on average twice that of the Australopiths. Its bones, including the cranium, were thicker than those of earlier species. Prominent muscle markings and thick, reinforced areas on the bones of H. erectus indicate that its body could withstand powerful movements and stresses. Although it had much smaller teeth than did the Australopiths, it had a heavy and strong jaw.

In the 1920's and 1930's German anatomist and physical anthropologist Franz Weidenreich excavated the most famous collection of H. erectus fossils from a cave at the site of Zhoukoudian (Chou-k'ou-tien), China, near Beijing (Peking). Scientists dubbed these fossil humans Sinanthropus pekinensis, or Peking Man, but they were later reclassified as H. erectus. The Zhoukoudian cave yielded the fragmentary remains of more than 30 individuals, ranging from about 500,000 to 250,000 years old. These fossils were lost near the outbreak of World War II, but Weidenreich had made excellent casts of his finds. Further studies at the cave site have yielded more H. erectus remains.

Other important fossil sites for this species in China include Lantian, Yuanmou, Yunxian, and Hexian. Researchers have also recovered many tools made by H. erectus in China at sites such as Nihewan and Bose, and other sites of similar age (at least one million to 250,000 years old).

Ever since the discovery of H. erectus, scientists have debated whether this species was a direct ancestor of later humans, including H. sapiens. The last populations of H. erectus, such as those from the Solo River in Java, may have lived as recently as 50,000 years ago, as did populations of H. sapiens. Modern humans could not have evolved from these late populations of H. erectus, a much more primitive type of human. However, earlier East Asian populations could have given rise to Homo sapiens.

Many paleoanthropologists believe that early humans migrated into Europe by 800,000 years ago, and that these populations were not Homo erectus. Most scientists refer to these early migrants into Europe, who predated both Neanderthals and H. sapiens in the region, as H. heidelbergensis. The species name comes from a 500,000-year-old jaw found near Heidelberg, Germany.

Scientists have found few human fossils in Africa for the period between 1.2 million and 600,000 years ago, during which H. heidelbergensis or its ancestors first migrated into Europe. Populations of H. ergaster (or possibly H. erectus) appear to have lived until at least 800,000 years ago in Africa, and possibly until 500,000 years ago in northern Africa. When these populations disappeared, other massive-boned and larger-brained humans, possibly H. heidelbergensis, appear to have replaced them. Scientists have found fossils of these stockier humans at sites including Bodo, Ethiopia; Saldanha (also known as Elandsfontein), South Africa; Ndutu, Tanzania; and Kabwe, Zambia.

Scientists have come up with at least three different interpretations of these African fossils. Some scientists place the fossils in the species H. heidelbergensis and think that this species led to both the Neanderthals (in Europe) and H. sapiens (in Africa). Others think that the European and African fossils belong to two distinct species, and that the African population, which in this view was not H. heidelbergensis but a separate species, produced Homo sapiens. Yet other scientists advocate the long-held view that H. erectus and H. sapiens belong to a single evolving lineage, and that the African fossils belong in the category of archaic H. sapiens (archaic meaning not fully anatomically modern).

The fossil evidence does not clearly favour any of these three interpretations over another. Several fossils from Asia, Africa, and Europe have features that are intermediate between early H. ergaster and H. sapiens. This kind of variation makes it hard to decide how to identify distinct species and to determine which group of fossils represents the most likely ancestor of later humans.

Scientists once thought that advances in stone tools could have enabled early humans such as Homo erectus to move into Asia and Europe, perhaps by helping them to obtain new kinds of food, such as the meat of large mammals. If African human populations had developed tools that allowed them to hunt large game effectively, they would have had a good source of food wherever they went. In this view, a unique cultural adaptation allowed humans first to migrate into Eurasia.

By 1.5 million years ago, early humans had begun to make new kinds of tools, which scientists call Acheulean. Common Acheulean tools included large hand axes and cleavers. While these new tools might have helped early humans to hunt, the first known Acheulean tools in Africa date from later than the earliest known human presence in Asia. Also, most East Asian sites more than 200,000 years old contain only simply shaped cobble and flake tools. In contrast, Acheulean tools were more finely crafted, larger, and more symmetrical. Thus, the earliest settlers of Eurasia did not have a true Acheulean technology, and advances in toolmaking alone cannot explain the spread out of Africa.

Another possibility is that the early spread of humans to Eurasia was not unique, but part of a wider migration of meat-eating animals, such as lions and hyenas. The human migration out of Africa occurred during the early part of the Pleistocene Epoch, between 1.8 million and 780,000 years ago. Many African carnivores spread to Eurasia during the early Pleistocene, and humans could have moved along with them. In this view, H. erectus was one of many meat-eating species to expand into Eurasia from Africa, rather than a uniquely adapted species. Relying on meat as a primary food source might have allowed many meat-eating species, including humans, to move through many different environments without having to quickly learn about unfamiliar and potentially poisonous plants.

However, the migration of humans to eastern Asia may have occurred gradually, through lower latitudes and environments similar to those of Africa. If East African populations of H. erectus moved at only 1.6 km. (1 mi.) every twenty years, they could have reached Southeast Asia in 150,000 years. Over this amount of time, humans could have learned about and begun relying on edible plant foods. Thus, eating meat may not have played a crucial role in the first human migrations to new continents. Careful comparison of animal fossils, stone tools, and early human fossils from Africa, Asia, and Europe will help scientists better determine what factors motivated and allowed humans to venture out of Africa for the first time.
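
The migration-rate claim above is simple arithmetic, and a short sketch makes it checkable. The sketch below assumes an overland distance of roughly 12,000 km from East Africa to Southeast Asia; that distance is an illustrative figure, not one given in the text:

```python
# Back-of-the-envelope check of the migration-rate claim.
# ASSUMPTION: ~12,000 km overland from East Africa to Southeast Asia;
# the rate (1.6 km per twenty years) comes from the text above.

RATE_KM_PER_STEP = 1.6   # km moved per step
YEARS_PER_STEP = 20      # one step every twenty years
DISTANCE_KM = 12_000     # assumed overland distance

steps = DISTANCE_KM / RATE_KM_PER_STEP   # 7,500 steps
years = steps * YEARS_PER_STEP           # 150,000 years
print(f"About {years:,.0f} years to cover {DISTANCE_KM:,} km")
```

At that pace the journey indeed takes about 150,000 years, consistent with the figure quoted above.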

The origin of our own species, Homo sapiens, is one of the most hotly debated topics in paleoanthropology. This debate centres on whether modern humans have a direct relationship to H. erectus or to the Neanderthals, a more modern group of humans who evolved within the past 250,000 years. Paleoanthropologists commonly use the term anatomically modern Homo sapiens to distinguish the people of today from these similar predecessors.

Traditionally, paleoanthropologists classified as Homo sapiens any fossil human younger than 500,000 years old with a braincase larger than that of H. erectus. Thus, many scientists who believe that modern humans descend from a single line dating back to H. erectus use the name archaic Homo sapiens to refer to a variety of fossil humans that predate anatomically modern H. sapiens. The designation archaic denotes a set of physical features typical of Neanderthals and other species of late Homo before modern Homo sapiens. These features include a combination of a robust skeleton, a large but low braincase (positioned somewhat behind, rather than over, the face), and a lower jaw lacking a prominent chin. In this sense, Neanderthals are sometimes classified as a subspecies of archaic H. sapiens, H. sapiens neanderthalensis. Other scientists think that the variation in archaic fossils falls into clearly identifiable sets of traits, and that any type of human fossil exhibiting a unique set of traits should have a new species name. According to this view, the Neanderthals belong to their own species, H. neanderthalensis.

The Neanderthals lived in areas ranging from western Europe through central Asia from about 200,000 to about 28,000 years ago. The name Neanderthal (sometimes spelled Neandertal) comes from fossils found in 1856 in the Feldhofer Cave of the Neander Valley in Germany (Tal, a modern form of Thal, means ‘valley' in German). Scientists realized several years later that prior discoveries, at Engis, Belgium, in 1829 and at Forbes Quarry, Gibraltar, in 1848, also represented Neanderthals. These two earlier discoveries were the first early human fossils ever found.

In the past, scientists claimed that Neanderthals differed greatly from modern humans. However, the basis for this claim came from a faulty reconstruction of a Neanderthal skeleton that showed it with bent knees and a slouching gait. This reconstruction gave the common but mistaken impression that Neanderthals were dim-witted brutes who lived a crude lifestyle. On the contrary, Neanderthals, like the species that preceded them, walked fully upright without a slouch or bent knees. In addition, their cranial capacity was quite large at about 1,500 cu. cm. (about 90 cu. in.), larger on average than that of modern humans. (The difference probably relates to the greater muscle mass of Neanderthals as compared with modern humans, which usually correlates with a larger brain size.)

Compared with earlier humans, Neanderthals had a high degree of cultural sophistication. They appear to have performed symbolic rituals, such as the burial of their dead. Neanderthal fossils, including some complete skeletons, are quite common compared with those of earlier forms of Homo, in part because of the Neanderthal practice of intentional burial. Neanderthals also produced sophisticated types of stone tools known as Mousterian, which involved creating blanks (rough forms) from which several types of tools could be made. Despite many physical similarities, Neanderthals differed from modern humans in several ways. The typical Neanderthal skull had a low forehead, a large nasal area (suggesting a large nose), a forward-projecting nasal and cheek region, a prominent brow ridge with a bony arch over each eye, a non-projecting chin, and an obvious space behind the third molar (in front of the upward turn of the lower jaw).

In addition, Neanderthals had a more heavily built and larger-boned skeleton than do modern humans. Other Neanderthal skeletal features included a bowing of the limb bones in some individuals, broad scapulae (shoulder blades), hip joints turned outward, a long and thin pubic bone, short lower leg and arm bones relative to the upper bones, and large surfaces on the joints of the toes and limb bones. Together, these traits made for a powerful, compact body of short stature: males averaged 1.7 m. (5 ft. 5 in.) tall and 84 kg. (185 lb.), and females averaged 1.5 m. (5 ft.) tall and 80 kg. (176 lb.).

The short, stocky build of Neanderthals conserved heat and helped them withstand extremely cold conditions that prevailed in temperate regions beginning about 70,000 years ago. The last known Neanderthal fossils come from western Europe and date from approximately 36,000 years ago.

While Neanderthal populations grew in number in Europe and parts of Asia, other populations of nearly modern humans arose in Africa and Asia. Scientists also commonly refer to these fossils, which are distinct from but similar to those of Neanderthals, as archaic. Fossils from the Chinese sites of Dali, Maba, and Xujiayao display the long, low cranium and large face typical of archaic humans, yet they also have features similar to those of modern people in the region. At the cave site of Jebel Irhoud, Morocco, scientists have found fossils with the long skull typical of archaic humans but also the modern traits of a moderately higher forehead and flatter midface. Fossils of humans from East African sites older than 100,000 years, such as Ngaloba in Tanzania and Eliye Springs in Kenya, also seem to show a mixture of archaic and modern traits.

The oldest known fossils that possess skeletal features typical of modern humans date from between 130,000 and 90,000 years ago. Several key features distinguish the skulls of modern humans from those of archaic species. These features include a much smaller brow ridge, if any; a globe-shaped braincase; and a flat or only slightly projecting face of reduced size, located under the front of the braincase. Among all mammals, only humans have a face positioned directly beneath the frontal lobe (forward-most area) of the brain. As a result, modern humans tend to have a higher forehead than did Neanderthals and other archaic humans. The cranial capacity of modern humans ranges from about 1,000 to 2,000 cu. cm. (60 to 120 cu. in.), with the average being about 1,350 cu. cm. (80 cu. in.).

Scientists have found both fragmentary and nearly complete cranial fossils of early anatomically modern Homo sapiens at the sites of Singa, Sudan; Omo, Ethiopia; Klasies River Mouth, South Africa; and Skhul Cave, Israel. Based on these fossils, many scientists conclude that modern H. sapiens had evolved in Africa by 130,000 years ago and started spreading to diverse parts of the world, by a route through the Near East, sometime before 90,000 years ago.

Paleoanthropologists are engaged in an ongoing debate about where modern humans evolved and how they spread around the world. Differences in opinion rest on the question of whether the evolution of modern humans took place in a small region of Africa or over a broad area of Africa and Eurasia. By extension, opinions differ as to whether modern human populations from Africa displaced all existing populations of earlier humans, eventually resulting in their extinction.

Those who think modern humans originated only in Africa and then spread around the world support the out of Africa hypothesis. Those who think modern humans evolved over a large region of Eurasia and Africa support the so-called multi-regional hypothesis. Richard Leakey's work at Omo-Kibish gave scientists a fresh start in their study of the African origins of Homo sapiens. In fact, his finds gave them two beginnings. First, they led a few researchers in the 1970s to conclude that the Kibish man was a far more likely ancestor for the Cro-Magnons, a race of early Europeans who thrived about 25,000 years ago, than their immediate predecessors in Europe, the heavyset Neanderthals. Then in the 1980s, a new reconstruction and study of the Kibish man revealed an even more startling possibility: that he was a far better candidate as the forebear not just of the Cro-Magnons but of everyone alive today, not just Europeans but all the other peoples of the world, from the Eskimos of Greenland to the Twa people of Africa, and from Australian Aborigines to Native Americans. In other words, the Kibish man acted as pathfinder for a new genesis for the human species.

In the past few years, many paleontologists, anthropologists, and geneticists have come to agree that this ancient resident of the riverbanks of Ethiopia and all his Kibish kin, both far and near, may be among our ancestors. However, it has also become clear that the evolutionary pathway of these fledgling modern humans was not an easy one. At one stage, according to genetic data, our species became as endangered as the mountain gorilla is today, its population reduced to only about 10,000 adults. Restricted to one region of Africa, but tempered in the flames of near extinction, this population went on to make a remarkable comeback. It then spread across Africa until, nearly 100,000 years ago, it had colonized much of the continent's savannas and woodlands. We see the imprint of this spread in biological studies revealing that races within Africa are genetically the most disparate on the planet, indicating that modern humans have existed there in larger numbers for a longer time than anywhere else.

We can also observe intriguing clues about our African origins in other, less obvious but equally exciting arenas. One example comes from Congo-Kinshasa. This huge tropical African country has never assumed much importance in the field of paleoanthropology, the branch of anthropology concerned with the investigation of ancient humans. Unlike the countries to the east, Ethiopia, Kenya, and Tanzania, Congo-Kinshasa has provided few exciting fossil sites, until recently.

In the neglected western branch of the African Rift Valley, that giant geological slash that has played such a pivotal role in human evolution, the Semliki River runs northward between two large lakes, and its waters eventually feed the Nile. Along its banks, sediments are being exposed that were laid down 90,000 years ago, just as Homo sapiens was making its mark across Africa.

At the town of Katanda, researchers have uncovered an archaeological treasure trove: thousands of artifacts, mostly stone tools, and a few bone implements that quite astonished the archaeologists, a team led by the husband-and-wife pair of John Yellen, of the National Science Foundation, Washington, and Alison Brooks, of George Washington University. Among the wonders they have uncovered are sophisticated bone harpoons and knives. Previously it was thought that the Cro-Magnons were the first humans to develop such delicate carving skills. Yet this very much older group of Homo sapiens, living in the heartland of Africa, displayed the same extraordinary skills as craftworkers. It was as if, said one observer, a prototype Pontiac car had been found in the attic of Leonardo da Vinci.

There were other surprises for the researchers as well. Apart from the finely carved implements, they found fish bones, including some from two-metre-long catfish. It seems the Katanda people were efficiently and repeatedly catching catfish during their spawning season, indicating that systematic fishing is quite an ancient human skill and not some recently acquired expertise, as many archaeologists had previously thought. In addition, the team found evidence that a Katanda site held at least two separate but similar clusters of stones and debris, which looked like the residue of two distinct neighbouring groups: a sign of the possible early impact of the nuclear family on society, a social unit that now defines the fabric of our lives.

Clearly, our African forebears were sophisticated people. Bands of them, armed with new proficiencies, like those men and women who had flourished on the banks of the Semliki, began an exodus from their African homeland. Slowly they trickled northward into the Levant, the region bordering the eastern Mediterranean. Then, by 80,000 years ago, small groups began spreading across the globe, via the Middle East, planting the seeds of modern humanity in Asia and later in Europe and Australia.

Today men and women conduct themselves in highly complex ways: some are uncovering the strange, indeterminate nature of matter, with its building blocks of quarks and leptons; some are probing the first few seconds of the origins of the universe fifteen billion years ago; while others are trying to develop artificial brains capable of staggering feats of calculation. Yet the intellectual tools that allow us to investigate the deepest secrets of our world are the ones that were forged during our fight for survival, in a very different set of circumstances from those that prevail today. How on earth could an animal that struggled for survival like any other creature, whose time was absorbed in a constant search for meat, nuts, and tubers, and who had to maintain constant vigilance against predators, develop the mental hardwiring needed by a nuclear physicist or an astronomer? This is a vexing issue that takes us to the very heart of our African exodus, to the journey that brought us from precarious survival on a single continent to global control.

If we are ever to understand the special attributes that delineate a modern human being, we have to attempt to solve such puzzles. How was the Kibish man different from his Neanderthal cousins in Europe, and what evolutionary pressures led the Katanda people to develop in such crucially different ways, ironically in the heart of a continent that has for far too long been stigmatized as backward?

French researchers announced at a press conference on May 22, 1996, the discovery of a new fossil hominid species in central Chad, estimated to have lived between three million and 3.5 million years ago. The fossilized remains of a lower jaw and seven teeth were found in 1995 near Koro Toro, in the desert about 2500 km (about 1500 mi) east of the Great Rift Valley in Africa, the site of many major hominid fossil finds. The leader of the French team that discovered the fossils at Bahr-el-Ghazal, Chad, paleontologist Michel Brunet of the University of Poitiers, named the species Australopithecus bahrelghazali (from the Arabic name of the nearby River of the Gazelles). The research team published its findings in the May 20 bulletin of the French Academy of Sciences.

In a letter to the journal Nature published November 16, 1995, the researchers had initially classified the fossil as an example of Australopithecus afarensis, the 3.4-million-year-old species that walked upright in eastern Africa. In the letter, Brunet said that more detailed comparisons with other fossils were necessary before he could determine whether the jaw came from another species, and he noted that geographic separation can produce differences among animals of the same species. After the letter was published, Brunet travelled to museums to compare the jaw with other hominid bones.

The fossil combines both primitive and modern hominid features. The jaw includes the right and left premolars, both canines, and the right lateral incisor. Brunet said the strong canine teeth and the shape of the incisor resemble human teeth more than ape teeth. The chin area is more vertical than the backward-sloping chin of A. afarensis, and it lacks the strong reinforcement for chewing power found among other early hominids. However, the premolars retain primitive characteristics, such as three roots, whereas the premolars of modern humans have only one root. Scientists said they needed more fossil material before they could place the species on the evolutionary tree.

Brunet cited the find as the first evidence of hominid occupation of areas outside the Great Rift Valley and South Africa, where anthropologists have concentrated their search for hominid fossils. Other experts noted that the eroding volcanic soils in the Great Rift Valley are simply better at preserving and exposing fossils than the soils of most other regions in Africa. Although many digs have occurred in the Great Rift Valley, most scientists believe that hominids existed throughout Africa.

Researchers have conducted many genetic studies and carefully assessed fossils to determine which of these hypotheses agrees more with the scientific evidence. The results of this research do not entirely confirm or reject either one, so some scientists think a compromise between the two hypotheses is the best explanation. The debate between these views has implications for how scientists understand the concept of race in humans: the question raised is whether the physical differences among modern humans evolved deep in the past or only recently. According to the out of Africa hypothesis, also known as the replacement hypothesis, early populations of modern humans from Africa migrated to other regions and entirely replaced existing populations of archaic humans. The replaced populations would have included the Neanderthals and any surviving groups of Homo erectus. Supporters of this view note that many modern human skeletal traits evolved recently, within the past 200,000 years or so, suggesting a single, common origin. In addition, the anatomical similarities shared by all modern human populations far outweigh those shared by premodern and modern humans within particular geographic regions. Furthermore, biological research indicates that most new species of organisms, including mammals, arise from small, geographically isolated populations.

According to the multi-regional hypothesis, also known as the continuity hypothesis, the evolution of modern humans began when Homo erectus spread throughout much of Eurasia around one million years ago. Regional populations retained some unique anatomical features for hundreds of thousands of years, but they also mated with populations from neighbouring regions, exchanging heritable traits with each other. This exchange of heritable traits is known as gene flow.

Through gene flow, populations of H. erectus passed on a variety of increasingly modern characteristics, such as increases in brain size, across their geographic range. Gradually this would have resulted in the evolution of more modern-looking humans throughout Africa and Eurasia. The differences among human populations today would, in this view, result from hundreds of thousands of years of regional evolution. This is the concept of continuity. For instance, modern East Asian populations have some skull features that scientists also see in H. erectus fossils from that region.

Some critics of the multi-regional hypothesis claim that it wrongly advocates a scientific belief in race and could be used to encourage racism. Supporters of the theory point out, however, that their position does not imply that modern races evolved in isolation from each other, or that racial differences justify racism. Instead, the theory holds that gene flow linked different populations together. These links allowed progressively more modern features, no matter where they arose, to spread from region to region and eventually become universal among humans.

Scientists have weighed the out of Africa and multi-regional hypotheses against both genetic and fossil evidence. The results do not unequivocally support either one, but weigh more heavily in favour of the out of Africa hypothesis.

Geneticists have studied differences in the DNA (deoxyribonucleic acid) of different populations of humans. DNA is the molecule that contains our heritable genetic code. Differences in human DNA result from mutations in DNA structure. Some mutations result from exposure to external agents, such as solar radiation or certain chemical compounds, while others occur naturally at random.

Geneticists have calculated rates at which mutations can be expected to occur over time. Dividing the total number of genetic differences between two populations by an expected rate of mutation provides an estimate of the time when the two shared a common ancestor. Many estimates of evolutionary ancestry rely on studies of the DNA in cell structures called mitochondria. This DNA is called mtDNA (mitochondrial DNA). Unlike DNA from the nucleus of a cell, which codes for most of the traits an organism inherits from both parents, mtDNA passes only from a mother to her offspring. MtDNA also accumulates mutations about ten times faster than does DNA in the cell nucleus (the location of most DNA). The structure of mtDNA changes so quickly that scientists can easily measure the differences between one human population and another. Two closely related populations should have only minor differences in their mtDNA; conversely, two very distantly related populations should have large differences.
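
The dating logic just described reduces to a single division. The following is a minimal sketch of it, with both the difference count and the mutation rate chosen as purely hypothetical round numbers rather than values from any actual study:

```python
# Minimal sketch of the molecular-clock estimate described above.
# ASSUMPTION: both numbers below are hypothetical placeholders.

mtdna_differences = 30        # observed mtDNA differences between two lineages
mutations_per_year = 1.5e-4   # assumed rate at which differences accumulate

years_since_common_ancestor = mtdna_differences / mutations_per_year
print(f"Common ancestor ~{years_since_common_ancestor:,.0f} years ago")
# -> Common ancestor ~200,000 years ago
```

In practice, estimated mutation rates carry large uncertainties, which is one reason the divergence dates reported in different studies vary so widely.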

MtDNA research into modern human origins has produced two major findings. First, the entire amount of variation in mtDNA across human populations is small in comparison with that of other animal species. This means that all human mtDNA originated from a single ancestral lineage, specifically, a single female, and has been mutating ever since. Most estimates of the mutation rate of mtDNA suggest that this female ancestor lived about 200,000 years ago. In addition, the mtDNA of African populations varies more than that of peoples on other continents. This suggests that the mtDNA of African populations has been accumulating change for a longer time than that of populations in any other region. The woman in Africa from whom all living people inherited their mtDNA is sometimes called the Mitochondrial Eve. Some geneticists and anthropologists have concluded from this evidence that modern humans originated in a small population in Africa and spread from there.

MtDNA studies have weaknesses, however, including the following four. First, the estimated rate of mtDNA mutation varies from study to study, and some estimates put the date of origin closer to 850,000 years ago, the time of Homo erectus. Second, mtDNA makes up only a small part of the total genetic material that humans inherit. The rest of our genetic material, about 400,000 times more than the mtDNA, came from many individuals living at the time of the African Eve, conceivably from many different regions. Third, the time at which modern mtDNA began to diversify does not necessarily coincide with the origin of modern human biological traits and cultural abilities. Fourth, the smaller amount of modern mtDNA diversity outside Africa could result from times when European and Asian populations declined in numbers, perhaps due to climate changes.

Despite these criticisms, many geneticists continue to favour the out of Africa hypothesis of modern human origins. Studies of nuclear DNA also suggest an African origin for a variety of genes. Furthermore, in a remarkable series of studies in the late 1990's, scientists recovered mtDNA from the first Neanderthal fossil found in Germany and from two other Neanderthal fossils. In each case, the mtDNA does not closely match that of modern humans. This finding suggests that at least some Neanderthal populations had diverged from the line to modern humans by 500,000 to 600,000 years ago, and that Neanderthals represent a separate species from modern H. sapiens. In another study, however, mtDNA extracted from a 62,000-year-old Australian H. sapiens fossil was found to differ significantly from modern human mtDNA, suggesting a much wider range of mtDNA variation within H. sapiens than was previously believed. According to the Australian researchers, this finding lends support to the multi-regional hypothesis because it shows that different populations of H. sapiens, possibly including Neanderthals, could have evolved independently in different parts of the world.

As with the genetic research, fossil evidence does not entirely support or refute either of the competing hypotheses of modern human origins. However, many scientists see the balance of evidence favouring an African origin of modern H. sapiens within the past 200,000 years. The oldest known modern-looking skulls come from Africa and date from perhaps 130,000 years ago. The next oldest come from the Near East, where they date from about 90,000 years ago. Fossils of modern humans in Europe date from no earlier than about 40,000 years ago. In addition, the first modern humans in Europe, often called Cro-Magnon people, had elongated lower leg bones, as did African populations adapted to warm, tropical climates. This suggests that populations from warmer regions replaced those in colder European regions, such as the Neanderthals.

Fossils also show that populations of modern humans lived at the same time and in the same regions as did populations of Neanderthals and Homo erectus, but that each retained its distinctive physical features. The different groups overlapped in the Near East and Southeast Asia for between about 30,000 and 50,000 years. The maintenance of physical differences for this amount of time implies that archaic and modern humans either could not or generally did not interbreed. To some scientists, this also means that the Neanderthals belong to a separate species, H. neanderthalensis, and that migrating populations of modern humans entirely replaced archaic humans in both Europe and eastern Asia.

On the other hand, fossils of archaic and modern humans in some regions show continuity in certain physical characteristics. These similarities may indicate multi-regional evolution. For example, both archaic and modern skulls of eastern Asia have flatter cheek and nasal areas than do skulls from other regions. By contrast, the same parts of the face project forward in the skulls of both archaic and modern humans of Europe. If these traits were influenced primarily by genetic inheritance rather than environmental factors, archaic humans may have produced modern humans in some regions or at least interbred with migrant modern-looking humans.

Each of the competing major hypotheses of modern human origins has its strengths and weaknesses. Genetic evidence appears to support the out of Africa hypothesis. In the western half of Eurasia and in Africa, this hypothesis also seems the better explanation, particularly for the apparent replacement of Neanderthals by modern populations. On the other hand, the multi-regional hypothesis appears to explain some of the regional continuity found in East Asian populations.

Therefore, many paleoanthropologists advocate a theory of modern human origins that combines elements of the out of Africa and the multi-regional hypotheses. Humans with modern features may have first emerged in Africa or come together there as a result of gene flow with populations from other regions. These African populations may then have replaced archaic humans in certain regions, such as western Europe and the Near East. Elsewhere, especially in East Asia, gene flow may have occurred among local populations of archaic and modern humans, resulting in distinct and enduring regional characteristics.

All three of these views, the two competing positions and the compromise, acknowledge the strong biological unity of all people. In the multi-regional hypothesis, this unity results from hundreds of thousands of years of continued gene flow among all human populations. According to the out of Africa hypothesis, on the other hand, similarities among all living human populations result from a recent common origin. The compromise position accepts both as reasonable and compatible explanations of modern human origins.

The story of human evolution is as much about the development of cultural behaviour as it is about changes in physical appearance. The term culture, in anthropology, traditionally refers to all human creations and activities governed by social customs and rules. It includes elements such as technology, language, and art. Human cultural behaviour depends on the social transfer of information from one generation to the next, which in turn depends on a sophisticated system of communication, such as language.

The term culture has often been used to distinguish the behaviour of humans from that of other animals. However, some nonhuman animals also appear to have forms of learned cultural behaviour. For instance, different groups of chimpanzees use sticks in different ways to capture termites for food. Also, in some regions chimps use stones or pieces of wood for cracking open nuts, while chimps in other regions do not practice this behaviour, although their forests have similar nut trees and materials for making tools. These regional differences resemble traditions that people pass from generation to generation. Traditions are a fundamental aspect of culture, and paleoanthropologists assume that the earliest humans also had some types of traditions.

However, modern humans differ from other animals - and probably from many early human species - in that they actively teach each other and can pass on the accumulated knowledge that results. People also have a uniquely long period of learning before adulthood, and the physical and mental capacity for language. Language in all its forms - spoken, signed, and written - provides a medium for communicating vast amounts of information, much more than any other animal could probably transmit through gestures and vocalizations.

Scientists have traced the evolution of human cultural behaviour through the study of archaeological artifacts, such as tools, and related evidence, such as the charred remains of cooked food. Artifacts show that throughout much of human evolution, culture developed slowly. During the Palaeolithic, or early Stone Age, basic techniques for making stone tools changed very little for periods of well over a million years.

Human fossils also provide information about how culture has evolved and what effects it has had on human life. For example, over the past 30,000 years, the basic anatomy of humans has undergone only one prominent change: the bones of the average human skeleton have become much smaller and thinner. Innovations in making and using tools and in obtaining food - results of cultural evolution - may have led to more efficient and less physically taxing lifestyles, and thus caused these changes in the skeleton.

Culture has played a prominent role in the evolution of Homo sapiens. Within the last 60,000 years, people have migrated to settle most previously unoccupied regions of the world, such as small island chains and the continents of Australia and the Americas. These migrations depended on developments in transportation, hunting and fishing tools, shelter, and clothing. Within the past 30,000 years, cultural evolution has sped up dramatically. This change shows up in the archaeological record as a rapid expansion of stone tool types and toolmaking techniques, and in works of art and indications of evolving religion, such as burials. By 10,000 years ago, people had begun to harvest and cultivate grains and to domesticate animals - a fundamental change in the ecological relationship between human beings and other life on Earth. The development of agriculture gave people larger quantities and more stable supplies of food, which set the stage for the rise of the first civilizations. Today, culture, and particularly technology, dominates human life.

Paleoanthropologists and archaeologists have studied many topics in the evolution of human cultural behaviour. These have included the evolution of (1) social life; (2) subsistence (the acquisition and production of food); (3) the making and using of tools; (4) environmental adaptation; (5) symbolic thought and its expression through language, art, and religion; and (6) the development of agriculture and the rise of civilizations.

Most primate species, including the African apes, live in social groups of varying size and complexity. Within their groups, individuals often have multifaceted roles, based on age, sex, status, social skills, and personality. The discovery in 1975 at Hadar, Ethiopia, of a group of several Australopithecus afarensis individuals who died together 3.2 million years ago appears to confirm that early humans lived in social groups. Scientists have referred to this collection of fossils as The First Family.

One of the first physical changes in the evolution of humans from apes - a decrease in the size of male canine teeth - indicates a change in social relations. Male apes sometimes use their large canines to threaten (or sometimes fight with) other males of their species, usually over access to females, territory, or food. The evolution of small canines in australopiths implies that males had either developed other methods of threatening each other or become more cooperative. In addition, both male and female australopiths had small canines, indicating a reduction of sexual dimorphism from that in apes. Yet, although sexual dimorphism in canine size decreased in australopiths, males were still much larger than females. Thus, male australopiths might have competed aggressively with each other based on sheer size and strength, and the social life of humans may not have differed much from that of apes until later times.

Scientists believe that several of the most important changes from apelike to characteristically human social life occurred in species of the genus Homo, whose members show even less sexual dimorphism. These changes, which may have occurred at different times, included: (1) prolonged maturation of infants, including an extended period during which they required intensive care from their parents; (2) special bonds of sharing and exclusive mating between particular males and females, called pair-bonding; and (3) the focus of social activity at a home base, a safe refuge in a special location known to family or group members.

Humans, who have a large brain, have a prolonged period of infant development and childhood because the brain takes a long time to mature. Since the australopith brain was not much larger than that of a chimp, some scientists think that the earliest humans had a more apelike rate of growth, far more rapid than that of modern humans. This view is supported by studies of tooth development in australopith fossils - a good indicator of overall body development.

In addition, the human brain becomes very large as it develops, so a woman must give birth at an early stage of development in order for the infant's head to fit through her birth canal. Thus, human babies require a long period of care to reach a stage of development at which they depend less on their parents. In contrast with a modern female, a female australopith could give birth to a baby at an advanced stage of development because its brain would not be too large to pass through the birth canal. The need to give birth early - and therefore to provide more infant care - may have evolved around the time of the middle Homo species H. ergaster. This species had a brain significantly larger than that of the australopiths, but a narrow birth canal.

Pair-bonding, usually of a short duration, occurs in a variety of primate species. Some scientists speculate that prolonged bonds developed in humans along with increased sharing of food. Among primates, humans have a distinct type of food-sharing behaviour. People will delay eating food until they have returned with it to the location of other members of their social group. This type of food sharing may have arisen at the same time as the need for intensive infant care, probably by the time of H. ergaster. By devoting himself to a particular female and sharing food with her, a male could increase the chances of survival for his own offspring.

Humans have lived as foragers for millions of years. Foragers obtain food when and where it is available over a broad territory. Modern-day foragers (also known as hunter-gatherers) - such as the San people in the Kalahari Desert of southern Africa - also set up central campsites, or home bases, and divide work duties between men and women. Women gather readily available plant and animal foods, while men take on the often less successful task of hunting. Female and male family members and relatives pool their food for sharing at the home base. The modern form of the home base - one that also serves as a haven for raising children and caring for the sick and elderly - may have first developed with middle Homo after about 1.7 million years ago. However, the first evidence of hearths and shelters - features common to all modern home bases - comes only from after 500,000 years ago. Thus, a modern form of social life may not have developed until late in human evolution.

Human subsistence refers to the types of food humans eat, the technology used in and methods of obtaining or producing food, and the ways in which social groups or societies organize themselves for getting, making, and distributing food. For millions of years, humans probably fed on the go, much as other primates do. The lifestyle associated with this feeding strategy is generally organized around small, family-based social groups that take advantage of different food sources at different times of year.

The early human diet probably resembled that of closely related primate species. The great apes eat mostly plant foods. Many primates also eat easily obtained animal foods such as insects and bird eggs. Among the few primates that hunt, chimpanzees will prey on monkeys and even small gazelles. The first humans probably also had a diet based mostly on plant foods. In addition, they undoubtedly ate some animal foods and might have done some hunting. Human subsistence began to diverge from that of other primates with the production and use of the first stone tools. With this development, the meat and marrow (the inner, fat-rich tissue of bones) of large mammals became a part of the human diet. Thus, with the advent of stone tools, the diet of early humans became distinguished in an important way from that of apes.

Scientists have found broken and butchered fossil bones of antelopes, zebras, and other comparably sized animals at the oldest archaeological sites, which date from some 2.5 million years ago. With the evolution of late Homo, humans began to hunt even the largest animals on Earth, including mastodons and mammoths, members of the elephant family. Agriculture and the domestication of animals arose only in the recent past, with H. sapiens.

Paleoanthropologists have debated whether early members of the modern human genus were aggressive hunters, peaceful plant gatherers, or opportunistic scavengers. Many scientists once thought that predation and the eating of meat had strong effects on early human evolution. This hunting hypothesis suggested that early humans in Africa survived particularly arid periods by aggressively hunting animals with primitive stone or bone tools. Supporters of this hypothesis thought that hunting and competition with carnivores powerfully influenced the evolution of human social organization and behaviour; toolmaking; anatomy, such as the unique structure of the human hand; and intelligence.

Beginning in the 1960s, studies of apes cast doubt on the hunting hypothesis. Researchers discovered that chimpanzees cooperate in hunts of at least small animals, such as monkeys. Hunting, therefore, did not entirely distinguish early humans from apes, and hunting alone may not have determined the path of early human evolution. Some scientists instead argued for the importance of food-sharing in early human life. According to a food-sharing hypothesis, cooperation and sharing within family groups - rather than aggressive hunting - strongly influenced the path of human evolution.

Scientists once thought that archaeological sites as much as two million years old provided evidence to support the food-sharing hypothesis. Some of the oldest archaeological sites were places where humans brought food and stone tools together. Scientists thought that these sites represented home bases, with many of the social features of modern hunter-gatherer campsites, including the sharing of food between pair-bonded males and females.

Criticism of the food-sharing hypothesis resulted from more careful study of animal bones from the early archaeological sites. Microscopic analysis of these bones revealed the marks of both human tools and carnivore teeth, indicating that both humans and potential predators - such as hyenas, cats, and jackals - were active at these sites. This evidence suggested that what scientists had thought were home bases where early humans shared food were in fact food-processing sites that humans abandoned to predators. Thus, the evidence did not clearly support the idea of food-sharing among early humans.

The new research also suggested a different view of early human subsistence-that early humans scavenged meat and bone marrow from dead animals and did little hunting. According to this scavenging hypothesis, early humans opportunistically took parts of animal carcasses left by predators, and then used stone tools to remove marrow from the bones.

Observations that many animals, such as antelope, often die off in the dry season make the scavenging hypothesis quite plausible. Early toolmakers would have had plenty of opportunity to scavenge animal fat and meat during dry times of the year. However, other archaeological studies - and a better appreciation of the importance of hunting among chimpanzees - suggest that the scavenging hypothesis is too narrow. Many scientists now believe that early humans both scavenged and hunted. Evidence of carnivore tooth marks on bones cut by early human toolmakers suggests that the humans scavenged at least the larger of the animals they ate. They also ate a variety of plant foods. Some disagreement remains, however, about how much early humans relied on hunting, especially the hunting of smaller animals.

Scientists debate when humans first began hunting on a regular basis. For instance, the discovery of elephant fossils together with tools made by middle Homo once led researchers to conclude that members of this species were hunters of big game. However, the simple association of animal bones and tools at the same site does not necessarily mean that early humans had killed the animals or eaten their meat. Animals may die in many ways, and natural forces can accidentally place fossils next to tools. Recent excavations at Olorgesailie, Kenya, show that H. erectus cut meat from elephant carcasses but do not reveal whether these humans were regular or specialized hunters.

Humans who lived outside Africa - especially in colder temperate climates - almost certainly needed to eat more meat than their African counterparts. Humans in temperate Eurasia would have had to learn which plants they could safely eat, and the number of available plant foods would drop significantly during the winter. Still, although scientists have found very few fossils of edible or eaten plants at early human sites, early inhabitants of Europe and Asia probably did eat plant foods besides meat.

Sites that provide the clearest evidence of early hunting include Boxgrove, England, where about 500,000 years ago people trapped several large game animals between a watering hole and the side of a cliff and then slaughtered them. At Schöningen, Germany, a site about 400,000 years old, scientists have found wooden spears with sharp ends that were well designed for throwing and probably used in hunting large animals.

Neanderthals and other archaic humans seem to have eaten whatever animals were available at a particular time and place. So, for example, in European Neanderthal sites, the number of bones of reindeer (a cold-weather animal) and red deer (a warm-weather animal) changed depending on what the climate had been like. Neanderthals probably also combined hunting and scavenging to obtain animal protein and fat.

For at least the past 100,000 years, various human groups have eaten foods from the ocean or coast, such as shellfish and some sea mammals and birds. Others began fishing in interior rivers and lakes. Probably between 90,000 and 80,000 years ago, people in Katanda, in what is now the Democratic Republic of the Congo, caught large catfish using a set of barbed bone points, the oldest known specialized fishing implements. The oldest stone tips for arrows or spears date from about 50,000 to 40,000 years ago. These technological advances, probably first developed by early modern humans, indicate an expansion in the kinds of foods humans could obtain. Beginning about 40,000 years ago, humans made even more significant advances in hunting dangerous animals and large herds, and in exploiting ocean resources. People cooperated in large hunting expeditions in which they killed many reindeer, bison, horses, and other animals of the expansive grasslands that existed at that time. In some regions, people became specialists in hunting certain kinds of animals. The familiarity these people had with the animals they hunted appears in sketches and paintings on cave walls, dating from as much as 32,000 years ago. Hunters also used the bones, ivory, and antlers of their prey to create art and beautiful tools. In some areas, such as the central plains of North America that once teemed with a now-extinct type of large bison (Bison occidentalis), hunting may have contributed to the extinction of entire species.

The making and use of tools alone probably did not distinguish early humans from their ape predecessors. Instead, humans made the important breakthrough of using one tool to make another. Specifically, they developed the technique of precisely hitting one stone against another, known as knapping. Stone toolmaking characterized the period known as the Stone Age, which began at least 2.5 million years ago in Africa and lasted until the development of metal tools within the last 7,000 years (at different times in different parts of the world). Although early humans may have made stone tools before 2.5 million years ago, toolmakers may not have remained long enough in one spot to leave clusters of tools that an archaeologist would notice today.

The earliest simple form of stone toolmaking involved breaking and shaping an angular rock by hitting it with a palm-sized round rock known as a hammerstone. Scientists refer to tools made in this way as Oldowan, after Olduvai Gorge in Tanzania, a site from which many such tools have come. The Oldowan tradition lasted for about one million years. Oldowan tools include large stones with a chopping edge, and small, sharp flakes that could be used to scrape and slice. Sometimes Oldowan toolmakers used anvil stones (flat rocks found or placed on the ground) on which hard fruits or nuts could be broken open. Chimpanzees are known to do this today.

Humans have always adapted to their environments by adjusting their behaviour. For instance, early australopiths moved both in the trees and on the ground, which probably helped them survive environmental fluctuations between wooded and more open habitats. Early Homo adapted by making stone tools and transporting their food over long distances, thereby increasing the variety and quantity of foods they could eat. An expanded and flexible diet would have helped these toolmakers survive unexpected changes in their environment and food supply.

When populations of H. erectus moved into the temperate regions of Eurasia, they faced new challenges to survival. During the colder seasons they had to either move away or seek shelter, such as in caves. Some of the earliest definitive evidence of cave dwellers dates from around 800,000 years ago at the site of Atapuerca in northern Spain. This site may have been home to early H. heidelbergensis populations. H. erectus also used caves for shelter.

Eventually, early humans learned to control fire and to use it to create warmth, cook food, and protect themselves from other animals. The oldest known fire hearths date from between 450,000 and 300,000 years ago, at sites such as Bilzingsleben, Germany; Vértesszőlős, Hungary; and Zhoukoudian (Chou-k'ou-tien), China. African sites as old as 1.6 million to 1.2 million years contain burned bones and reddened sediments, but many scientists find such evidence too ambiguous to prove that humans controlled fire. Early populations in Europe and Asia may also have worn animal hides for warmth during glacial periods. The oldest known bone needles, which indicate the development of sewing and tailored clothing, date from about 30,000 to 26,000 years ago.

Behaviour relates directly to the development of the human brain, and particularly the cerebral cortex, the part of the brain that allows abstract thought, beliefs, and expression through language. Humans communicate through the use of symbols - ways of referring to things, ideas, and feelings that communicate meaning from one individual to another but that need not have any direct connection to what they identify. A word, for instance, is just one type of symbol, and it need not resemble what it represents: English-speaking people use the word lion to describe a lion, not because a dangerous feline looks like the letters l-i-o-n, but because these letters together have a meaning created and understood by people.

People can also paint abstract pictures or play pieces of music that evoke emotions or ideas, even though emotions and ideas have no form or sound. In addition, people can conceive of and believe in supernatural beings and powers-abstract concepts that symbolize real-world events such as the creation of Earth and the universe, the weather, and the healing of the sick. Thus, symbolic thought lies at the heart of three hallmarks of modern human culture: language, art, and religion.

In language, people creatively join words together in an endless variety of sentences, each with a distinct meaning determined by mental rules, or grammar. Language provides the ability to communicate complex concepts. It also allows people to exchange information about both past and future events, about objects that are not present, and about complex philosophical or technical concepts.

Language gives people many adaptive advantages, including the ability to plan, to communicate the location of food or dangers to other members of a social group, and to tell stories that unify a group, such as mythologies and histories. However, words, sentences, and languages cannot be preserved like bones or tools, so the evolution of language is one of the most difficult topics to investigate through scientific study.

It appears that modern humans have an inborn instinct for language. Under normal conditions it is almost impossible for a person not to develop language, and people everywhere go through the same stages of increasing language skill at about the same ages. While people appear to have inborn genetic information for developing language, they learn specific languages based on the cultures from which they come and the experiences they have in life.

The ability of humans to have language depends on the complex structure of the modern brain, which has many interconnected, specific areas dedicated to the development and control of language. The complexity of the brain structures necessary for language suggests that it probably took a long time to evolve. While paleoanthropologists would like to know when these important parts of the brain evolved, endocasts (inside impressions) of early human skulls do not provide enough detail to show this.

Some scientists think that even the early Australopiths had some ability to understand and use symbols. Support for this view comes from studies with chimpanzees. A few chimps and other apes have been taught to use picture symbols or American Sign Language for simple communication. Nevertheless, it appears that language-as well as art and religious ritual-became vital aspects of human life only during the past 100,000 years, primarily within our own species.

Humans also express symbolic thought through many forms of art, including painting, sculpture, and music. The oldest known object of possible symbolic and artistic value dates from about 250,000 years ago and comes from the site of Berekhat Ram, Israel. Scientists have interpreted this object, a figure carved into a small piece of volcanic rock, as a representation of the outline of a female body. Only a few other possible art objects are known from between 200,000 and 50,000 years ago. These items, from western Europe and usually attributed to Neanderthals, include two simple pendants - a tooth and a bone with bored holes - and several grooved or polished fragments of tooth and bone.

Sites dating from at least 400,000 years ago contain fragments of red and black pigment. Humans might have used these pigments to decorate bodies or perishable items, such as wooden tools or clothing of animal hides, but such evidence would not have survived to today. Solid evidence of the sophisticated use of pigments for symbolic purposes - such as in religious rituals - comes only from after 40,000 years ago. From early in this period, researchers have found carefully made crayons used in painting and evidence that humans burned pigments to create a range of colours.

People began to create and use advanced types of symbolic objects between about 50,000 and 30,000 years ago. Much of this art appears to have been used in rituals-possibly ceremonies to ask spirit beings for a successful hunt. The archaeological record shows a tremendous blossoming of art between 30,000 and 15,000 years ago. During this period people adorned themselves with intricate jewellery of ivory, bone, and stone. They carved beautiful figurines representing animals and human forms. Many carvings, sculptures, and paintings depict stylized images of the female body. Some scientists think such female figurines represent fertility.

Early wall paintings made sophisticated use of texture and colour. The area of what is now southern France contains many famous sites of such paintings. These include the caves of Chauvet, which contain art more than 30,000 years old, and Lascaux, in which paintings date from as much as 18,000 years ago. In some cases, artists painted on walls that can be reached only with special effort, such as by crawling. The act of getting to these paintings gives them a sense of mystery and ritual, as it must have to the people who originally viewed them, and archaeologists refer to some of the most extraordinary painted chambers as sanctuaries. Yet no one knows for sure what meanings these early paintings and engravings had for the people who made them.

Graves from Europe and western Asia indicate that the Neanderthals were the first humans to bury their dead. Some sites contain very shallow graves, which group or family members may have dug simply to remove corpses from sight. In other cases it appears that groups may have observed rituals of grieving for the dead or communicating with spirits. Some researchers have claimed that grave goods, such as meaty animal bones or flowers, had been placed with buried bodies, suggesting that some Neanderthal groups might have believed in an afterlife. In a large proportion of Neanderthal burials, the corpse had its legs and arms drawn in close to its chest, which could indicate a ritual burial position.

Other researchers have challenged these interpretations, however. They suggest that the Neanderthals may have had practical rather than religious reasons for positioning dead bodies. For instance, a body manipulated into a fetal position would need only a small hole for burial, making the job of digging a grave easier. In addition, the animal bones and flower pollen near corpses could have been deposited by accident or without religious intention.

Many scientists once thought that fossilized bones of cave bears (a now-extinct species of large bear) found in Neanderthal caves indicated that these people had what has been referred to as a cave bear cult, in which they worshipped the bears as powerful spirits. However, after careful study researchers concluded that the cave bears probably died while hibernating and that Neanderthals did not collect their bones or worship them. Considering current evidence, the case for religion among Neanderthals remains a matter of dispute.

One of the most important developments in human cultural behaviour occurred when people began to domesticate (control the breeding of) plants and animals. The advent of agriculture led to the development of dozens of staple crops (foods that form the basis of an entire diet) in temperate and tropical regions around the world. Almost the entire population of the world today depends on just four of these major crops: wheat, rice, corn, and potatoes.

The growth of farming and animal herding initiated one of the most remarkable changes ever in the relationship between humans and the natural environment. The change first began just 10,000 years ago in the Near East and has accelerated very rapidly since then. It also occurred independently in other places, including areas of Mexico, China, and South America. Since the first domestication of plants and animals, many species over large areas of the planet have come under human control. The overall number of plant and animal species has decreased, while the populations of a few species needed to support large human populations have grown immensely. In areas dominated by people, interactions between plants and animals usually fall under the control of a single species - Homo sapiens.

By the time of the initial transition to plant and animal domestication, the cold, glacial landscapes of 18,000 years ago had long since given way to warmer and wetter environments. At first, people adapted to these changes by using a wider range of natural resources. Later they began to focus on a few of the most abundant and hardy types of plants and animals. The plants people began to use in large quantities included cereal grains, such as wheat in western Asia; wild varieties of rice in eastern Asia; and maize, of which corn is one variety, in what is now Mexico. Some of the animals people first began to herd included wild goats in western Asia, wild ancestors of chickens in eastern Asia, and llamas in South America.

By carefully collecting plants and controlling wild herd animals, people encouraged the development of species with characteristics favourable for growing, herding, and eating. This process of selecting certain species and controlling their breeding eventually created new species of plants, such as oats, barley, and potatoes, and new kinds of domesticated animals, including cattle, sheep, and pigs. From these domesticated plant and animal species, people obtained important products, such as flour, milk, and wool.

By harvesting and herding domesticated species, people could store large quantities of plant foods, such as seeds and tubers, and have a ready supply of meat and milk. These readily available supplies gave people what is known as food security. In contrast, the foraging lifestyle of earlier human populations never provided them with a significant store of food. With increased food supplies, agricultural peoples could settle into villages and have more children. The new reliance on agriculture and the change to settled village life also had some negative effects. As the average diet became more dependent on large quantities of one or a few staple crops, people became more susceptible to diseases brought on by a lack of certain nutrients. A settled lifestyle also increased contact between people, and between people and their refuse and waste matter, both of which increased the incidence and transmission of disease.

People responded to the increasing population density - and a resulting overuse of farming and grazing lands - in several ways. Some people moved to settle entirely new regions. Others devised ways of producing food in larger quantities and more quickly. The simplest way was to expand onto new fields for planting and new pastures to support growing herds of livestock. Many populations also developed systems of irrigation and fertilization that allowed them to reuse cropland and to produce greater amounts of food on existing fields.

The rise of civilizations - the large and complex types of societies in which most people still live today - developed along with surplus food production. People of high status eventually used food surpluses as a way to pay for labour and to create alliances among groups, often against other groups. In this way, large villages could grow into city-states (self-governing urban centres) and eventually empires covering vast territories. With surplus food production, many people could work exclusively in political, religious, or military positions, or in artistic and various skilled vocations. Command of food surpluses also enabled rulers to control labourers, as in slavery. All civilizations developed on the basis of such hierarchical divisions of status and vocation.

The earliest civilization arose more than 7,000 years ago in Sumer in what is now Iraq. Sumer grew powerful and prosperous by 5,000 years ago, when it centred on the city-state of Ur. The region containing Sumer, known as Mesopotamia, was the same area in which people had first domesticated animals and plants. Other centres of early civilizations include the Nile Valley of Northeast Africa, the Indus Valley of South Asia, the Yellow River Valley of East Asia, the Oaxaca and Mexico valleys and the Yucatán region of Central America, and the Andean region of South America.

All early civilizations had some common features. These included a bureaucratic political body, a military, a body of religious leadership, large urban centres, monumental buildings and other works of architecture, networks of trade, and food surpluses created through extensive systems of farming. Many early civilizations also had systems of writing, numbers and mathematics, and astronomy (with calendars); road systems; a formalized body of law; and facilities for education and the punishment of crimes. With the rise of civilizations, human evolution entered a phase vastly different from all that came before. Until this time, humans had lived in small, family-centred groups essentially exposed to and controlled by forces of nature. Several thousand years after the rise of the first civilizations, most people now live in societies of millions of unrelated people, all separated from the natural environment by houses, buildings, automobiles, and numerous other inventions and technologies. Culture will continue to evolve quickly and in unforeseen directions, and these changes will, in turn, influence the physical evolution of Homo sapiens and any other human species to come.

During the first two billion years of evolution, bacteria were the sole inhabitants of the earth, and the emergence of more complex forms of life is associated with networking and symbiosis. During these two billion years, prokaryotes, or organisms composed of cells with no nucleus (namely bacteria), transformed the earth's surface and atmosphere. It was the interaction of these simple organisms that resulted in the complex processes of fermentation, photosynthesis, oxygen breathing, and the removal of nitrogen gas from the air. Such processes would not have evolved, however, if these organisms were atomized in the Darwinian sense or if the force of interaction between parts existed only outside the parts.

In the life of bacteria, bits of genetic material within organisms are routinely and rapidly transferred to other organisms. At any given time, an individual bacterium may have the use of accessory genes, often from very different strains, which perform functions not carried out by its own DNA. Some of this genetic material can be incorporated into the DNA of the bacterium and some may be passed on to other bacteria. What this picture indicates, as Margulis and Sagan put it, is that "all the world's bacteria have access to a single gene pool and hence to the adaptive mechanisms of the entire bacterial kingdom."

Since the whole of this gene pool operates in some sense within the parts, the speed of recombination is much greater than that allowed by mutation alone, or by random changes inside parts that alter interactions between parts. The existence of the whole within parts explains why bacteria can accommodate change on a worldwide scale in a few years. If the only mechanism at work were mutation inside organisms, millions of years would be required for bacteria to adapt to a global change in the conditions for survival. "By constantly and rapidly adapting to environmental conditions," wrote Margulis and Sagan, "the organisms of the microcosm support the entire biota, their global exchange network ultimately affecting every living plant and animal."

The discovery of symbiotic alliances between organisms that become permanent is another aspect of the modern understanding of evolution that appears to challenge Darwin's view of a universal struggle between atomized individual organisms. For example, the mitochondria found outside the nucleus of modern cells allow the cell to utilize oxygen and to exist in an oxygen-rich environment. Although mitochondria perform integral and essential functions in the life of the cell, they have their own genes composed of DNA, reproduce by simple division, and do so at a time different from the rest of the cell.

The most reasonable explanation for this extraordinary alliance between mitochondria and the rest of the cell is that oxygen-breathing bacteria in primeval seas combined with other organisms. These ancestors of modern mitochondria provided waste disposal and oxygen-derived energy in exchange for food and shelter, and more complex forms of oxygen-breathing life evolved via symbiosis. Since the whole of these organisms was larger than the sum of their symbiotic parts, life functions became possible that could not be carried out by a mere collection of parts. The existence of the whole within the parts coordinates metabolic functions and overall organization.

The modern understanding of the relationship between mind and world is framed here within the larger context of the history of mathematical physics, the origin and extension of the classical view of scientific knowledge, and the various ways in which physics has met previous challenges to the efficacy of classical epistemology. There is no basis in contemporary physics or biology for believing in the stark Cartesian division between mind and world that some have described as 'the disease of the Western mind.' This background will serve for understanding a new relationship between parts and wholes in physics, along with the similar view of that relationship that has emerged in the so-called 'new biology' and in recent studies of the evolution of human consciousness.

Recent studies of the manner in which the brains of our ancestors evolved the capacity to acquire and use complex language systems also present us with a new view of the relationship between parts and wholes in the evolution of human consciousness. These studies suggest that the experience of consciousness cannot be fully explained in terms of the physical substrates of consciousness, or that the whole that corresponds with any moment of conscious awareness is an emergent phenomenon that cannot be fully explained as the sum of its constituent parts. This also suggests that the pre-adaptive changes in the hominid brain that enhanced the capacity to use symbolic communication over a period of 2.5 million years cannot be fully explained in terms of the usual dynamics of Darwinian evolution.

Parts and wholes in Darwinian theory cannot reveal the actual character of a living organism because the organism exists only in relation to the whole of biological life. What Darwin did not anticipate, however, is that the whole that is a living organism appears to exist in some sense within the parts, and that more complex life forms evolved in processes in which synergy and cooperation between parts (organisms) result in new wholes (more complex organizations of parts) with emergent properties that do not exist in the collection of parts. More remarkably, this new understanding of the relationship between part and whole in biology seems closely analogous to that disclosed by the discovery of non-locality in physics. We should stress, however, that this view of the relationship between parts and wholes in biological reality is not orthodox and may occasion some controversy in the community of biological scientists.

Since Darwin's understanding of the relation between part and whole was essentially classical and mechanistic, the new understanding of this relationship is occasioning some revision of his theory of evolution. Darwin made his theory public for the first time in a paper delivered to the Linnean Society in 1858. The paper began, 'All nature is at war, one organism with another, or with external nature.' In the Origin of Species, Darwin speaks more specifically about the character of this war: 'There must be in every case a struggle for existence, either one individual with another of the same species, or with the individuals of distinct species, or with the physical conditions of life.' All these assumptions are apparent in Darwin's definition of natural selection: 'If under changing conditions of life organic beings present individual differences in almost every part of their structure, and this cannot be disputed; if there be, owing to their geometrical rate of increase, a severe struggle for life at some age, season, or year, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of life, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, it would be a most extraordinary fact if no variations had ever occurred useful to each being's own welfare. But if variations useful to any organic being ever do occur, assuredly individuals thus characterized will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance, these will tend to produce offspring similarly characterized. This principle of preservation, or the survival of the fittest, I have called Natural Selection.'

Darwin based his theory on the assumption that the study of variation in domestic animals and plants 'afforded the best and safest clue' to understanding evolution. As it happens, the humans who domesticated animals were also the first to fall victim to the newly evolved germs, but those humans then evolved substantial resistance to the new diseases. When such partly immune people came into contact with others who had no previous exposure to the germs, epidemics resulted in which up to 99 percent of the previously unexposed population was killed. Germs acquired ultimately from domestic animals thus played decisive roles in the European conquests of Native Americans, Australians, South Africans, and Pacific islanders.

The same pattern repeated itself elsewhere in the world whenever peoples lacking native wild mammal species suitable for domestication finally had the opportunity to acquire Eurasian domestic animals. European horses were eagerly adopted by Native Americans in both North and South America within a generation of the escape of horses from European settlements. For example, by the 19th century North America's Great Plains Indians were famous as expert horse-mounted warriors and bison hunters, but they did not even obtain horses until the late 17th century. Sheep acquired from Spaniards similarly transformed Navajo Indian society and led to, among other things, the weaving of the beautiful woollen blankets for which the Navajo have become renowned. Within a decade of Tasmania's settlement by Europeans with dogs, Aboriginal Tasmanians, who had never before seen dogs, began to breed them in large numbers for use in hunting. Thus, among the thousands of culturally diverse native peoples of Australia, the Americas, and Africa, no universal cultural taboo stood in the way of animal domestication.

Surely, if some local wild mammal species of those continents had been domesticable, some Australian, American, and African peoples would have domesticated them and gained great advantage from them, just as they benefited from the Eurasian domestic animals that they immediately adopted when those became available. For instance, consider all the peoples of sub-Saharan Africa living within the range of wild zebras and buffalo. Why wasn't there at least one African hunter-gatherer tribe that domesticated those zebras and buffalo and thereby gained sway over other Africans, without having to await the arrival of Eurasian horses and cattle? All these facts show that the explanation for the lack of native mammal domestication outside Eurasia lay with the locally available wild mammals themselves, not with the local people.

Pointed evidence for the same interpretation comes from pets. Keeping wild animals as pets, and taming them, constitutes an initial stage in domestication. Yet pets have been reported from virtually all traditional human societies on all continents. The variety of wild animals thus tamed is far greater than the variety eventually domesticated, and includes some species that we would scarcely have imagined as pets.

Given our proximity to the animals we love, we must be getting constantly bombarded by their microbes. Those invaders get winnowed by natural selection, and only a few of them succeed in establishing themselves as human diseases.

The first stage is illustrated by dozens of diseases that we now and then pick up directly from our pets and domestic animals. They include cat-scratch fever from our cats, leptospirosis from our dogs, psittacosis from our chickens and parrots, and brucellosis from our cattle. We're similarly liable to pick up diseases from wild animals, such as the tularaemia that hunters can get from skinning wild rabbits. All those microbes are still at an early stage in their evolution into specialized human pathogens. They still don't get transmitted directly from one person to another, and even their transfer to us from animals remains uncommon.

In the second stage a former animal pathogen evolves to the point where it does get transmitted directly between people and causes epidemics. However, the epidemic dies out for any of several reasons, such as being cured by modern medicine, or being stopped when everybody around has already been infected and has either become immune or died. For example, a previously unknown fever termed O'nyong-nyong fever appeared in East Africa in 1959 and proceeded to infect several million Africans. It probably arose from a virus of monkeys and was transmitted to humans by mosquitoes. The fact that patients recovered quickly and became immune to further attack helped the new disease die out quickly. Closer to home for Americans, Fort Bragg fever was the name applied to a new leptospiral disease that broke out in the United States in the summer of 1942 and soon disappeared.
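This die-out condition has a standard formalization in modern epidemiology - a gloss added here, not part of the original account. An epidemic can spread only while each case infects, on average, more than one new person:

\[
R_{\mathrm{eff}} = R_0 \, s > 1 ,
\]

where \(R_0\) is the number of people a single case would infect in a fully susceptible population and \(s\) is the fraction of the population still susceptible. Once more than \(1 - 1/R_0\) of the population is immune or dead, \(s\) falls below \(1/R_0\) and the chain of transmission collapses. With an illustrative \(R_0 = 4\), the epidemic burns out after roughly \(1 - 1/4 = 75\) percent of the population has been infected, which is just the 'everybody around has already been infected' mechanism described above.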

A third stage in the evolution of our major diseases is represented by former animal pathogens that did establish themselves in humans, that have not (not yet?) died out, and that may or may not still become major killers of humanity. The future remains very uncertain for Lassa fever, caused by a virus probably derived from rodents. Lassa fever was first observed in 1969 in Nigeria, where it causes a fatal illness so contagious that Nigerian hospitals have been closed down if even a single case appears. Better established is Lyme disease, caused by a spirochete that we get from the bite of ticks carried by mice and deer. Although the first known human cases in the United States appeared only as recently as 1962, Lyme disease is already reaching epidemic proportions in many parts of the country. The future of AIDS, derived from monkey viruses and first documented in humans around 1959, is even more secure (from the virus's perspective).

The final stage of this evolution is represented by the major, long-established epidemic diseases confined to humans. These diseases must have been the evolutionary survivors of far more pathogens that tried to make the jump to us from animals and mostly failed.

In short, diseases represent evolution in progress, and microbes adapt by natural selection to new hosts and vectors. Compared with cows' bodies, ours offer different immune defences, lice, faeces, and chemistries. In that new environment, a microbe must evolve new ways to live and to propagate itself. In several instructive cases, doctors or veterinarians have actually been able to observe microbes evolving those new ways.

Darwin concluded that nature could, by crossbreeding and selection of traits, produce new species. His explanation of the mechanism in nature that results in a new species took the form of a syllogism: (1) the principle of geometric increase indicates that more individuals in each species are produced than can survive; (2) a struggle for existence occurs as one organism competes with another; (3) in this struggle for existence, slight variations, if they prove advantageous, will accumulate to produce new species. In analogy with the animal breeder's artificial selection of traits, Darwin termed the elimination of the disadvantaged and the promotion of the advantaged natural selection.

In Darwin's view, the struggle for existence occurs 'between' an atomized individual organism and other atomized individual organisms of the same species, 'between' an atomized individual organism and organisms of a different species, or 'between' an atomized individual organism and the physical conditions of life. The whole as Darwin conceived it is the collection of all atomized individual organisms, or parts, and the struggle for survival occurs 'between' or 'outside' the parts. Since Darwin viewed this struggle as the only condition limiting the rate of increase of organisms, he assumed that the rate will be geometrical when the force of the struggle between parts is weak and that the rate will decline as that force becomes stronger.
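Darwin's 'geometrical rate of increase', and its decline as the struggle between parts intensifies, can be glossed with the standard growth equations of later population biology - a modern formalization, not Darwin's own notation:

\[
N_t = N_0 \, r^t , \qquad \frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right) ,
\]

where \(N\) is population size, \(r\) the per-generation rate of increase, and \(K\) the carrying capacity. The first equation describes unchecked geometric increase: 10 organisms doubling each generation (\(r = 2\)) exceed ten million within 20 generations, since \(10 \times 2^{20} \approx 10.5\) million. The second shows the rate of growth falling toward zero as \(N\) approaches \(K\) - a mathematical counterpart of Darwin's strengthening 'force of struggle'.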

Natural selection occurs, said Darwin, when variations 'useful to each being's own welfare' - that is, useful to the welfare of an atomized individual organism - provide a survival advantage, and the organism produces 'offspring similarly characterized.' The force that makes this selection operates 'outside' the totality of parts. For example, the 'infinite complexity of the relations of all organic beings to each other and to their conditions of life' refers to relations between parts, and the 'infinite diversity in structure, constitution, and habits' refers to traits within the atomized part. It seems clear, in our view, that the atomized individual organism in Darwin's biological machine resembles classical atoms, and that the force that drives the interactions of the atomized parts, the 'struggle for life,' resembles Newton's force of universal gravity. Although Darwin parted company with classical determinism in the claim that changes, or mutations, within organisms occur randomly, his view of the relationship between parts and wholes was essentially mechanistic.

Darwin's belief in the theory of evolution by natural selection took its original form from the observations of Malthus. Although it belongs principally to the history of science, Malthus's Essay on Population (1798) was philosophically influential in undermining the Enlightenment belief in unlimited possibilities of human progress and perfection. The Origin of Species was more successful in marshalling the evidence for evolution than in providing a convincing mechanism for genetic change; Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the 'gene' as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution in the life sciences.

Human evolution is the lengthy process of change by which people originated from apelike ancestors. Scientific evidence shows that the physical and behavioural traits shared by all people evolved over a period of at least six million years.

One of the earliest defining human traits, bipedalism - walking on two legs as the primary form of locomotion - evolved more than four million years ago. Other important human characteristics - such as a large and complex brain, the ability to make and use tools, and the capacity for language - developed more recently. Many advanced traits - including complex symbolic expression, such as art, and elaborate cultural diversity - emerged mainly during the past 100,000 years.

Humans are primates. Physical and genetic similarities show that the modern human species, Homo sapiens, has a very close relationship to another group of primate species, the apes. Humans and the so-called great apes (large apes) of Africa - chimpanzees (including bonobos, or so-called pygmy chimpanzees) and gorillas - share a common ancestor that lived sometime between eight million and six million years ago. The earliest humans evolved in Africa, and much of human evolution occurred on that continent. The fossils of early humans who lived between six million and two million years ago come entirely from Africa.

Early humans first migrated out of Africa into Asia probably between two million and 1.7 million years ago. They entered Europe somewhat later, generally within the past one million years. Species of modern humans populated many parts of the world much later. For instance, people first came to Australia probably within the past 60,000 years, and to the Americas within the past 35,000 years. The beginnings of agriculture and the rise of the first civilizations occurred within the past 10,000 years.

The scientific study of human evolution is called paleoanthropology, a subfield of anthropology, the study of human culture, society, and biology. Paleoanthropologists search for the roots of human physical traits and behaviour. They seek to discover how evolution has shaped the potentials, tendencies, and limitations of all people. For many people, paleoanthropology is an exciting scientific field because it illuminates the origins of the defining traits of the human species, as well as the fundamental connections between humans and other living organisms on Earth. Scientists have abundant evidence of human evolution from fossils, artifacts, and genetic studies. However, some people find the concept of human evolution troubling because it can seem to conflict with religious and other traditional beliefs about how people, other living things, and the world came to be. Yet many people have come to reconcile such beliefs with the scientific evidence.

All species of organisms originate through the process of biological evolution. In this process, new species arise from a series of natural changes. In animals that reproduce sexually, including humans, the term species refers to a group whose adult members regularly interbreed, resulting in fertile offspring - that is, offspring themselves capable of reproducing. Scientists classify each species with a unique two-part scientific name. In this system, modern humans are classified as Homo sapiens.

The mechanism for evolutionary change resides in genes - the basic units of heredity. Genes affect how the body and behaviour of an organism develop during its life. The information contained within genes can change through a process known as mutation. The way particular genes are expressed - that is, how they affect the body or behaviour of an organism - can also change. Over time, genetic change can alter a species' overall way of life, such as what it eats, how it grows, and where it can live.

Genetic changes can improve the ability of organisms to survive, reproduce, and, in animals, raise offspring. This process is called adaptation. Parents pass adaptive genetic changes to their offspring, and ultimately these changes become common throughout a population-a group of organisms of the same species that share a particular local habitat. Many factors can favour new adaptations, but changes in the environment often play a role. Ancestral human species adapted to new environments as their genes changed, altering their anatomy (physical body structure), physiology (bodily functions, such as digestion), and behaviour. Over long periods, evolution dramatically transformed humans and their ways of life.

Geneticists estimate that the human line began to diverge from that of the African apes between eight million and five million years ago (paleontologists have dated the earliest human fossils to at least six million years ago). This figure comes from comparing differences in the genetic makeup of humans and apes, and then calculating how long it probably took for those differences to develop. Using similar techniques and comparing the genetic variations among human populations around the world, scientists have calculated that all people may share common genetic ancestors that lived sometime between 290,000 and 130,000 years ago.
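The logic behind such estimates can be illustrated with a simple molecular-clock calculation. The sketch below is illustrative only: the sequence-difference fraction and the per-site substitution rate are round placeholder numbers chosen to land in the reported range, not measured values.

```python
# Toy molecular-clock estimate. If two lineages accumulate genetic
# substitutions independently at a roughly constant rate, the time
# since their common ancestor is approximately
#     T = d / (2 * r)
# where d is the fraction of sites that differ between the two genomes
# and r is the substitution rate per site per year. The factor of 2
# reflects that both lineages accumulate changes after the split.

def divergence_time_years(d, rate_per_site_per_year):
    """Years since two lineages split, under a constant-clock assumption."""
    return d / (2.0 * rate_per_site_per_year)

d_human_chimp = 0.012   # assumed: ~1.2% of sites differ (placeholder)
rate = 1.0e-9           # assumed: substitutions per site per year (placeholder)

t = divergence_time_years(d_human_chimp, rate)
print(f"Estimated split: {t / 1e6:.1f} million years ago")
# With these toy inputs: 6.0 million years ago
```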

Humans belong to the scientific order named Primates, a group of more than 230 species of mammals that also includes lemurs, lorises, tarsiers, monkeys, and apes. Modern humans, early humans, and other species of primates all have many similarities as well as some important differences. Knowledge of these similarities and differences helps scientists to understand the roots of many human traits, as well as the significance of each step in human evolution.

All primates, including humans, share at least part of a set of common characteristics that distinguish them from other mammals. Many of these characteristics evolved as adaptations for life in the trees, the environment in which earlier primates evolved. These include more reliance on sight than smell; overlapping fields of vision, allowing stereoscopic (three-dimensional) sight; limbs and hands adapted for clinging to, leaping from, and swinging on tree trunks and branches; the ability to grasp and manipulate small objects (using fingers with nails instead of claws); large brains in relation to body size; and complex social lives.

The scientific classification of primates reflects evolutionary relationships between individual species and groups of species. Strepsirhine (meaning ‘turned-nosed') primates, whose living representatives include lemurs, lorises, and other groups of species commonly known as prosimians, evolved earliest and are the most primitive forms of primates. The earliest monkeys and apes evolved from ancestral haplorhine (meaning ‘simple-nosed') primates, of which the most primitive living representative is the tarsier. Humans evolved from ape ancestors.

Tarsiers have traditionally been grouped with prosimians, but many scientists now recognize that tarsiers, monkeys, and apes share some distinct traits, and group the three together. Monkeys, apes, and humans, who share many traits not found in other primates, together make up the suborder Anthropoidea. Apes and humans together make up the superfamily Hominoidea, a grouping that emphasizes the close relationship among the species of these two groups.

Strepsirhines are the most primitive types of living primates. The last common ancestors of Strepsirhines and other mammals, creatures similar to tree shrews and classified as Plesiadapiformes, evolved at least sixty-five million years ago. The earliest primates evolved by about fifty-five million years ago, and fossil species similar to lemurs evolved during the Eocene Epoch (about fifty-five million to thirty-eight million years ago). Strepsirhines share all of the basic characteristics of primates, although their brains are not particularly large or complex and they have a more elaborate and sensitive olfactory system (sense of smell) than do other primates. Tarsiers are the only living representatives of a primitive group of primates that ultimately led to monkeys, apes, and humans. Fossil species called omomyids, with some traits similar to those of tarsiers, evolved near the beginning of the Eocene, followed by early tarsier-like primates. While the omomyids and tarsiers are separate evolutionary branches (and there are no living omomyids), they both share features having to do with a reduction in the olfactory system, a trait shared by all haplorhine primates, including humans.

The anthropoid primates are divided into New World (South America, Central America, and the Caribbean Islands) and Old World (Africa and Asia) groups. New World monkeys-such as marmosets, capuchins, and spider monkeys-belong to the infra-order of platyrrhine (broad-nosed) anthropoids. Old World monkeys and apes belong to the infra-order of catarrhine (downward-nosed) anthropoids. Since humans and apes together make up the hominoids, humans are also catarrhine anthropoids.

The first catarrhine primates evolved between fifty million and thirty-three million years ago. Most primate fossils from this period have been found in a region of northern Egypt known as Al Fayyum (or the Fayum). A primate group known as Propliopithecus, one lineage of which is sometimes called Aegyptopithecus, had primitive catarrhine features, that is, it had many of the basic features that Old World monkeys, apes, and humans share today. Scientists believe, therefore, that Propliopithecus resembles the common ancestor of all later Old World monkeys and apes. Thus, Propliopithecus may also be considered an ancestor or a close relative of an ancestor of humans. The first apes evolved during the Miocene Epoch (24 million to five million years ago). Among the oldest known hominoids is a group of primates known by its genus name, Proconsul. Species of Proconsul had features that suggest a close link to the common ancestor of apes and humans, for example, the lack of a tail. The species Proconsul heseloni lived in the trees of dense forests in eastern Africa about twenty million years ago. An agile climber, it had the flexible backbone and narrow chest characteristic of monkeys, but also a wide range of movement in the hip and thumb, traits characteristic of apes and humans.

Early in their evolution, the large apes underwent several radiations, periods when new and diverse species branched off from common ancestors. Following Proconsul, the ape genus Afropithecus evolved about eighteen million years ago in Arabia and Africa and diversified into several species. Soon afterward, three other ape genera evolved: Griphopithecus of western Asia about 16.5 million years ago, the earliest ape to have spread from Africa; Kenyapithecus of Africa about fifteen million years ago; and Dryopithecus of Europe about twelve million years ago. Scientists have not yet determined which of these groups of apes may have given rise to the common ancestor of modern African apes and humans.

Scientists do not all agree about the appropriate classification of hominoids. They group the living hominoids into either two or three families: Hylobatidae, Hominidae, and sometimes Pongidae. Hylobatidae consists of the small or so-called lesser apes of Southeast Asia, commonly known as gibbons and siamangs. The Hominidae (hominids) includes humans and, according to some scientists, the great apes. For those who include only humans among the Hominidae, the great apes, including the orangutans of Southeast Asia, belong to the family Pongidae.

In the past only humans were considered to belong to the family Hominidae, and the term hominid referred only to species of humans. Today, however, genetic studies support placing all of the great apes and humans together in this family and the placing of African apes-chimpanzees and gorillas-together with humans at an even lower level, or subfamily.

According to this reasoning, the evolutionary branch of Asian apes leading to orangutans, which separated from the other hominid branches by about thirteen million years ago, belongs to the subfamily Ponginae. The ancestral and living representatives of the African ape and human branches together belong to the subfamily Homininae (sometimes called Hominines). Lastly, the line of early and modern humans belongs to the tribe (classificatory level above genus) Hominini, or hominins.

This order of classification corresponds with the genetic relationships between ape and human species. It groups humans and the African apes together at the same level at which scientists group together, for example, all types of foxes, all buffalo, or all flying squirrels. Within each of these groups, the species are very closely related. However, in the classification of apes and humans the similarity between the names hominoid, hominid, hominine, and hominin can be confusing. In this article the term early human refers to all species of the human family tree since the divergence from a common ancestor with the African apes. Popular writing often still uses the term hominid to mean the same thing.

About 98.5 percent of the genes in people and chimpanzees are identical, making chimps the closest living biological relatives of humans. This does not mean that humans evolved from chimpanzees, but it does indicate that both species evolved from a common ape ancestor. Orangutans, the great apes of Southeast Asia, differ much more from humans genetically, indicating a more distant evolutionary relationship.

Modern humans have a number of physical characteristics reflective of an ape ancestry. For instance, people have shoulders with a wide range of movement and fingers capable of strong grasping. In apes, these characteristics are highly developed as adaptations for brachiation, swinging from branch to branch in trees. Although humans do not brachiate, the general anatomy from that earlier adaptation remains. Both people and apes also have larger brains and greater cognitive abilities than do most other mammals.

Human social life, too, shares similarities with that of African apes and other primates-such as baboons and rhesus monkeys-that live in large and complex social groups. Group behaviour among chimpanzees, in particular, strongly resembles that of humans. For instance, chimps form long-lasting attachments with each other; participate in social bonding activities, such as grooming, feeding, and hunting; and form strategic coalitions with each other in order to increase their status and power. Early humans also probably had this kind of elaborate social life.

However, modern humans fundamentally differ from apes in many significant ways. For example, as intelligent as apes are, people's brains are much larger and more complex, and people have a unique intellectual capacity and elaborate forms of culture and communication. In addition, only people habitually walk upright, can precisely manipulate very small objects, and have a throat structure that makes speech possible.

By around six million years ago in Africa, an apelike species had evolved with two important traits that distinguished it from apes: (1) small canine, or eye, teeth (teeth next to the four incisors, or front teeth) and (2) bipedalism, that is, walking on two legs as the primary form of locomotion. Scientists refer to these earliest human species as australopithecines, or Australopiths for short. The earliest Australopiths species known today belong to three genera: Sahelanthropus, Orrorin, and Ardipithecus. Other species belong to the genus Australopithecus and, by some classifications, Paranthropus. The name australopithecine translates literally as ‘southern ape,' in reference to South Africa, where the first known Australopiths fossils were found.

The Great Rift Valley, a region in eastern Africa in which past movements in Earth's crust have exposed ancient deposits of fossils, has become famous for its Australopiths finds. Countries in which scientists have found Australopiths fossils include Ethiopia, Tanzania, Kenya, South Africa, and Chad. Thus, Australopiths ranged widely over the African continent.

Fossils from several different early Australopiths species that lived between four million and two million years ago clearly show a variety of adaptations that mark the transition from ape to human. The very early period of this transition, before four million years ago, remains poorly documented in the fossil record, but those fossils that do exist show the most primitive combinations of ape and human features.

Fossils reveal much about the physical build and activities of early Australopiths, but not everything about outward physical features such as the colour and texture of skin and hair, or about certain behaviours, such as methods of obtaining food or patterns of social interaction. For these reasons, scientists study the living great apes, particularly the African apes, to better understand how early Australopiths might have looked and behaved and how the transition from ape to human might have occurred. For example, Australopiths probably resembled the great apes in characteristics such as the shape of the face and the amount of hair on the body. Australopiths also had brains roughly equal in size to those of the great apes, so they probably had apelike mental abilities. Their social life probably resembled that of chimpanzees.

Most of the distinctly human physical qualities in Australopiths related to their bipedal stance. Before Australopiths, no mammal had ever evolved an anatomy for habitual upright walking. Australopiths also had small canine teeth, as compared with long canines found in almost all other catarrhine primates.

Other characteristics of Australopiths reflected their ape ancestry. They had a low cranium behind a projecting face, and a brain size of 390 to 550 cu. cm. (24 to 34 cu. in.), in the range of an ape's brain. The body weight of Australopiths, as estimated from their bones, ranged from 27 to 49 kg. (60 to 108 lb.), and they stood 1.1 to 1.5 m. (3.5 to 5 ft.) tall. Their weight and height compare closely to those of chimpanzees (chimp height measured standing). Some Australopiths species had a large degree of sexual dimorphism (males were much larger than females), a trait also found in gorillas, orangutans, and other primates.
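The metric-imperial parentheticals used throughout this article follow from standard conversion factors. A minimal sketch of the arithmetic, using the Australopiths figures quoted above (the printed values may round slightly differently from the text's rounded figures):

```python
# Conversion factors behind the parenthetical figures in the text.
CU_IN_PER_CU_CM = 1 / 16.387   # cubic inches per cubic centimetre
LB_PER_KG = 2.2046             # pounds per kilogram
FT_PER_M = 3.2808              # feet per metre

for cc in (390, 550):          # Australopiths cranial capacity, cu cm
    print(f"{cc} cu cm = {cc * CU_IN_PER_CU_CM:.0f} cu in")
for kg in (27, 49):            # estimated body weight, kg
    print(f"{kg} kg = {kg * LB_PER_KG:.0f} lb")
for m in (1.1, 1.5):           # estimated stature, m
    print(f"{m} m = {m * FT_PER_M:.1f} ft")
```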

Australopiths also had curved fingers and long thumbs with a wide range of movement. In comparison, the fingers of apes are longer, more powerful, and more curved, making them extremely well adapted for hanging and swinging from branches. Apes also have very short thumbs, which limits their ability to manipulate small objects. Paleoanthropologists speculate as to whether the long and dexterous thumbs of Australopiths allowed them to use tools more efficiently than do apes.

The anatomy of Australopiths shows a number of adaptations for bipedalism, in both the upper and lower body. Adaptations in the lower body included the following: The Australopiths ilium, or pelvic bone, which rises above the hip joint, was much shorter and broader than it is in apes. This shape enabled the hip muscles to steady the body during each step. The Australopiths pelvis also had a bowl-like shape, which supported the internal organs in an upright stance. The upper legs angled inward from the hip joints, which positioned the knees better to support the body during upright walking. The legs of apes, on the other hand, are positioned almost straight down from the hip, so that when an ape walks upright for a short distance, its body sways from side to side. Australopiths also had shorter and less flexible toes than do apes. The toes worked as rigid levers for pushing off the ground during each bipedal step.

Other adaptations occurred above the pelvis. The Australopiths spine had an S-shaped curve, which shortened the overall length of the torso and gave it rigidity and balance when standing. By contrast, apes have a straight spine. The Australopiths skull also had an important adaptation related to bipedalism. The opening at the bottom of the skull through which the spinal cord attaches to the brain, called the foramen magnum, was positioned more forward than it is in apes. This position set the head in balance over the upright spine.

Australopiths clearly walked upright on the ground, but paleoanthropologists debate whether the earliest humans also spent a significant amount of time in the trees. Certain physical features indicate that they spent at least some of their time climbing in trees. Such features included their curved and elongated fingers and elongated arms. However, their fingers, unlike those of apes, may not have been long enough to allow them to brachiate through the treetops. Study of fossil wrist bones suggests that early Australopiths had the ability to lock their wrists, preventing backward bending at the wrist when the body weight was placed on the knuckles of the hand. This could mean that the earliest bipeds had an ancestor that walked on its knuckles, as African apes do.

Compared with apes, humans have very small canine teeth. Apes-particularly males-have thick, projecting, sharp canines that they use for displays of aggression and as weapons to defend themselves. The oldest known bipeds, who lived at least six million years ago, still had large canines by human standards, though not as large as in apes. By four million years ago Australopiths had developed the human characteristic of having smaller, flatter canines. Canine reduction might have related to an increase in social cooperation between humans and an accompanying decrease in the need for males to make aggressive displays.

The Australopiths can be divided into an early group of species, known as gracile Australopiths, which arose before three million years ago; and a later group, known as robust Australopiths, which evolved after three million years ago. The gracile Australopiths, of which several species evolved between 4.5 million and three million years ago, generally had smaller teeth and jaws. The later-evolving robusts had larger faces with large jaws and molars (cheek teeth). These traits indicate powerful and prolonged chewing of food, and analyses of wear on the chewing surfaces of robust Australopiths molar teeth support this idea. Some fossils of early Australopiths have features resembling those of the later species, suggesting that the robusts evolved from one or more gracile ancestors.

Paleoanthropologists recognize at least eight species of early Australopiths. These include the three earliest established species, which belong to the genera Sahelanthropus, Orrorin, and Ardipithecus, a species of the genus Kenyanthropus, and four species of the genus Australopithecus.

The oldest known Australopiths species is Sahelanthropus tchadensis. Fossils of this species were first discovered in 2001 in northern Chad, Central Africa, by a research team led by French paleontologist Michel Brunet. The researchers estimated the fossils to be between seven million and six million years old. One of the fossils is a fractured yet nearly complete cranium that shows a combination of apelike and humanlike features. Apelike features include small brain size, an elongated brain case, and areas of bone where strong neck muscles would have attached. Humanlike features include small, flat canine teeth, a short middle part of the face, and a massive brow ridge (a bony, protruding ridge above the eyes) similar to that of later human fossils. The opening where the spinal cord attaches to the brain is tucked under the brain case, which suggests that the head was balanced on an upright body. It is not certain that Sahelanthropus walked bipedally, however, because bones from the rest of its skeleton have yet to be discovered. Nonetheless, its age and humanlike characteristics suggest that the human and African ape lineages had divided from one another by at least six million years ago.

In addition to reigniting debate about human origins, the discovery of Sahelanthropus in Chad significantly expanded the known geographic range of the earliest humans. The Great Rift Valley and South Africa, from which almost all other discoveries of early human fossils came, are apparently not the only regions of the continent that preserve the oldest clues of human evolution.

Orrorin tugenensis lived about six million years ago. This species was discovered in 2000 by a research team led by French paleontologist Brigitte Senut and French geologist Martin Pickford in the Tugen Hills region of central Kenya. The researchers found more than a dozen early human fossils dating between 6.2 million and six million years old. Among the finds were two thighbones that possess a groove indicative of an upright stance and bipedal walking. Although the finds are still being studied, the researchers consider these thighbones to be the oldest evidence of habitual two-legged walking. Fossilized bones from other parts of the skeleton show apelike features, including long, curved finger bones useful for strong grasping and movement through trees, and apelike canine and premolar teeth. Because of this distinctive combination of ape and human traits, the researchers gave a new genus and species name to these fossils, Orrorin tugenensis, which in the local language means ‘original man in the Tugen region.' The age of these fossils suggests that the divergence of humans from our common ancestor with chimpanzees occurred before six million years ago.

In 1994 an Ethiopian member of a research team led by American paleoanthropologist Tim White discovered human fossils estimated to be about 4.4 million years old. White and his colleagues gave their discovery the name Ardipithecus ramidus. Ramid means ‘root' in the Afar language of Ethiopia and refers to the closeness of this new species to the roots of humanity. At the time of this discovery, the genus Australopithecus was scientifically well established. White devised the genus name Ardipithecus to distinguish this new species from other Australopiths because its fossils had a very ancient combination of apelike and humanlike traits. More recent finds indicate that this species may have lived as early as 5.8 million to 5.2 million years ago.

The teeth of Ardipithecus ramidus had a thin outer layer of enamel-a trait also seen in the African apes but not in other Australopiths species or older fossil apes. This trait suggests a close relationship with an ancestor of the African apes. In addition, the skeleton shows strong similarities to that of a chimpanzee but has slightly reduced canine teeth and adaptations for Bipedalism.

In 1965 a research team from Harvard University discovered a single arm bone of an early human at the site of Kanapoi in northern Kenya. The researchers estimated this bone to be four million years old, but could not identify the species to which it belonged or return at the time to look for related fossils. It was not until 1994 that a research team, led by British-born Kenyan paleoanthropologist Meave Leakey, found numerous teeth and fragments of bone at the site that could be linked to the previously discovered fossil. Leakey and her colleagues determined that the fossils belonged to a very primitive species of Australopiths, which was given the name Australopithecus anamensis. Researchers have since found other A. anamensis fossils at nearby sites, dating between about 4.2 million and 3.9 million years old. The skull of this species appears apelike, while its enlarged tibia (lower leg bone) indicates that it supported its full body weight on one leg at a time, as in regular bipedal walking.

Australopithecus anamensis was quite similar to another, much better-known species, A. afarensis, a gracile Australopiths that thrived in eastern Africa between about 3.9 million and three million years ago. The most celebrated fossil of this species, known as Lucy, is a partial skeleton of a female discovered by American paleoanthropologist Donald Johanson in 1974 at Hadar, Ethiopia. Lucy lived 3.2 million years ago. Scientists have identified several hundred fossils of A. afarensis from Hadar, including a collection representing at least thirteen individuals of both sexes and various ages, all from a single site.

Researchers working in northern Tanzania have also found fossilized bones of A. afarensis at Laetoli. This site, dated at 3.6 million years old, is best known for its spectacular trails of bipedal human footprints. Preserved in hardened volcanic ash, these footprints were discovered in 1978 by a research team led by British paleoanthropologist Mary Leakey. They provide irrefutable evidence that Australopiths regularly walked bipedally.

Paleoanthropologists have debated interpretations of the characteristics of A. afarensis and its place in the human family tree. One controversy centres on the Laetoli footprints, which some scientists believe show that the foot anatomy and gait of A. afarensis did not exactly match those of modern humans. This observation may suggest that early Australopiths did not live primarily on the ground, or at least spent a significant amount of time in the trees. The skeleton of Lucy also suggests that A. afarensis had longer, more powerful arms than most later human species, suggesting that this species was adept at climbing trees.

Another controversy arises from the claim that A. afarensis was the common ancestor of both later Australopiths and the modern human genus, Homo. While this idea remains a strong possibility, the similarity between this and another Australopiths species, one from southern Africa, named Australopithecus africanus, makes it difficult to decide which of the two species gave rise to the genus Homo.

Australopithecus africanus thrived in the Transvaal region of what is now South Africa between about 3.3 million and 2.5 million years ago. Australian-born anatomist Raymond Dart discovered this species, the first known Australopiths, in 1924 at Taung, South Africa. The specimen, that of a young child, came to be known as the Taung Child. For decades after this discovery, almost no one in the scientific community believed Dart's claim that the skull came from an ancestral human. In the late 1930's teams led by Scottish-born South African paleontologist Robert Broom unearthed many more A. africanus skulls and other bones from the Transvaal site of Sterkfontein.

A. africanus generally had a more globular braincase and less primitive-looking face and teeth than did A. afarensis. Thus, some scientists consider the southern species of early Australopiths to be a likely ancestor of the genus Homo. According to other scientists, however, certain heavily built facial and cranial features of A. africanus from Sterkfontein identify it as an ancestor of the robust Australopiths that lived later in the same region. In 1998 a research team led by South African paleoanthropologist Ronald Clarke discovered an almost complete early Australopiths skeleton at Sterkfontein. This important find may resolve some of the questions about where A. africanus fits in the story of human evolution.

Working in the Lake Turkana region of northern Kenya, a research team led by paleontologist Meave Leakey uncovered in 1999 a cranium and other bone remains of an early human that showed a mixture of features unseen in previous discoveries of early human fossils. The remains were estimated to be 3.5 million years old, and the cranium's small brain and earhole were similar to those of the earliest humans. Its cheekbone, however, joined the rest of the face in a forward position, and the region beneath the nose opening was flat. These are traits found in later human fossils from around two million years ago, typically those classified in the genus Homo. Noting this unusual combination of traits, researchers named a new genus and species, Kenyanthropus platyops, or ‘flat-faced human from Kenya.' Before this discovery, it seemed that only a single early human species, Australopithecus afarensis, lived in East Africa between four million and three million years ago. Yet Kenyanthropus suggests that a diversity of species, including a more humanlike lineage than A. afarensis, lived in this time, just as in most other eras in human prehistory.

The human fossil record is poorly known between three million and two million years ago, which makes recent finds from the site of Bouri, Ethiopia, particularly important. From 1996 to 1998, a research team led by Ethiopian paleontologist Berhane Asfaw and American paleontologist Tim White found the skull and other skeletal remains of an early human specimen about 2.5 million years old. The researchers named it Australopithecus garhi; the word garhi means ‘surprise' in the Afar language. The specimen is unique in having large incisors and molars in combination with an elongated forearm and thighbone. Its powerful arm bones suggest a tree-living ancestry, but its longer legs show the ability to walk upright on the ground. Fossils of A. garhi are associated with some of the oldest known stone tools, along with animal bones that were cut and cracked with tools. It is possible, then, that this species was among the first to make the transition to stone toolmaking and to eating meat and bone marrow from large animals.

By 2.7 million years ago the later, robust Australopiths had evolved. These species had what scientists refer to as megadont cheek teeth: wide molars and premolars coated with thick enamel. Their incisors, by contrast, were small. The robusts also had an expanded, flattened, and more vertical face than did gracile Australopiths. This face shape helped to absorb the stresses of strong chewing. On the top of the head, robust Australopiths had a sagittal crest (ridge of bone along the top of the skull from front to back) to which thick jaw muscles attached. The zygomatic arches (which extend back from the cheekbones to the ears) curved out wide from the sides of the face and cranium, forming very large openings for the massive chewing muscles to pass through near their attachment to the lower jaw. Together, these traits indicate that the robust Australopiths chewed their food powerfully and for long periods.

Other ancient animal species that specialized in eating plants, such as some types of wild pigs, had similar adaptations in their facial, dental, and cranial anatomy. Thus, scientists think that the robust Australopiths had a diet consisting partly of tough, fibrous plant foods, such as seed pods and underground tubers. Analyses of microscopic wear on the teeth of some robust Australopiths specimens appear to support the idea of a vegetarian diet, although chemical studies of fossils suggest that the southern robust species may also have eaten meat.

Scientists originally used the word robust to refer to the late Australopiths out of the belief that they had much larger bodies than did the early, gracile Australopiths. However, further research has revealed that the robust Australopiths stood about the same height and weighed roughly the same amount as Australopithecus afarensis and A. africanus.

The earliest known robust species, Australopithecus aethiopicus, lived in eastern Africa by 2.7 million years ago. In 1985 at West Turkana, Kenya, American paleoanthropologist Alan Walker discovered a 2.5-million-year-old fossil skull that helped to define this species. It became known as the ‘black skull' because of the colour it had absorbed from minerals in the ground. The skull had a tall sagittal crest toward the back of its cranium and a face that projected far outward from the forehead. A. aethiopicus shared some primitive features with A. afarensis, that is, features that originated in the earlier East African Australopiths. This may suggest that A. aethiopicus evolved from A. afarensis.

Australopithecus boisei, the other well-known East African robust Australopiths, lived over a long period of time, between about 2.3 million and 1.2 million years ago. In 1959 Mary Leakey discovered the original fossil of this species, a nearly complete skull, at the site of Olduvai Gorge in Tanzania. Kenyan-born paleoanthropologist Louis Leakey, husband of Mary, originally named the new species Zinjanthropus boisei (Zinjanthropus translates as ‘East African man'). This skull, dating from 1.8 million years ago, has the most specialized features of all the robust species. It has a massive, wide and dished-in face capable of withstanding extreme chewing forces, and molars four times the size of those in modern humans. Since the discovery of Zinjanthropus, now recognized as an Australopiths, scientists have found great numbers of A. boisei fossils in Tanzania, Kenya, and Ethiopia.

The southern robust species, called Australopithecus robustus, lived between about 1.8 million and 1.3 million years ago in the Transvaal, the same region that was home to A. africanus. In 1938 Robert Broom, who had found many A. africanus fossils, bought a fossil jaw and molar that looked distinctly different from those in A. africanus. After finding the site of Kromdraai, from which the fossil had come, Broom collected many more bones and teeth that together convinced him to name a new species, which he called Paranthropus robustus (Paranthropus meaning ‘beside man'). Later scientists dated these fossils at about 1.5 million years old. In the late 1940's and 1950's Broom discovered many more fossils of this species at the Transvaal site of Swartkrans.

Paleoanthropologists believe that the eastern robust species, A. aethiopicus and A. boisei, may have evolved from an early Australopiths of the same region, perhaps A. afarensis. According to this view, A. africanus gave rise only to the southern species A. robustus. Scientists refer to such a case, in which similar characteristics evolve independently in different places or at different times, as parallel evolution. If parallel evolution occurred in Australopiths, the robust species would make up two separate branches of the human family tree.

The last robust Australopiths died out about 1.2 million years ago. At about this time, climate patterns around the world entered a period of fluctuation, and these changes may have reduced the food supply on which robusts depended. Interaction with larger-brained members of the genus Homo, such as Homo erectus, may also have contributed to the decline of late Australopiths, although no compelling evidence exists of such direct contact. Competition with several other species of plant-eating monkeys and pigs, which thrived in Africa at the time, may have been an even more important factor. Nevertheless, the reason that the robust Australopiths became extinct after flourishing for such a long time is not yet known for sure.

Scientists have several ideas about why Australopiths first split off from the apes, initiating the course of human evolution. Virtually all hypotheses suggest that environmental change was an important factor, specifically in influencing the evolution of bipedalism. Some well-established ideas about why humans first evolved include (1) the savanna hypothesis, (2) the woodland-mosaic hypothesis, and (3) the variability hypothesis.

The global climate cooled and became drier between eight million and five million years ago, near the end of the Miocene Epoch. According to the savanna hypothesis, this climate change broke up and reduced the area of African forests. As the forests shrank, an ape population in eastern Africa became separated from other populations of apes in the more heavily forested areas of western Africa. The eastern population had to adapt to its drier environment, which contained larger areas of grassy savanna.

The expansion of dry terrain favoured the evolution of terrestrial living, and made it more difficult to survive by living in trees. Terrestrial apes might have formed large social groups in order to improve their ability to find and collect food and to fend off predators-activities that also may have required the ability to communicate well. The challenges of savanna life might also have promoted the rise of tool use, for purposes such as scavenging meat from the kills of predators. These important evolutionary changes would have depended on increased mental abilities and, therefore, may have correlated with the development of larger brains in early humans.

Critics of the savanna hypothesis argue against it on several grounds, but particularly for two reasons. First, discoveries by a French scientific team of Australopiths fossils in Chad, in Central Africa, suggest that the environments of East Africa may not have been fully separated from those farther west. Second, recent research suggests that open savannas were not prominent in Africa until sometime after two million years ago.

Criticism of the savanna hypothesis has spawned alternative ideas about early human evolution. The woodland-mosaic hypothesis proposes that the early Australopiths evolved in patchily wooded areas, a mosaic of woodland and grassland, that offered opportunities for feeding both on the ground and in the trees, and that ground feeding favoured bipedalism.

The variability hypothesis suggests that early Australopiths experienced many changes in environment and ended up living in a range of habitats, including forests, open-canopy woodlands, and savannas. In response, their populations became adapted to a variety of surroundings. Scientists have found that this range of habitats existed at the time when the early Australopiths evolved. So the development of new anatomical characteristics, particularly bipedalism, combined with an ability to climb trees, may have given early humans the versatility to live in a variety of habitats.

Bipedalism in early humans would have enabled them to travel efficiently over long distances, giving them an advantage over quadrupedal apes in moving across barren open terrain between groves of trees. In addition, the earliest humans continued to have the advantage from their ape ancestry of being able to escape into the trees to avoid predators. The benefits of both bipedalism and agility in the trees may explain the unique anatomy of Australopiths. Their long, powerful arms and curved fingers probably made them good climbers, while their pelvis and lower limb structure were reshaped for upright walking.

People belong to the genus Homo, which first evolved at least 2.3 million to 2.5 million years ago. The earliest members of this genus differed from the Australopiths in at least one important respect: they had larger brains than did their predecessors.

The evolution of the modern human genus can be divided roughly into three periods: early, middle, and late. Species of early Homo resembled gracile Australopiths in many ways. Some early Homo species lived until possibly 1.6 million years ago. The period of middle Homo began perhaps between two million and 1.8 million years ago, overlapping with the end of early Homo. Species of middle Homo evolved an anatomy much more similar to that of modern humans but had comparatively small brains. The transition from middle to late Homo probably occurred sometime around 200,000 years ago. Species of late Homo evolved large and complex brains and eventually language. Culture also became an increasingly important part of human life during the most recent period of evolution.

The origin of the genus Homo has long intrigued paleoanthropologists and prompted much debate. One of several known species of Australopiths, or one not yet discovered, could have given rise to the first species of Homo. Scientists also do not know exactly what factors favoured the evolution of a larger and more complex brain-the defining physical trait of modern humans.

Louis Leakey originally argued that the origin of Homo related directly to the development of toolmaking, specifically the making of stone tools. Toolmaking requires certain mental skills and fine hand manipulation that may exist only in members of our own genus. The name Homo habilis (meaning ‘handy man') refers directly to the making and use of tools.

However, several species of Australopiths lived at the same time as early Homo, making it unclear which species produced the earliest stone tools. Recent studies of Australopiths hand bones have suggested that at least one of the robust species, Australopithecus robustus, could have made tools. In addition, during the 1960's and 1970's researchers first observed that some nonhuman primates, such as chimpanzees, make and use tools, suggesting that Australopiths and the apes that preceded them probably also made some kinds of tools.

According to some scientists, however, early Homo probably did make the first stone tools. The ability to cut and pound foods would have been most useful to these smaller-toothed humans, whereas the robust Australopiths could chew even very tough foods. Furthermore, early humans continued to make stone tools similar to the oldest known kinds for a time long after the gracile Australopiths died out. Some scientists think that a period of environmental cooling and drying in Africa set the stage for the evolution of Homo. According to this idea, many types of animals suited to the challenges of a drier environment originated during the period between about 2.8 million and 2.4 million years ago, including the first species of Homo.

A toolmaking human might have had an advantage in obtaining alternative food sources as vegetation became sparse in increasingly dry environments. The new foods might have included underground roots and tubers, as well as meat obtained through scavenging or hunting. However, some scientists disagree with this idea, arguing that the period during which Homo evolved fluctuated between drier and wetter conditions, rather than just becoming dry. In this case, the making and use of stone tools and an expansion of the diet in early Homo, as well as an increase in brain size, may all have been adaptations to unpredictable and fluctuating environments. In either case, more scientific documentation is necessary to strongly support or refute the idea that early Homo arose as part of a larger trend of rapid species extinction and the evolution of many new species during a period of environmental change.

Paleoanthropologists generally recognize two species of early Homo: Homo habilis and H. rudolfensis (although other species may also have existed). The record is unclear because most of the early fossils that scientists have identified as species of Homo, rather than as the robust Australopiths that lived at the same time, occur as isolated fragments. In many places, only teeth, jawbones, and pieces of skull, without any other skeletal remains, suggest that new species of smaller-toothed humans had evolved as early as 2.5 million years ago. Scientists cannot always tell whether these fossils belong to late-surviving gracile Australopiths or early representatives of Homo. The two groups resemble each other because Homo likely descended directly from a species of gracile Australopiths.

In the early 1960's, at Olduvai Gorge, Tanzania, Louis Leakey, British primate researcher John Napier, and South African paleoanthropologist Philip Tobias discovered a group of early human fossils that showed a cranial capacity of 590 to 690 cu. cm. (36 to 42 cu. in.). Based on this brain size, which was well above the range of known Australopiths, the scientists argued that a new genus, Homo, and a new species, Homo habilis, should be recognized. Other scientists questioned whether this amount of brain enlargement was sufficient for defining a new genus, and even whether H. habilis was different from Australopithecus africanus, as the teeth of the two species look similar. However, scientists now widely accept both the genus and species names designated by the Olduvai team.

H. habilis lived in eastern and possibly southern Africa between about 1.9 million and 1.6 million years ago, and maybe as early as 2.4 million years ago. Although the fossils of this species moderately resemble those of Australopiths, H. habilis had smaller and narrower molar teeth, premolar teeth, and jaws than did its predecessors and contemporary robust Australopiths.

A fragmented skeleton of a female from Olduvai shows that she stood only about 1 m. (3.3 ft.) tall, and the ratio of the length of her arms to her legs was greater than that in the Australopiths Lucy. At least in the case of this individual, therefore, H. habilis had very apelike body proportions. However, H. habilis had more modern-looking feet and hands capable of producing tools. Some of the earliest stone tools from Olduvai have been found with H. habilis fossils, suggesting that this species made and used the tools at this site.

Scientists began to notice a high degree of variability in body size as they discovered more early Homo fossils. This could have suggested that H. habilis had a large amount of sexual dimorphism. For instance, the Olduvai female skeleton was dwarfed in comparison with other fossils-exemplified by a sizable early Homo cranium from East Turkana in northern Kenya. However, the differences in size exceeded those expected between males and females of the same species, and this finding later helped convince scientists that another species of early Homo had lived in eastern Africa.

This second species of early Homo was given the name Homo rudolfensis, after Lake Rudolf (now Lake Turkana). The best-known fossils of H. rudolfensis come from the area surrounding this lake and date from about 1.9 million years ago. Paleoanthropologists have not determined the entire time range during which H. rudolfensis may have lived.

This species had a larger face and body than did H. habilis. The cranial capacity of H. rudolfensis averaged about 750 cu. cm. (46 cu. in.). Scientists need more evidence to know whether the brain of H. rudolfensis was larger in relation to its body size than was that of H. habilis. A larger brain-to-body-size ratio can suggest increased mental abilities. H. rudolfensis also had large teeth, approaching the size of those in robust Australopiths. The discovery of even a partial fossil skeleton would reveal whether this larger form of early Homo had apelike or more modern body proportions. Scientists have found several modern-looking thighbones that date from between two million and 1.8 million years ago and may belong to H. rudolfensis. These bones suggest a body size of 1.5 m. (5 ft.) and 52 kg. (114 lb.).
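One common way to express brain size relative to body size is an encephalization quotient (EQ), the ratio of observed brain mass to the mass expected for a typical mammal of the same body size. The sketch below uses Jerison's classic formula as an illustration only; treating 1 cu. cm. of cranial capacity as roughly 1 g of brain mass is a simplifying assumption, and the input figures are the rough estimates quoted above.

```python
# Encephalization quotient (EQ), after Jerison:
#     EQ = brain_g / (0.12 * body_g ** (2/3))
# i.e., observed brain mass divided by the brain mass expected for a
# typical mammal of that body mass (masses in grams).

def encephalization_quotient(brain_cc, body_kg):
    brain_g = brain_cc * 1.0            # assume ~1 g of brain per cu cm
    body_g = body_kg * 1000.0
    return brain_g / (0.12 * body_g ** (2 / 3))

# Rough figures from the text for H. rudolfensis, plus typical modern
# human values (~1,350 cu cm, ~65 kg) for comparison.
print(f"H. rudolfensis: EQ ~ {encephalization_quotient(750, 52):.1f}")
print(f"Modern human:   EQ ~ {encephalization_quotient(1350, 65):.1f}")
```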

Paleoanthropologists generally recognize three species of middle Homo: H. ergaster, H. erectus, and H. heidelbergensis. The skulls and teeth of early African populations of middle Homo differed subtly from those of later H. erectus populations from China and the island of Java in Indonesia. H. ergaster makes a better candidate for an ancestor of the modern human line because Asian H. erectus has some specialized features not seen in some later humans, including our own species. H. heidelbergensis has similarities to both H. erectus and the later species H. neanderthalensis, and may have been a transitional species between middle Homo and the line to which modern humans belong.

Homo ergaster probably first evolved in Africa around two million years ago. This species had a rounded cranium with a brain size of between 700 and 850 cu. cm. (43 to 52 cu. in.), a prominent brow ridge, small teeth, and many other features that it shared with the later H. erectus. Many paleoanthropologists consider H. ergaster a good candidate for an ancestor of modern humans because it had several modern skull features, including thin cranial bones. Most H. ergaster fossils come from the time range of 1.8 million to 1.5 million years ago.

The most important fossil of this species yet found is a nearly complete skeleton of a young male from West Turkana, Kenya, which dates from about 1.55 million years ago. Scientists determined the sex of the skeleton from the shape of its pelvis. They also found out from patterns of tooth eruption and bone growth that the boy had died when he was between nine and twelve years old.

The Turkana boy, as the skeleton is known, had elongated leg bones and arm, leg, and trunk proportions that essentially match those of modern humans, in sharp contrast with the apelike proportions of H. habilis and Australopithecus afarensis. He appears to have been quite tall and slender. Scientists estimate that, had he grown into adulthood, the boy would have reached a height of 1.8 m. (6 ft.) and a weight of 68 kg. (150 lb.). The anatomy of the Turkana boy shows that H. ergaster was particularly well adapted for walking and perhaps for running long distances in a hot environment (a tall and slender body dissipates heat well) but not for any significant amount of tree climbing.

The oldest humanlike fossils outside of Africa have also been classified as H. ergaster, dated around 1.75 million years old. These finds, from the Dmanisi site in the southern Caucasus Mountains of Georgia, consist of several crania, jaws, and other fossilized bones. Some of these are strikingly like East African H. ergaster, but others are smaller or larger than H. ergaster, suggesting a high degree of variation within a single population.

H. ergaster, H. rudolfensis, and H. habilis, in addition to possibly two robust Australopiths, all might have coexisted in Africa around 1.9 million years ago. This finding goes against a traditional paleoanthropological view that human evolution consisted of a single line that evolved progressively over time: an Australopiths species followed by early Homo, then middle Homo, and finally H. sapiens. It appears that periods of species diversity and extinction have been common during human evolution, and that modern H. sapiens has the rare distinction of being the only living human species today.

Although H. ergaster appears to have coexisted with several other human species, they probably did not interbreed. Mating rarely succeeds between two species with significant skeletal differences, such as H. ergaster and H. habilis. Many paleoanthropologists now believe that H. ergaster descended from an earlier population of Homo-perhaps one of the two known species of early Homo-and that the modern human line descended from H. ergaster.

Paleoanthropologists now know that humans first evolved in Africa and lived only on that continent for a few million years. The earliest human species known to have spread in large numbers beyond the African continent was first discovered in Southeast Asia. In 1891 Dutch physician Eugene Dubois found the cranium of an early human on the Indonesian island of Java. He named this early human Pithecanthropus erectus, or ‘erect ape-man.' Today paleoanthropologists refer to this species as Homo erectus.

H. erectus appears to have evolved in Africa from earlier populations of H. ergaster, and then spread to Asia sometime between 1.8 million and 1.5 million years ago. The youngest known fossils of this species, from the Solo River in Java, may date from about 50,000 years ago (although that dating is controversial). So H. erectus was a very successful species: widespread, having lived in Africa and much of Asia, and long-lived, having survived for possibly more than 1.5 million years.

Homo erectus had a low and rounded braincase that was elongated from front to back, a prominent brow ridge, and an adult cranial capacity of 800 to 1,250 cu. cm. (49 to 76 cu. in.), on average twice that of the Australopiths. Its bones, including the cranium, were thicker than those of earlier species. Prominent muscle markings and thick, reinforced areas on the bones of H. erectus indicate that its body could withstand powerful movements and stresses. Although it had much smaller teeth than did the Australopiths, it had a heavy and strong jaw.

In the 1920's and 1930's German anatomist and physical anthropologist Franz Weidenreich excavated the most famous collections of H. erectus fossils from a cave at the site of Zhoukoudian (Chou-k'ou-tien), China, near Beijing (Peking). Scientists dubbed these fossil humans Sinanthropus pekinensis, or Peking Man, but others later reclassified them as H. erectus. The Zhoukoudian cave yielded the fragmentary remains of more than thirty individuals, ranging from about 500,000 to 250,000 years old. These fossils were lost near the outbreak of World War II, but Weidenreich had made excellent casts of his finds. Further studies at the cave site have yielded more H. erectus remains.

Other important fossil sites for this species in China include Lantian, Yuanmou, Yunxian, and Hexian. Researchers have also recovered many tools made by H. erectus in China at sites such as Nihewan and Bose, and other sites of similar age (at least one million to 250,000 years old).

Ever since the discovery of Homo erectus, scientists have debated whether this species was a direct ancestor of later humans, including H. sapiens. The last populations of H. erectus, such as those from the Solo River in Java, may have lived as recently as 50,000 years ago, at the same time as did populations of H. sapiens. Modern humans could not have evolved from these late populations of H. erectus, a much more primitive type of human. However, earlier East Asian populations could have given rise to H. sapiens.

Many paleoanthropologists believe that early humans migrated into Europe by 800,000 years ago, and that these populations were not Homo erectus. A growing number of scientists refer to these early migrants into Europe, who predated both Neanderthals and H. sapiens in the region, as H. heidelbergensis. The species name comes from a 500,000-year-old jaw found near Heidelberg, Germany.

Scientists have found few human fossils in Africa for the period between 1.2 million and 600,000 years ago, during which H. heidelbergensis or its ancestors first migrated into Europe. Populations of H. ergaster (or possibly H. erectus) appear to have lived until at least 800,000 years ago in Africa, and possibly until 500,000 years ago in northern Africa. When these populations disappeared, other massive-boned and larger-brained humans, possibly H. heidelbergensis, appear to have replaced them. Scientists have found fossils of these stockier humans at sites in Bodo, Ethiopia; Saldanha (also known as Elandsfontein), South Africa; Ndutu, Tanzania; and Kabwe, Zambia.

Scientists have come up with at least three different interpretations of these African fossils. Some scientists place the fossils in the species H. heidelbergensis and think that this species gave rise to both the Neanderthals (in Europe) and H. sapiens (in Africa). Others think that the European and African fossils belong to two distinct species, and that the African population, which in this view was not H. heidelbergensis but a separate species, gave rise to H. sapiens. Yet other scientists advocate a long-held view that H. erectus and H. sapiens belong to a single evolving lineage, and that the African fossils belong in the category of archaic H. sapiens (archaic meaning not fully anatomically modern).

The fossil evidence does not clearly favour any of these three interpretations over another. A growing number of fossils from Asia, Africa, and Europe have features that are intermediate between early H. ergaster and H. sapiens. This kind of variation makes it hard to decide how to identify distinct species and to find out which group of fossils represents the most likely ancestor of later humans.

Humans evolved in Africa and lived only on that continent for as long as four million years or more, so scientists wonder what finally triggered the first human migration out of Africa (a movement that coincided with the spread of early human populations throughout the African continent). The answer to this question depends, in part, on knowing exactly when that first migration occurred. Some studies claim that sites in Asia and Europe contain crude stone tools and fossilized fragments of humanlike teeth that date from more than 1.8 million years ago. Although these claims remain unconfirmed, small populations of humans may have entered Asia before 1.8 million years ago, followed by a more substantial spread between 1.6 million and one million years ago. Early humans reached northeastern Asia by around 1.4 million years ago, inhabiting a region close to the perpetually dry deserts of northern China. The first major habitation of central and western Europe, on the other hand, does not appear to have occurred until between one million and 500,000 years ago.

Scientists once thought that advances in stone tools could have enabled early humans such as Homo erectus to move into Asia and Europe, perhaps by helping them to obtain new kinds of food, such as the meat of large mammals. If African human populations had developed tools that allowed them to hunt large game effectively, they would have had a good source of food wherever they went. In this view, humans first migrated into Eurasia on the basis of a unique cultural adaptation.

By 1.5 million years ago, early humans had begun to make new kinds of tools, which scientists call Acheulean. Common Acheulean tools included large hand axes and cleavers. While these new tools might have helped early humans to hunt, the first known Acheulean tools in Africa date from later than the earliest known human presence in Asia. Also, most East Asian sites more than 200,000 years old contain only simply shaped cobble and flake tools. In contrast, Acheulean tools were more finely crafted, larger, and more symmetrical. Thus, the earliest settlers of Eurasia did not have a true Acheulean technology, and advances in toolmaking alone cannot explain the spread out of Africa.

Another possibility is that the early spread of humans to Eurasia was not unique, but part of a wider migration of meat-eating animals, such as lions and hyenas. The human migration out of Africa occurred during the early part of the Pleistocene Epoch, between 1.8 million and 780,000 years ago. Many African carnivores spread to Eurasia during the early Pleistocene, and humans could have moved along with them. In this view, H. erectus was simply one of many meat-eating species to expand into Eurasia from Africa, rather than a uniquely adapted species. Relying on meat as a primary food source might have allowed many meat-eating species, including humans, to move through many different environments without having to quickly learn about unfamiliar and potentially poisonous plants.

However, the migration of humans to eastern Asia may have occurred gradually and through lower latitudes and environments similar to those of Africa. If East African populations of H. erectus moved at only 1.6 km. (1 mi.) every twenty years, they could have reached Southeast Asia in 150,000 years. Over this amount of time, humans could have learned about and begun relying on edible plant foods. Thus, eating meat may not have played a crucial role in the first human migrations to new continents. Careful comparison of animal fossils, stone tools, and early human fossils from Africa, Asia, and Europe will help scientists better determine what factors motivated and allowed humans to venture out of Africa for the first time.
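
The arithmetic behind this estimate is easy to check. The short Python sketch below reproduces the 150,000-year figure; the overland route length of roughly 12,000 km from East Africa to Southeast Asia is an illustrative assumption, not a figure from the text.

    # Back-of-the-envelope check of the migration-rate estimate above.
    # The 12,000 km route length is an assumed, illustrative value.
    KM_PER_STEP = 1.6        # distance advanced per step (km)
    YEARS_PER_STEP = 20      # years per step
    ROUTE_KM = 12000         # assumed East Africa to Southeast Asia distance

    rate_km_per_year = KM_PER_STEP / YEARS_PER_STEP   # 0.08 km per year
    years_needed = ROUTE_KM / rate_km_per_year        # 150,000 years

    print(f"Rate: {rate_km_per_year} km/year")
    print(f"Time to cover {ROUTE_KM} km: {years_needed:,.0f} years")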

The origin of our own species, Homo sapiens, is one of the most hotly debated topics in paleoanthropology. This debate centres on whether or not modern humans have a direct relationship to H. erectus or to the Neanderthals, and to a great extent it concerns the more modern group of humans who evolved within the past 250,000 years. Paleoanthropologists commonly use the term anatomically modern Homo sapiens to distinguish people of today from these similar predecessors.

Traditionally, paleoanthropologists classified as Homo sapiens any fossil human younger than 500,000 years old with a braincase larger than that of H. erectus. Thus, many scientists who believe that modern humans descend from a single line dating back to H. erectus use the name archaic Homo sapiens to refer to a wide variety of fossil humans that predate anatomically modern H. sapiens. The term archaic denotes a set of physical features typical of Neanderthals and other species of late Homo before modern Homo sapiens. These features include a combination of a robust skeleton, a large but low braincase (positioned somewhat behind, rather than over, the face), and a lower jaw lacking a prominent chin. In this sense, Neanderthals are sometimes classified as a subspecies of archaic H. sapiens: H. sapiens neanderthalensis. Other scientists think that the variation in archaic fossils falls into clearly identifiable sets of traits, and that any type of human fossil exhibiting a unique set of traits should have a new species name. According to this view, the Neanderthals belong to their own species, H. neanderthalensis.

In the past, scientists claimed that Neanderthals differed greatly from modern humans. However, the basis for this claim came from a faulty reconstruction of a Neanderthal skeleton that showed it with bent knees and a slouching gait. This reconstruction gave the common but mistaken impression that Neanderthals were dim-witted brutes who lived a crude lifestyle. On the contrary, Neanderthals, like the species that preceded them, walked fully upright without a slouch or bent knees. In addition, their cranial capacity was quite large at about 1,500 cu. cm. (about ninety cu. in.), larger on average than that of modern humans. (The difference probably relates to the greater muscle mass of Neanderthals as compared with modern humans, which usually correlates with a larger brain size.)

Compared with earlier humans, Neanderthals had a high degree of cultural sophistication. They appear to have performed symbolic rituals, such as the burial of their dead. Neanderthal fossils, including a number of fairly complete skeletons, are quite common compared with those of earlier forms of Homo, in part because of the Neanderthal practice of intentional burial. Neanderthals also produced sophisticated types of stone tools known as Mousterian, which involved creating blanks (rough forms) from which several types of tools could be made.

Along with many physical similarities, Neanderthals differed from modern humans in several ways. The typical Neanderthal skull had a low forehead, a large nasal area (suggesting a large nose), a forward-projecting nasal and cheek region, a prominent brow ridge with a bony arch over each eye, a non-projecting chin, and an obvious space behind the third molar (in front of the upward turn of the lower jaw).

Neanderthals were more heavily built and had more robust skeletons than modern humans. Other Neanderthal skeletal features included a bowing of the limb bones in some individuals, broad scapulae (shoulder blades), hip joints turned outward, a long and thin pubic bone, short lower leg and arm bones relative to the upper bones, and large surfaces on the joints of the toes and limb bones. Together, these traits made for a powerful, compact body of short stature: males averaged 1.7 m. (5 ft. 5 in.) tall and 84 kg. (185 lb.), and females averaged 1.5 m. (5 ft.) tall and 80 kg. (176 lb.). The short, stocky build of Neanderthals conserved heat and helped them withstand the extremely cold conditions that prevailed in temperate regions beginning about 70,000 years ago. The last known Neanderthal fossils come from western Europe and date from approximately 36,000 years ago.

At the same time as Neanderthal populations grew in number in Europe and parts of Asia, other populations of nearly modern humans arose in Africa and Asia. Scientists also commonly refer to these fossils, which are distinct from but similar to those of Neanderthals, as archaic. Fossils from the Chinese sites of Dali, Maba, and Xujiayao display the long, low cranium and large face typical of archaic humans, yet they also have features similar to those of modern people in the region. At the cave site of Jebel Irhoud, Morocco, scientists have found fossils with the long skull typical of archaic humans but also the modern traits of a higher forehead and flatter midface. Fossils of humans from East African sites older than 100,000 years, such as Ngaloba in Tanzania and Eliye Springs in Kenya, also seem to show a mixture of archaic and modern traits.

The oldest known fossils that possess skeletal features typical of modern humans date from between 130,000 and 90,000 years ago. Several key features distinguish the skulls of modern humans from those of archaic species. These features include a much smaller brow ridge, if any; a globe-shaped braincase; and a flat or only slightly projecting face of reduced size, located under the front of the braincase. Among all mammals, only humans have a face positioned directly beneath the frontal lobe (forward-most area) of the brain. As a result, modern humans tend to have a higher forehead than did Neanderthals and other archaic humans. The cranial capacity of modern humans ranges from about 1,000 to 2,000 cu. cm. (60 to 120 cu. in.), with the average being about 1,350 cu. cm. (80 cu. in.).

Scientists have found both fragmentary and nearly complete cranial fossils of early anatomically modern Homo sapiens from the sites of Singa, Sudan; Omo, Ethiopia; Klasies River Mouth, South Africa; and Skhul Cave, Israel. Based on these fossils, many scientists conclude that modern H. sapiens had evolved in Africa by 130,000 years ago and began spreading to other parts of the world, initially via a route through the Near East, sometime before 90,000 years ago.

Paleoanthropologists are engaged in an ongoing debate about where modern humans evolved and how they spread around the world. Differences in opinion rest on the question of whether the evolution of modern humans took place in a small region of Africa or over a broad area of Africa and Eurasia. By extension, opinions differ as to whether modern human populations from Africa displaced all existing populations of earlier humans, eventually resulting in their extinction.

Those who think modern humans originated exclusively in Africa and then spread around the world support what is known as the out of Africa hypothesis. Those who think modern humans evolved over a large region of Eurasia and Africa support the so-called multi-regional hypothesis.

Researchers have conducted many genetic studies and carefully assessed fossils to figure out which of these hypotheses agrees more with scientific evidence. The results of this research do not entirely confirm or reject either one. Therefore, some scientists think a compromise between the two hypotheses is the best explanation. The debate between these views has implications for how scientists understand the concept of race in humans. At issue is whether the physical differences among modern humans evolved deep in the past or relatively recently. According to the out of Africa hypothesis, also known as the replacement hypothesis, early populations of modern humans migrated from Africa to other regions and entirely replaced existing populations of archaic humans. The replaced populations would have included the Neanderthals and any surviving groups of Homo erectus. Supporters of this view note that many modern human skeletal traits evolved recently, within the past 200,000 years or so, suggesting a single, common origin. Additionally, the anatomical similarities shared by all modern human populations far outweigh those shared by premodern and modern humans within particular geographic regions. Furthermore, biological research suggests that most new species of organisms, including mammals, arise from small, geographically isolated populations.

According to the multi-regional hypothesis, also known as the continuity hypothesis, the evolution of modern humans began when Homo erectus spread throughout much of Eurasia around one million years ago. Regional populations retained some unique anatomical features for hundreds of thousands of years, but they also mated with populations from neighbouring regions, exchanging heritable traits with each other. This exchange of heritable traits is known as gene flow.

Through gene flow, populations of H. erectus passed on a variety of increasingly modern characteristics, such as increases in brain size, across their geographic range. Gradually this would have resulted in the evolution of more modern-looking humans throughout Africa and Eurasia. The physical differences among people today, then, would result from hundreds of thousands of years of regional evolution. This is the concept of continuity. For instance, modern East Asian populations have some skull features that scientists also see in H. erectus fossils from that region.

Critics of the multi-regional hypothesis claim that it wrongly advocates a scientific belief in race and could be used to encourage racism. Supporters of the theory point out, however, that their position does not imply that modern races evolved in isolation from each other, or that racial differences justify racism. Instead, the theory holds that gene flow linked different populations together. These links allowed progressively more modern features, no matter where they arose, to spread from region to region and eventually become universal among humans.

Scientists have weighed the out of Africa and multi-regional hypotheses against both genetic and fossil evidence. The results do not unanimously support either one, but weigh more heavily in favour of the out of Africa hypothesis.

Geneticists have studied the amount of difference in the DNA (deoxyribonucleic acid) of different populations of humans. DNA is the molecule that contains our heritable genetic code. Differences in human DNA result from mutations in DNA structure. Some mutations result from exposure to external agents such as solar radiation or certain chemical compounds, while others occur naturally at random.

Geneticists have calculated rates at which mutations can be expected to occur over time. Dividing the total number of genetic differences between two populations by an expected rate of mutation provides an estimate of the time when the two populations last shared a common ancestor. Many estimates of evolutionary ancestry rely on studies of the DNA in cell structures called mitochondria. This DNA is referred to as mtDNA (mitochondrial DNA). Unlike DNA from the nucleus of a cell, which codes for most of the traits an organism inherits from both parents, mtDNA passes only from a mother to her offspring. MtDNA also accumulates mutations about ten times faster than does DNA in the cell nucleus (the location of most DNA). The structure of mtDNA changes so quickly that scientists can easily measure the differences between one human population and another. Two closely related populations should have only minor differences in their mtDNA. Conversely, two very distantly related populations should have large differences in their mtDNA.
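
The logic of this molecular-clock calculation can be illustrated with a small worked example in Python. The numbers below are placeholders chosen only to show the division, not measured values from any study:

    # Illustrative molecular-clock calculation (hypothetical numbers).
    # Divergence time = observed genetic differences / expected mutation rate.
    observed_differences = 40       # hypothetical count of mtDNA differences
    mutations_per_year = 0.0002     # hypothetical expected rate (differences/year)

    divergence_years = observed_differences / mutations_per_year
    print(f"Estimated time since common ancestor: {divergence_years:,.0f} years")
    # With these placeholder inputs, the estimate is 200,000 years.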

MtDNA research into modern human origins has produced two major findings. First, the entire amount of variation in mtDNA across human populations is small in comparison with that of other animal species. This suggests that all human mtDNA originated from a single ancestral lineage (specifically, a single female) and has been accumulating mutations ever since. Most estimates of the mutation rate of mtDNA suggest that this female ancestor lived about 200,000 years ago. Second, the mtDNA of African populations varies more than that of peoples on other continents. This suggests that the mtDNA of African populations has been diversifying for a longer time than has that of populations in any other region. Because all living people apparently inherited their mtDNA from one woman in Africa, sometimes called the Mitochondrial Eve, geneticists and anthropologists have concluded that modern humans originated in a small population in Africa and spread out from there.

MtDNA studies have weaknesses, however, including the following four. First, the estimated rate of mtDNA mutation varies from study to study, and some estimates put the date of origin closer to 850,000 years ago, the time of Homo erectus. Second, mtDNA makes up only a small part of the total genetic material that humans inherit. The rest of our genetic material (about 400,000 times more than the amount of mtDNA) came from many individuals living at the time of the African Eve, conceivably from many different regions. Third, the time at which modern mtDNA began to diversify does not necessarily coincide with the origin of modern human biological traits and cultural abilities. Fourth, the smaller amount of modern mtDNA diversity outside of Africa could result from times when European and Asian populations declined in numbers, perhaps due to climate changes.

Regardless of these criticisms, many geneticists continue to favour the out of Africa hypothesis of modern human origins. Studies of nuclear DNA also suggest an African origin for a variety of genes. Furthermore, in a remarkable series of studies in the late 1990s, scientists recovered mtDNA from the first Neanderthal fossil found in Germany and from two other Neanderthal fossils. In each case, the mtDNA does not closely match that of modern humans. This finding suggests that at least some Neanderthal populations had diverged from the line leading to modern humans by 500,000 to 600,000 years ago, and it may also suggest that Neanderthals represent a separate species from modern H. sapiens. In another study, however, mtDNA extracted from a 62,000-year-old Australian H. sapiens fossil was found to differ significantly from modern human mtDNA, suggesting a much wider range of mtDNA variation within H. sapiens than was previously believed. According to the Australian researchers, this finding lends support to the multi-regional hypothesis because it shows that different populations of H. sapiens, possibly including Neanderthals, could have evolved independently in different parts of the world.

As with genetic research, fossil evidence also does not entirely support or refute either of the competing hypotheses of modern human origins. However, many scientists see the balance of evidence favouring an African origin of modern H. sapiens within the past 200,000 years. The oldest known modern-looking skulls come from Africa and date from perhaps 130,000 years ago. The next oldest come from the Near East, where they date from about 90,000 years ago. Fossils of modern humans in Europe do not exist from before about 40,000 years ago. In addition, the first modern humans in Europe, often referred to as Cro-Magnon people, had elongated lower leg bones, as did African populations adapted to warm, tropical climates. This suggests that populations from warmer regions replaced those in colder European regions, such as the Neanderthals.

Fossils also show that populations of modern humans lived at the same time and in the same regions as did populations of Neanderthals and Homo erectus, but that each retained its distinctive physical features. The different groups overlapped in the Near East and Southeast Asia for between about 30,000 and 50,000 years. The maintenance of physical differences for this amount of time implies that archaic and modern humans either could not interbreed or, for the most part, did not. To some scientists, this also means that the Neanderthals belong to a separate species, H. neanderthalensis, and that migratory populations of modern humans entirely replaced archaic humans in both Europe and eastern Asia.

On the other hand, fossils of archaic and modern humans in some regions show continuity in certain physical characteristics. These similarities may indicate multi-regional evolution. For example, both archaic and modern skulls of eastern Asia have flatter cheek and nasal areas than do skulls from other regions. By contrast, the same parts of the face project forward in the skulls of both archaic and modern humans of Europe. Assuming that these traits were influenced primarily by genetic inheritance rather than environmental factors, archaic humans may have given rise to modern humans in some regions or at least interbred with migrant modern-looking humans.

Each of the competing major hypotheses of modern human origins has its strengths and weaknesses. Genetic evidence appears to support the out of Africa hypothesis. In the western half of Eurasia and in Africa, this hypothesis also seems the better explanation, particularly in regard to the apparent replacement of Neanderthals by modern populations. At the same time, the multi-regional hypothesis appears to explain some of the regional continuity found in East Asian populations.

Therefore, many paleoanthropologists advocate a theory of modern human origins that combines elements of the out of Africa and the multi-regional hypotheses. Humans with modern features may have first emerged in Africa, or may have coalesced there as a result of gene flow with populations from other regions. These African populations may then have replaced archaic humans in certain regions, such as western Europe and the Near East. Elsewhere, however, especially in East Asia, gene flow may have occurred among local populations of archaic and modern humans, resulting in distinct and enduring regional characteristics.

All three of these views (the two competing positions and the compromise) acknowledge the strong biological unity of all people. In the multi-regional hypothesis, this unity results from hundreds of thousands of years of continued gene flow among all human populations. According to the out of Africa hypothesis, on the other hand, similarities among all living human populations result from a recent common origin. The compromise position accepts both of these as reasonable and compatible explanations of modern human origins.

The story of human evolution is as much about the development of cultural behaviour as it is about changes in physical appearance. The term culture, in anthropology, traditionally refers to all human creations and activities governed by social customs and rules. It includes elements such as technology, language, and art. Human cultural behaviour depends on the social transfer of information from one generation to the next, which in turn depends on a sophisticated system of communication, such as language.

The term culture has often been used to distinguish the behaviour of humans from that of other animals. However, some nonhuman animals also appear to have forms of learned cultural behaviour. For instance, different groups of chimpanzees use different stick techniques to capture termites for food. Also, in some regions chimps use stones or pieces of wood for cracking open nuts. Chimps in other regions do not practice this behaviour, although their forests have similar nut trees and materials for making tools. These regional differences resemble traditions that people pass from generation to generation. Traditions are a fundamental aspect of culture, and paleoanthropologists assume that the earliest humans also had some types of traditions.

Nonetheless, modern humans differ from other animals, and probably from many earlier human species, in that they actively teach each other and can pass on and accumulate unusually large amounts of knowledge. People also have a uniquely long period of learning before adulthood, and the physical and mental capacity for language. Language in all its forms (spoken, signed, and written) provides a medium for communicating vast amounts of information, much more than any other animal appears able to transmit through gestures and vocalizations.

Scientists have traced the evolution of human cultural behaviour through the study of archaeological artifacts, such as tools, and related evidence, such as the charred remains of cooked food. Artifacts show that throughout much of human evolution, culture has developed slowly. During the Palaeolithic, or early Stone Age, basic techniques for making stone tools changed very little for periods of well more than a million years.

Human fossils also provide information about how culture has evolved and what effects it has had on human life. For example, over the past 30,000 years, the basic anatomy of humans has undergone only one prominent change: the bones of the average human skeleton have become much smaller and thinner. Innovations in the making and use of tools and in obtaining food, themselves results of cultural evolution, may have led to more efficient and less physically taxing lifestyles, and thus caused changes in the skeleton.

Paleoanthropologists and archaeologists have studied many topics in the evolution of human cultural behaviour. These have included the evolution of (1) social life; (2) subsistence (the acquisition and production of food); (3) the making and using of tools; (4) environmental adaptation; (5) symbolic thought and its expression through language, art, and religion; and (6) the development of agriculture and the rise of civilizations.

One of the first physical changes in the evolution of humans from apes, a decrease in the size of male canine teeth, also indicates a change in social relations. Male apes sometimes use their large canines to threaten (or sometimes fight with) other males of their species, usually over access to females, territory, or food. The evolution of small canines in Australopiths implies that males had either developed other methods of threatening each other or become more cooperative. In addition, both male and female Australopiths had small canines, indicating a reduction of sexual dimorphism from that in apes. Yet, although sexual dimorphism in canine size decreased in Australopiths, males were still much larger than females. Thus, male Australopiths might have competed aggressively with each other based on sheer size and strength, and the social life of humans may not have differed much from that of apes until later times.

Scientists believe that several of the most important changes from apelike to characteristically human social life occurred in species of the genus Homo, whose members show even less sexual dimorphism. These changes, which may have occurred at different times, included (1) prolonged maturation of infants, including an extended period during which they required intensive care from their parents; (2) special bonds of sharing and exclusive mating between particular males and females, called pair-bonding; and (3) the focus of social activity at a home base, a safe refuge in a special location known to family or group members.

Humans, who have large brains, have a prolonged period of infant development and childhood because the brain takes a long time to mature. Since the Australopith brain was not much larger than that of a chimp, some scientists think that the earliest humans had a more apelike rate of growth, which is far more rapid than that of modern humans. This view is supported by studies of Australopith fossils that examine tooth development, a good indicator of overall body development.

In addition, the human brain becomes very large as it develops, so a woman must give birth to a baby at an early stage of development in order for the infant's head to fit through her birth canal. Thus, human babies require a long period of care to reach a stage of development at which they depend less on their parents. In contrast with a modern female, a female Australopith could give birth to a baby at an advanced stage of development because its brain would not be too large to pass through the birth canal. The need to give birth early, and therefore to provide more infant care, may have evolved around the time of the middle Homo species Homo ergaster. This species had a brain significantly larger than that of the Australopiths, but a narrow birth canal.

Pair-bonding, usually of a short duration, occurs in a variety of primate species. Some scientists speculate that prolonged bonds developed in humans along with increased sharing of food. Among primates, humans have a distinct type of food-sharing behaviour. People will delay eating food until they have returned with it to the location of other members of their social group. This type of food sharing may have arisen at the same time as the need for intensive infant care, probably by the time of H. ergaster. By devoting himself to a particular female and sharing food with her, a male could increase the chances of survival for his own offspring.

Humans have lived as foragers for millions of years. Foragers obtain food when and where it is available over a broad territory. Modern-day foragers (also known as hunter-gatherers), such as the San people of the Kalahari Desert in southern Africa, set up central campsites, or home bases, and divide work duties between men and women. Women gather readily available plant and animal foods, while men take on the often less successful task of hunting. For most of the time since the ancestors of modern humans diverged from the ancestors of the living great apes, around seven million years ago, all humans on Earth fed themselves exclusively by hunting wild animals and gathering wild plants, as the Blackfeet still did in the 19th century. It was only within the last 11,000 years that some peoples turned to what is termed food production: that is, domesticating wild animals and plants and eating the resulting livestock and crops. Today, most people on Earth consume food that they produced themselves or that someone else produced for them. At current rates of change, within the next decade the few remaining bands of hunter-gatherers will abandon their ways, disintegrate, or die out, thereby ending our millions of years of commitment to the hunter-gatherer lifestyle. Those few peoples who remained hunter-gatherers into the 20th century escaped replacement by food producers because they were confined to areas not fit for food production, especially deserts and Arctic regions. Within the present decade, even they will have been seduced by the attractions of civilization, settled down under pressure from bureaucrats or missionaries, or succumbed to germs.

Among foragers, female and male family members and relatives bring their food together to share at their home base. The modern form of the home base, which also serves as a haven for raising children and caring for the sick and elderly, may have first developed with middle Homo after about 1.7 million years ago. However, the first evidence of hearths and shelters, common to all modern home bases, comes from only after 500,000 years ago. Thus, a modern form of social life may not have developed until late in human evolution.

Human subsistence refers to the types of food humans eat, the technology used in and methods of obtaining or producing food, and the ways in which social groups or societies organize themselves for getting, making, and distributing food. For millions of years, humans probably fed on the go, much as other primates do. The lifestyle associated with this feeding strategy is generally organized around small, family-based social groups that take advantage of different food sources at different times of year.

The early human diet probably resembled that of closely related primate species. The great apes eat mostly plant foods. Many primates also eat easily obtained animal foods such as insects and bird eggs. Among the few primates that hunt, chimpanzees will prey on monkeys and even small gazelles. The first humans probably also had a diet based mostly on plant foods. In addition, they undoubtedly ate some animal foods and might have done some hunting. Human subsistence began to diverge from that of other primates with the production and use of the first stone tools. With this development, the meat and marrow (the inner, fat-rich tissue of bones) of large mammals became a part of the human diet. Thus, with the advent of stone tools, the diet of early humans became distinguished in an important way from that of apes.

Scientists have found broken and butchered fossil bones of antelopes, zebras, and other comparably sized animals at the oldest archaeological sites, which date from about 2.5 million years ago. With the evolution of late Homo, humans began to hunt even the largest animals on Earth, including mastodons and mammoths, members of the elephant family. Agriculture and the domestication of animals arose only in the recent past, with H. sapiens.

Paleoanthropologists have debated whether early members of the modern human genus were aggressive hunters, peaceful plant gatherers, or opportunistic scavengers. Many scientists once thought that predation and the eating of meat had strong effects on early human evolution. This hunting hypothesis suggested that early humans in Africa survived particularly arid periods by aggressively hunting animals with primitive stone or bone tools. Supporters of this hypothesis thought that hunting and competition with carnivores powerfully influenced the evolution of human social organization and behaviour; toolmaking; anatomy, such as the unique structure of the human hand; and intelligence.

Beginning in the 1960s, studies of apes cast doubt on the hunting hypothesis. Researchers discovered that chimpanzees cooperate in hunts of at least small animals, such as monkeys. Hunting did not, therefore, entirely distinguish early humans from apes, and hunting alone may not have determined the path of early human evolution. Some scientists instead argued in favour of the importance of food-sharing in early human life. According to a food-sharing hypothesis, cooperation and sharing within family groups, instead of aggressive hunting, strongly influenced the path of human evolution.

Scientists once thought that archaeological sites as much as two million years old provided evidence to support the food-sharing hypothesis. Some of the oldest archaeological sites were places where humans brought food and stone tools together. Scientists thought that these sites represented home bases, with many social features of modern hunter-gatherer campsites, including the sharing of food between pair-bonded males and females.

A critique of the food-sharing hypothesis resulted from more careful study of animal bones from the early archaeological sites. Microscopic analysis of these bones revealed the marks of human tools and carnivore teeth, showing that both humans and potential predators, such as hyenas, cats, and jackals, were active at these sites. This evidence suggested that what scientists had thought were home bases where early humans shared food were in fact food-processing sites that humans abandoned to predators. Thus, the evidence did not clearly support the idea of food-sharing among early humans.

The new research also suggested a different view of early human subsistence: that early humans scavenged meat and bone marrow from dead animals and did little hunting. According to this scavenging hypothesis, early humans opportunistically took parts of animal carcasses left by predators, and then used stone tools to remove marrow from the bones.

Observations that many animals, such as antelope, often die off in the dry season make the scavenging hypothesis quite plausible. Early toolmakers would have had plenty of opportunity to scavenge animal fat and meat during dry times of the year. However, other archaeological studies, and a better appreciation of the importance of hunting among chimpanzees, suggest that the scavenging hypothesis is too narrow. Many scientists now believe that early humans both scavenged and hunted. Evidence of carnivore tooth marks on bones cut by early human toolmakers suggests that the humans scavenged at least the larger of the animals they ate. They also ate a variety of plant foods. Some disagreement remains, however, as to how much early humans relied on hunting, especially the hunting of smaller animals.

Scientists debate when humans first began hunting on a regular basis. For instance, elephant fossils found with tools made by middle Homo once led researchers to the idea that members of this species were hunters of big game. However, the simple association of animal bones and tools at the same site does not necessarily mean that early humans had killed the animals or eaten their meat. Animals may die in many ways, and natural forces can accidentally place fossils next to tools. Recent excavations at Olorgesailie, Kenya, show that H. erectus cut meat from elephant carcasses but do not reveal whether these humans were regular or specialized hunters.

Humans who lived outside of Africa, especially in colder temperate climates, almost certainly had to eat more meat than their African counterparts. Humans in temperate Eurasia would have had to learn which plants they could safely eat, and the number of available plant foods would drop significantly during the winter. Still, although scientists have found very few fossils of edible or eaten plants at early human sites, early inhabitants of Europe and Asia probably did eat plant foods in addition to meat.

Sites that provide the clearest evidence of early hunting include Boxgrove, England, where about 500,000 years ago people trapped a great number of large game animals between a watering hole and the side of a cliff and then slaughtered them. At Schöningen, Germany, a site about 400,000 years old, scientists have found wooden spears with sharp ends that were well designed for throwing and probably used in hunting large animals.

Neanderthals and other archaic humans seem to have eaten whatever animals were available at a particular time and place. So, for example, in European Neanderthal sites, the number of bones of reindeer (a cold-weather animal) and red deer (a warm-weather animal) changed depending on what the climate had been like. Neanderthals probably also combined hunting and scavenging to obtain animal protein and fat.

For at least the past 100,000 years, various human groups have eaten foods from the ocean or coast, such as shellfish and some sea mammals and birds. Others began fishing in interior rivers and lakes. Probably between 90,000 and 80,000 years ago, people in Katanda, in what is now the Democratic Republic of the Congo, caught large catfish using a set of barbed bone points, the oldest known specialized fishing implements. The oldest stone tips for arrows or spears date from about 50,000 to 40,000 years ago. These technological advances, probably first developed by early modern humans, indicate an expansion in the kinds of foods humans could obtain.

Beginning 40,000 years ago humans began making even more significant advances in hunting dangerous animals and large herds, and in exploiting ocean resources. People cooperated in large hunting expeditions in which they killed great numbers of reindeer, bison, horses, and other animals of the expansive grasslands that existed at that time. In some regions, people became specialists in hunting certain kinds of animals. The familiarity these people had with the animals they hunted appears in sketches and paintings on cave walls, dating from as much as 32,000 years ago. Hunters also used the bones, ivory, and antlers of their prey to create art and beautiful tools. In some areas, such as the central plains of North America that once teemed with a now-extinct type of large bison (Bison occidentalis), hunting may have contributed to the extinction of entire species.

The making and use of tools alone probably did not distinguish early humans from their ape predecessors. Instead, humans made the important breakthrough of using one tool to make another. Specifically, they developed the technique of precisely hitting one stone against another, known as knapping. Stone toolmaking characterized the period sometimes referred to as the Stone Age, which began at least 2.5 million years ago in Africa and lasted until the development of metal tools within the last 7,000 years (at different times in different parts of the world). Although early humans may have made stone tools before 2.5 million years ago, toolmakers may not have remained long enough in one spot to leave clusters of tools that an archaeologist would notice today.

The earliest simple form of stone toolmaking involved breaking and shaping an angular rock by hitting it with a palm-sized round rock known as a hammerstone. Scientists refer to tools made in this way as Oldowan, after Olduvai Gorge in Tanzania, a site from which many such tools have come. The Oldowan tradition lasted for about one million years. Oldowan tools include large stones with a chopping edge, and small, sharp flakes that could be used to scrape and slice. Sometimes Oldowan toolmakers used anvil stones (flat rocks found or placed on the ground) on which hard fruits or nuts could be broken open. Chimpanzees are known to do this today.

Scientists once thought that Oldowan toolmakers intentionally produced several different types of tools. It now appears that differences in the shapes of larger tools were byproducts of detaching flakes from a variety of natural rock shapes. Learning the skill of Oldowan toolmaking required observation, but not necessarily instruction or language. Oldowan tools were thus simple, and their makers used them for such purposes as cutting up animal carcasses, breaking bones to obtain marrow, cleaning hides, and sharpening sticks for digging up edible roots and tubers.

Oldowan toolmakers sought out the best stones for making tools and carried them to food-processing sites. At these sites, the toolmakers would butcher carcasses and eat the meat and marrow, thus avoiding any predators that might return to a kill. This behaviour of bringing food and tools together contrasts with the eat-as-you-go strategy of feeding commonly seen in other primates.

The Acheulean toolmaking tradition, which began sometime between 1.7 million and 1.5 million years ago, produced increasingly symmetrical tools, most of which scientists refer to as hand axes and cleavers. Acheulean toolmakers, such as Homo erectus, also worked with much larger pieces of stone than did Oldowan toolmakers. The symmetry and size of later Acheulean tools show increased planning and design (and thus probably increased intelligence) on the part of the toolmakers. The Acheulean tradition continued for more than 1.35 million years.

The next significant advances in stone toolmaking were made by at least 200,000 years ago. One of these methods, known as the prepared core technique (called Levallois in Europe), involved carefully knocking off small flakes around one surface of a stone and then striking the stone from the side to produce a preformed tool blank, which could then be worked further. Within the past 40,000 years, modern humans developed the most advanced stone toolmaking techniques. The so-called prismatic-blade core technique involved removing the top of a stone to leave a flat platform, and then breaking off multiple blades down the sides of the stone. Each blade had a triangular cross-section, giving it excellent strength. Using these blades as blanks, people made exquisitely shaped spearheads, knives, and numerous other kinds of tools. The most advanced stone tools also exhibit distinct and consistent regional differences in style, indicating a high degree of cultural diversity.

Early humans experienced dramatic shifts in their environments over time. Fossilized plant pollen and animal bones, along with the chemistry of soils and sediments, reveal much about the environmental conditions to which humans had to adapt.

By eight million years ago, the continents of the world, which move over very long periods, had come to the positions they now occupy. However, the crust of the Earth has continued to move since that time. These movements have dramatically altered landscapes around the world. Important geological changes that affected the course of human evolution include those in southern Asia that formed the Himalayan mountain chain and the Tibetan Plateau, and those in eastern Africa that formed the Great Rift Valley. The formation of major mountain ranges and valleys led to changes in wind and rainfall patterns. In many areas dry seasons became more pronounced, and in Africa conditions became generally cooler and drier.

By five million years ago, the amount of fluctuation in global climate had increased. Temperature fluctuations became quite pronounced during the Pliocene Epoch (five million to 1.6 million years ago). During this time the world entered a period of intense cooling called an ice age, which began about 2.8 million years ago. Ice ages cycle through colder phases known as glacials (times when glaciers form) and warmer phases known as interglacials (during which glaciers melt). During the Pliocene, glacial and interglacial phases each lasted about 40,000 years. The Pleistocene Epoch (1.6 million to 10,000 years ago), in contrast, had much larger and longer ice age fluctuations. For instance, beginning about 700,000 years ago, these fluctuations repeated roughly every 100,000 years.

Between five million and two million years ago, a mixture of forests, woodlands, and grassy habitats covered most of Africa. Eastern Africa entered a significant drying period around 1.7 million years ago, and after one million years ago large parts of the African landscape turned to grassland. Thus the early Australopiths and early Homo lived in wooded places, whereas Homo ergaster and H. erectus lived in areas of Africa that were more open. Early human populations encountered many new and different environments when they spread beyond Africa, including colder temperatures in the Near East and bamboo forests in Southeast Asia. By about 1.4 million years ago, populations had moved into the temperate zone of northeast Asia, and by 800,000 years ago they had dispersed into the temperate latitudes of Europe. Although these first excursions to latitudes of 40° north and higher may have occurred during warm climate phases, these populations also must have encountered long seasons of cold weather.

All of these changes (dramatic shifts in the landscape, changing rainfall and drying patterns, and temperature fluctuations) posed challenges to the immediate and long-term survival of early human populations. Populations in different environments evolved different adaptations, which in part explains why more than one species existed at the same time during much of human evolution.

Some early human adaptations to new climates involved changes in physical (anatomical) form. For example, a tall, lean body such as that of H. ergaster, with lots of skin exposed to cooling winds, would have dissipated heat very well. This adaptation probably helped the species to survive in the hotter, more open environments of Africa around 1.7 million years ago. Conversely, the short, wide bodies of the Neanderthals would have conserved heat, helping them to survive in the ice age climates of Europe and western Asia.

Increases in the size and complexity of the brain, however, made early humans progressively better at adapting through changes in cultural behaviour. The largest of these brain-size increases occurred over the past 700,000 years, a period during which global climates and environments fluctuated dramatically. Human cultural behaviour also evolved more quickly during this period, most likely in response to the challenges of coping with unpredictable and changeable surroundings.

Humans have always adapted to their environments by adjusting their behaviour. For instance, early Australopiths moved both in the trees and on the ground, which probably helped them survive environmental fluctuations between wooded and more open habitats. Early Homo adapted by making stone tools and transporting their food over long distances, thereby increasing the variety and quantities of different foods they could eat. An expanded and flexible diet would have helped these toolmakers survive unexpected changes in their environment and food supply.

When populations of H. erectus moved into the temperate regions of Eurasia, they faced new challenges to survival. During the colder seasons they had to either move away or seek shelter, such as in caves. Some of the earliest definitive evidence of cave dwellers dates from around 800,000 years ago at the site of Atapuerca in northern Spain. This site may have been home to early H. heidelbergensis populations. H. erectus also used caves for shelter.

Eventually, early humans learned to control fire and to use it to create warmth, cook food, and protect themselves from other animals. The oldest known fire hearths date from between 450,000 and 300,000 years ago, at sites such as Bilzingsleben, Germany; Vértesszőlős, Hungary; and Zhoukoudian (Chou-k'ou-tien), China. African sites as old as 1.6 million to 1.2 million years contain burned bones and reddened sediments, but many scientists find such evidence too ambiguous to prove that humans controlled fire. Early populations in Europe and Asia may also have worn animal hides for warmth during glacial periods. The oldest known bone needles, which indicate the development of sewing and tailored clothing, date from about 30,000 to 26,000 years ago.

Behaviour relates directly to the development of the human brain, and particularly the cerebral cortex, the part of the brain that allows abstract thought, beliefs, and expression through language. Humans communicate through the use of symbols: ways of referring to things, ideas, and feelings that convey meaning from one individual to another but that need not have any direct connection to what they identify. For instance, a word, one type of symbol, usually bears no direct relationship to the thing or idea it identifies; the connection between a word and its meaning is abstract and conventional.

People can also paint abstract pictures or play pieces of music that evoke emotions or ideas, even though emotions and ideas have no form or sound. In addition, people can conceive of and believe in supernatural beings and powers-abstract concepts that symbolize real-world events such as the creation of Earth and the universe, the weather, and the healing of the sick. Thus, symbolic thought lies at the heart of three hallmarks of modern human culture: language, art, and religion.

In language, people creatively join words together in an endless variety of sentences, each with a distinct meaning, according to a set of mental rules, or grammar. Language provides the ability to communicate complex concepts. It also allows people to exchange information about both past and future events, about objects that are not present, and about complex philosophical or technical concepts.

Language gives people many adaptive advantages, including the ability to plan, to communicate the location of food or dangers to other members of a social group, and to tell stories that unify a group, such as mythologies and histories. However, words, sentences, and languages cannot be preserved like bones or tools, so the evolution of language is one of the most difficult topics to investigate through scientific study.

It appears that modern humans have an inborn instinct for language. Under normal conditions it is almost impossible for a person not to develop language, and people everywhere go through the same stages of increasing language skill at about the same ages. While people appear to have inborn genetic information for developing language, they learn specific languages based on the cultures from which they come and the experiences they have in life.

The ability of humans to have language depends on the complex structure of the modern brain, which has many interconnected, specific areas dedicated to the development and control of language. The complexity of the brain structures necessary for language suggests that it probably took a long time to evolve. While paleoanthropologists would like to know when these important parts of the brain evolved, endocasts (inside impressions) of early human skulls do not provide enough detail to show this.

Some scientists think that even the early Australopiths had some ability to understand and use symbols. Support for this view comes from studies with chimpanzees. A few chimps and other apes have been taught to use picture symbols or American Sign Language for simple communication. Nevertheless, it appears that language, as well as art and religious ritual, became a vital aspect of human life only during the past 100,000 years, primarily within our own species.

Humans also express symbolic thought through many forms of art, including painting, sculpture, and music. The oldest known object of possible symbolic and artistic value dates from about 250,000 years ago and comes from the site of Berekhat Ram, Israel. Scientists have interpreted this object, a figure carved into a small piece of volcanic rock, as a representation of the outline of a female body. Only a few other possible art objects are known from between 200,000 and 50,000 years ago. These items, from western Europe and usually attributed to Neanderthals, include two simple pendants (a tooth and a bone with bored holes) and several grooved or polished fragments of tooth and bone.

Sites dating from at least 400,000 years ago contain fragments of red and black pigment. Humans might have used these pigments to decorate bodies or perishable items, such as wooden tools or clothing of animal hides, but such evidence would not have survived to today. Solid evidence of the sophisticated use of pigments for symbolic purposes, such as in religious rituals, comes only from after 40,000 years ago. From early in this period, researchers have found carefully made crayons used in painting and evidence that humans burned pigments to create a range of colours.

People began to create and use advanced types of symbolic objects between about 50,000 and 30,000 years ago. Much of this art appears to have been used in rituals-possibly ceremonies to ask spirit beings for a successful hunt. The archaeological record shows a tremendous blossoming of art between 30,000 and 15,000 years ago. During this period people adorned themselves with intricate jewellery of ivory, bone, and stone. They carved beautiful figurines representing animals and human forms. Many carvings, sculptures, and paintings depict stylized images of the female body. Some scientists think such female figurines represent fertility.

Early wall paintings made sophisticated use of texture and colour. The area that is now southern France contains many famous sites of such paintings. These include the caves of Chauvet, which contain art more than 30,000 years old, and Lascaux, in which paintings date from as much as 18,000 years ago. In some cases, artists painted on walls that can be reached only with special effort, such as by crawling. The act of getting to these paintings gives them a sense of mystery and ritual, as it must have to the people who originally viewed them, and archaeologists refer to some of the most extraordinary painted chambers as sanctuaries. Yet no one knows for sure what meanings these early paintings and engravings had for the people who made them.

Graves from Europe and western Asia indicate that the Neanderthals were the first humans to bury their dead. Some sites contain very shallow graves, which group or family members may have dug simply to remove corpses from sight. In other cases it appears that groups may have observed rituals of grieving for the dead or communicating with spirits. Some researchers have claimed that grave goods, such as meaty animal bones or flowers, had been placed with buried bodies, suggesting that some Neanderthal groups might have believed in an afterlife. In a large proportion of Neanderthal burials, the corpse had its legs and arms drawn in close to its chest, which could indicate a ritual burial position.

Other researchers have challenged these interpretations, however. They suggest that perhaps the Neanderthals had practical rather than religious reasons for positioning dead bodies. For instance, a body manipulated into a fetal position would need only a small hole for burial, making the job of digging a grave easier. In addition, the animal bones and flower pollen near corpses could have been deposited by accident or without religious intention.

Many scientists once thought that fossilized bones of cave bears (a now-extinct species of large bear) found in Neanderthal caves indicated that these people had what has been referred to as a cave bear cult, in which they worshipped the bears as powerful spirits. However, after careful study researchers concluded that the cave bears probably died while hibernating and that Neanderthals did not collect their bones or worship them. Considering current evidence, the case for religion among Neanderthals remains controversial.

One of the most important developments in human cultural behaviour occurred when people began to domesticate (control the breeding of) plants and animals. The advent of agriculture led to the development of dozens of staple crops (foods that form the basis of an entire diet) in temperate and tropical regions around the world. Almost the entire population of the world today depends on just four of these major crops: wheat, rice, corn, and potatoes.

The growth of farming and animal herding initiated one of the most remarkable changes ever in the relationship between humans and the natural environment. The change first began just 10,000 years ago in the Near East and has accelerated very rapidly since then. It also occurred independently in other places, including areas of Mexico, China, and South America. Since the first domestication of plants and animals, many species over large areas of the planet have come under human control. The overall number of plant and animal species has decreased, while the populations of the few species needed to support large human populations have grown immensely. In areas dominated by people, interactions between plants and animals usually fall under the control of a single species: Homo sapiens.

Civilizations, the large and complex types of societies in which most people still live today, developed along with surplus food production. People of high status eventually used food surpluses as a way to pay for labour and to create alliances among groups, often against other groups. In this way, large villages could grow into city-states (urban centres that governed the surrounding territory) and eventually empires covering vast territories. With surplus food production, many people could work exclusively in political, religious, or military positions, or in artistic and various skilled vocations. Command of food surpluses also enabled rulers to control labourers, as in slavery. All civilizations developed on the basis of such hierarchical divisions of status and vocation.

The earliest civilization arose more than 7,000 years ago in Sumer, in what is now Iraq. Sumer grew powerful and prosperous by 5,000 years ago, when it centred on the city-state of Ur. The region containing Sumer, known as Mesopotamia, was the same area in which people had first domesticated animals and plants. Other centres of early civilization include the Nile Valley of Northeast Africa, the Indus Valley of South Asia, the Yellow River Valley of East Asia, the Oaxaca and Mexico valleys and the Yucatán region of Central America, and the Andean region of South America.

All early civilizations had some common features. These included a bureaucratic political body, a military, a body of religious leadership, large urban centres, monumental buildings and other works of architecture, networks of trade, and food surpluses created through extensive systems of farming. Many early civilizations also had systems of writing, numbers and mathematics, and astronomy (with calendars); road systems; a formalized body of law; and facilities for education and the punishment of crimes. With the rise of civilizations, human evolution entered a phase vastly different from all that came before. Before this time, humans had lived in small, family-centred groups essentially exposed to and controlled by forces of nature. Several thousand years after the rise of the first civilizations, most people now live in societies of millions of unrelated people, all separated from the natural environment by houses, buildings, automobiles, and numerous other inventions and technologies. Culture will continue to evolve quickly and in unforeseen directions, and these changes will, in turn, influence the physical evolution of Homo sapiens and any other human species to come.

Evolutionary ethics is the attempt to base ethical reasoning on presumed facts about evolution. The movement is particularly associated with Herbert Spencer. Given the premise that later elements in an evolutionary path are better than earlier ones, applying the principle requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the premise nor the application commands much respect. The version of evolutionary ethics called 'social Darwinism' emphasized the struggle for survival implicit in natural selection, and drew the conclusion that we should glorify and assist such struggles, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently, theorists have rethought the relations between evolution and ethics in the light of biological discoveries concerning altruism and kin selection.

Sociobiology is the academic discipline best known through the work of Edward O. Wilson, who coined the term in his Sociobiology: The New Synthesis (1975). The approach to human behaviour is based on the premise that all social behaviour has a biological basis, and it seeks to understand that basis in terms of genetic encoding for features that are themselves selected for through evolutionary history. The philosophical problem is essentially one of methodology: finding criteria for identifying features that can usefully be explained in this way, and criteria for assessing the various genetic stories that might provide such explanations. Among the features proposed for this kind of explanation are male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. Practitioners have been accused of ignoring the influence of environmental and social factors in moulding people's characteristics, e.g., at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to the environment: for instance, it may be a propensity to develop some feature in some social environment, or even a propensity to develop propensities. The main problem is to separate genuine explanation from speculative 'just so' stories, which may or may not identify real selective mechanisms.

On a popular view, scientists are unbiased observers who use the scientific method to conclusively confirm and falsify theories: they have no preconceptions in gathering data, they logically derive theories from objective observations, and one great strength of science is that it is self-correcting, because scientists readily abandon theories once they have been shown to be untenable. Although many people accept this eminent view of science, it is almost completely untrue. Data can neither conclusively confirm nor conclusively falsify theories, there really is no such thing as the scientific method, data gathering is subjective in practice, and scientists have displayed a surprisingly fierce loyalty to their theories. There are many misconceptions about what science is and what it is not.

Science is the systematic study of anything that can be examined, tested, and verified by others. The word science is derived from the Latin word scire, meaning 'to know.' From its beginnings, science has developed into one of the greatest and most influential fields of human endeavour. Today different branches of science investigate almost everything that can be observed or detected, and science as a whole shapes the way we understand the universe, our planet, ourselves, and other living things.

Science develops through objective analysis, rather than through personal belief. Knowledge gained in science accumulates as time goes by, building on the work that has gone before. Some of this knowledge, such as our understanding of numbers, stretches back to the time of ancient civilizations, when scientific thought first began. Other scientific knowledge, such as our understanding of genes that cause cancer or of quarks (the smallest known building blocks of matter), dates back less than fifty years. However, in all fields of science, old or new, researchers use the same systematic approach, known as the scientific method, to add to what is known.

During scientific investigations, scientists put together and compare new discoveries and existing knowledge. Commonly, new discoveries extend what is currently accepted, providing further evidence that existing ideas are correct. For example, in 1676 the English physicist Robert Hooke discovered that elastic objects, such as metal springs, stretch in proportion to the force that acts on them. Despite all the advances made in physics since 1676, this simple law still holds true.
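Hooke's proportionality can be written compactly. The following is a minimal statement using the conventional modern symbols, which are not taken from the source: if F is the applied force, x the resulting extension, and k a stiffness constant characteristic of the particular spring, then

\[ F = k\,x \]

For a hypothetical spring with k = 200 newtons per metre, a 10-newton pull stretches it 0.05 metres and a 20-newton pull stretches it 0.10 metres: doubling the force doubles the stretch.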

Scientists use existing knowledge in new scientific investigations to predict how things will behave. For example, a scientist who knows the exact dimensions of a lens can predict how the lens will focus a beam of light. In the same way, by knowing the exact makeup and properties of two chemicals, a researcher can predict what will happen when they combine. Sometimes scientific predictions go much further, describing objects or events that are not yet known. An outstanding instance occurred in 1869, when the Russian chemist Dmitry Mendeleyev drew up a periodic table of the elements arranged to illustrate patterns of recurring chemical and physical properties. Mendeleyev used this table to predict the existence and describe the properties of several elements unknown in his day, and when those elements were discovered over the following years, his predictions proved correct.
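To give a concrete sense of the lens prediction mentioned above, here is a sketch using the standard thin-lens equation; the formula and the numbers are illustrative and do not come from the source. For a thin lens of focal length f, an object at distance d_o from the lens forms an image at the distance d_i satisfying

\[ \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i} \]

So a lens with f = 0.1 m and an object 0.3 m away gives 1/d_i = 10 − 3.33 per metre, and the beam comes to focus about 0.15 m behind the lens.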
