Logical Positivism
Introduction
Twentieth-century logical positivism has its roots in figures like Comte, Mill and Mach. The Vienna Circle of logical positivists, which developed in the 1910s and 1920s, shaped the continental development of this movement. The English development was something of a spinoff from the Vienna Circle, popularized by A.J. Ayer in his “Language, Truth and Logic”. Within the Vienna Circle you have people like Moritz Schlick and Rudolf Carnap. The main significance of the Vienna Circle is its development of the movement from a rather naive kind of empiricism into one which recognized that if we distinguish between sense data and material objects, we tend toward a phenomenalist epistemology, and one which recognized that we cannot always have direct empirical verification of an apparently empirical statement. Sometimes the verification has to be indirect, through the logical implications of that statement in conjunction with other assertions.
For both the Vienna Circle and A.J. Ayer, the basis was the verifiability theory of meaning. It is not a theory of how you ascertain truth; it has to do with the meaning of language. Language has basically two uses, cognitive and non-cognitive. There are all sorts of non-cognitive uses: emotive expression, questions, and so on. Cognitive statements, on the other hand, are either synthetic or analytic. Synthetic statements are factual, the kind one would expect to be amenable to empirical verification. Analytic statements, in which the predicate is logically contained within the subject, have simply formal meaning, in the sense that they are only talking about the logical use of the subject and predicate. Of the latter sort you have definitions, tautologies, and likely mathematical statements. The verifiability theory is a theory of the meaning of factual statements: the meaning of a statement is the method of its verification. The meaning of an empirical statement lies in its reference to empirical data, whether actually available or possible empirical data. What the verifiability theory disallows is the kind of statement that is not available to empirical verification, namely metaphysical statements about a reality in itself, distinct from all appearances. The metaphysic it eliminates is the one which draws a distinction between the thing in itself and the thing for me, the underlying reality and the world of appearances. There is a third distinction, between strong and weak verification. Strong verification would be conclusive; weak verification would be satisfied with probability. Ayer is perfectly happy to define a verifiability principle which admits indirect verification, verification in principle rather than in practice, and weak rather than strong verification.
Criticisms of Logical Positivism
Some of the distinctions Ayer introduces are responses to criticisms, and eventually it was criticism of the verifiability principle that led to the demise of logical positivism.
One of the first criticisms was that empirical generalizations are not verifiable even in principle. Any statement about all members of a class would be, by the verifiability principle, without factual meaning. The response was to appeal instead to a falsifiability principle: an empirical generalization is always falsifiable in principle. What this means is that you simply want a proposition to be amenable either to verification procedures or to falsification procedures. You might ask, why not insist simply on falsifiability? The reason is that while an empirical generalization is not verifiable but is falsifiable, a singular assertion about a particular case is verifiable but not always falsifiable.
The second line of criticism has to do with the status of the verifiability criterion itself. The positivists tell us that all statements are either synthetic or analytic, factual or formal. So what of the statement of the verifiability principle itself: is it a factual statement or a formal statement? It is evident that the verifiability theory is not an empirical statement amenable to verification or falsification by empirical procedures. So either it is a factually false statement, or it is not a factual statement at all. Ayer got the point: he backs away from claiming that it is a factual statement about the meaning of factual statements and contends instead that it is a methodological stipulation. In other words, it is a rule that the positivist adopts for methodological purposes. If that is the case, and you do not want to adopt it, you do not have to. Consequently the verifiability principle loses its hold on philosophical discourse. It is hardly a definition; it is more of an arbitrary stipulation, and just because it is supposedly common to the empirical sciences does not mean that it is applicable to all factual statements.
But that led to a third line of criticism. The verifiability principle was developed on the assumption that it was the operative principle in the empirical sciences, but developments in the philosophy of science made it plain that the sciences are not purely empirical, so it is not even a principle applicable to the empirical sciences. There were developments that began to recognize subjectivity in natural science, developments that began to feel the influence of Kant’s a-priori grids and his Copernican revolution as applied to the natural sciences, developments that began to reject the over-simplicity of the hypothetico-deductive method. The first is the work of Norwood Russell Hanson, in a book called Patterns of Discovery. His historical research at Yale led him to the conclusion that all observations are theory-laden. You do not have to have a very sophisticated appreciation of scientific method in order to see that. The scientist does not just stand around gawking at all possible data; he comes with a working hypothesis. What counts as relevant data is defined by the working hypothesis, which in turn is suggested by a theory. In other words, there are antecedent conceptual factors which determine what data you take into account. The second example is Thomas Kuhn in The Structure of Scientific Revolutions. Based on his studies of the history of science, he began to recognize that theories are part of a much larger conceptual paradigm, and that scientific revolutions occur when there are paradigm shifts. His point is that you may get periods of progressive, cumulative increase of scientific knowledge within a paradigm, and granted, within a paradigm there may be empirical verification of certain theories that work; but those theories are themselves suggested by the paradigm.
But when you get a paradigm shift, a different framework of explanation is involved, and the paradigm shift does not occur because of the weight of empirical evidence. It occurs because within the scientific community there develops, often for non-empirical reasons, dissatisfaction with the existing paradigm: it may lack explanatory power, it may lack coherence, it may prove to be needlessly complicated, so that we opt for a simpler one. So the hold of a purely objective empiricism on science is rejected by Thomas Kuhn. The third example is Michael Polanyi, who has two major books: The Tacit Dimension and Personal Knowledge. The Tacit Dimension claims that there are a variety of tacit aspects of human knowledge that are not explicated by empirical research. In Personal Knowledge he talks about the personal dimension in knowledge that affects motivation. That is why progress in science is unpredictable: we never know what the personal dimension may be, or for that matter the socio-economic dimensions that drive certain scientific research. More recently we have Feyerabend, who adopts a conventionalist interpretation of science. That is to say, scientific theories are simply conventional ways scientists have of talking about things, a conventionalism that is entirely relativistic; science does not tell us about reality. This is anti-realism in science. With those developments, what you began to get is the rejection of the view that all scientific explanation is purely objective empirical explanation in terms of general governing laws and empirical generalizations, and that scientific knowledge is always empirically verified or at least verifiable in principle. That just does not seem to be the case, and so the whole thesis of scientism begins to collapse. This is the post-modern turn in the philosophy of science.
There is a fourth objection, from W.V.O. Quine, the Harvard philosopher, whose famous essay “Two Dogmas of Empiricism” was a landmark in the demise of logical positivism. One of the dogmas is reductionism, the attempt to reduce all knowledge to empirical generalizations. The verifiability principle is reductionist in that sense: it tries to reduce all factual statements to empirically verifiable statements. Quine rejects that because, in his view, observations are theory-laden and not purely objective. The second dogma of empiricism is the analytic-synthetic dichotomy, and plainly the verifiability principle hinges on the view that some statements are synthetic and others analytic. What Quine does is argue that the dichotomy breaks down, that it is a matter of degree depending on the context. He sees human knowledge not as a collection of isolable propositions which we interconnect within a deductive system; knowledge is rather a web of belief. The difference is that a deductive system moves with military precision from one proposition to another, whereas the web of belief is a web of mutually supporting propositions woven together in various ways that are not strictly formulatable in a deductive system. That is to say, the body of knowledge we have is characterized by coherence, in the sense that it is self-consistent and internally self-supporting. But it is a fallibilist view, inasmuch as, because of the paradigmatic nature of thought, we may be working with a somewhat mistaken paradigm; the overall pattern of interrelationships may be somewhat different from what we think. In addition to the fallibilism and the coherence, which provide some justification, he offers a pragmatic justification for the web of belief.
The final criticism came from Wittgenstein himself, who in his earlier work, the Tractatus, had essentially been a Russell-type logical atomist and apparently sympathetic to verifiability. His second major work, the Philosophical Investigations, largely completed by 1945 and published posthumously in 1953, criticizes the positivism of his previous work in various ways. One criticism is that the picture theory of meaning, like the verifiability theory, is devoid of any clear meaning. He adds to it the complaint that the insistence on an ideal logical language is too artificial: language does not fit into that sort of narrow reductionist mold. In contrast, when you look at ordinary language usage, you find it is much more varied. So instead he talks of a multiplicity of language games, reflecting a diversity of forms of life. The analysis we want, then, is a functional analysis rather than a logical analysis: not an analysis of the logic of language, imposing narrow positivist grids, but an analysis of the actual functions that language serves in ordinary discourse.