A FUNDAMENTAL FLAW IN METHODOLOGY
The methodological paradigm I will be assuming is based on the meta-theory of Karl Popper, the leading intellectual in this field.
When conducting an inquiry of a scientific nature, the first thing we need to do is postulate a testable hypothesis. For a hypothesis to be testable, it has to be a strictly universal statement, meaning that a statement like Some sheep are white is meaningless from a scientific point of view. An existential statement of this kind is not useful to us because it does not make any predictions, and therefore does not lead us to have any expectations. Furthermore, for a statement to have viable scientific standing, it needs to have potential falsifiers, which are identifiable beforehand. This should not be a problem, despite the fact that there is indeed something unpalatable about stating in advance what would show your hypothesis to be wrong; in fact, Popper encourages us to make bold and risky conjectures. That is why a strictly universal statement is more useful to us.
Consider a proposition like All sheep are white. This statement is testable because it predicts that if we encounter a sheep anywhere in the world, it is going to be a white one. So it tells us something about the way the world is. It also tells us something about the way the world should not be: it should not contain any non-white sheep. In the event that a non-white sheep is encountered, one of two things would have to happen: either we would have to abandon our belief that all sheep are white (i.e. admit that we were wrong), or we would have to postulate an auxiliary hypothesis to explain the contradictory data. This auxiliary hypothesis would obviously need to have the same status as any other hypothesis in terms of being a universal statement, and so on.
Now, if we really cared about what colours sheep are, we could try and construct our auxiliary hypothesis along geographical lines (All sheep from area X are black), and so on and so forth.
Another important point worth noting is that it is scientifically vacuous to amass any amount of confirming evidence and use it as “proof” that your theory or hypothesis is correct. Whether I have seen ten sheep, a thousand sheep, or even ten thousand sheep, I am not entitled to conclude categorically that all sheep are white. The reason is obvious: finding any number of white sheep does not preclude the existence of at least one non-white sheep somewhere in the world. In fact, from a logical point of view, no amount of confirming evidence would suffice to show that a particular hypothesis is true, or even that it is more likely to be true. It does not even make sense to assume that one hypothesis is more probable than another because it has more confirming evidence in its favour. For example, there is no logical reason to assume that just because the 7.00am bus has been on time every day for the past three years, it is more likely than another bus (which has been running for only two weeks) to be on time tomorrow morning. As human beings we ‘feel’ that that is the way things should be, but that is not how the logic of scientific discovery works.
As researchers, then, we should not leave it to chance to stumble across a non-white sheep. We should take the onus upon ourselves to try and find non-white sheep in the world. If we encounter disconfirming evidence of this kind, we should either try to explain it by postulating an auxiliary hypothesis, which should also be subjected to the same empirical tests, or we should change our original hypothesis altogether.
It is for this reason that Albert Einstein rather candidly acknowledged that his theory diverged from the then accepted Newtonian paradigm, and that the viability of the entire theory rested “in its logical completeness”. The details of exactly what his postulates were are not relevant here, but he went on to state that if “a single one of the conclusions drawn from it proves wrong, it must be given up”, because “to modify it without destroying the whole structure would be impossible” (my italics) – referring, of course, to the entire theory, not just a particular hypothesis thereof. His point is simply that it would not be acceptable to insert arbitrary ad hoc hypotheses, or to conveniently adapt the theory in light of new evidence, as Chomsky has been doing over the years. Nobel prize-winning physicist Richard Feynman insists that “we have a responsibility to have a sense of integrity. We need to bend over backwards to show how you’re maybe wrong […]. This is our responsibility as scientists, certainly to other scientists, and I think to laymen”. Feynman illustrates this by saying that “[i]f you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish both kinds of results”. Certainly, the nativists are guilty of this kind of ‘misconduct’, if we may label it that.
When postulating hypotheses, it would seem obvious that common sense is the most viable starting point. If the evidence overwhelmingly points towards a paradigm that is at odds with common sense, then we obviously need to change our beliefs; otherwise not. Popper agrees that we should use common sense as a starting point, and as a foundation to build on, but with the proviso “that our great instrument for progress is criticism”. He adds that “science, philosophy, rational thought, must all start from common sense”, and that “all science, and all philosophy, are enlightened common sense”. Most would agree that we ought to accept the common-sense view of the world, unless there is convincing evidence to the contrary.
The burden of proof obviously lies with the party that makes the new claim; making new claims is certainly to be encouraged if scientific progress is to be made, but if the arguments for the new claim fail, or are persuasive but unsound, then one should revert to the initial hypothesis, since the new one does no better. Obviously common sense is not always a viable criterion for truth, and in many cases it is exactly the opposite that is the case (cf. quantum mechanics, for example). However, with regard to language acquisition, it seems clear that the common-sense view is the correct one.
In general, the nativists do not adopt the Popperian method of scientific inquiry, based on conjectures and refutations. For example, with regard to the so-called initial state of the Language Acquisition Device (LAD), Chomsky proposed many principles which were supposed to be present in all languages, and found many languages in which these principles held. Later it was discovered, often by “accident”, that some languages do not have these apparently universal properties (for example, not all languages have adjectives, not all phrases are headed, the island constraint makes false predictions, etc.).
My point is simply that if the nativists had started out by identifying potential falsifiers, and then looked for counter-examples, they could have built a more solid base than they have. Clearly the status quo is unacceptable. Sampson even quotes Pinker as conceding that UG is not well argued for, despite its being one of the key arguments in favour of nativism, and Pinker reiterates this sentiment in his latest book, The Stuff of Thought.
Let us consider another example of how the need to find confirming evidence precludes us from seeing even the most blatant counter-examples. Roeper and Siegel wrote an article back in 1978 based on the observation that the addition of prefixes to verbs rules out non-nominal complements, and this is ‘proven’ by giving examples that show the statement to be true. Once again, since this property is in no way a necessary property of language, we are urged to assume that it is part of our innate knowledge of language. Laurie Bauer’s 1990 article, Be-heading The Word, points out that numerous other papers were written after this one with the same thesis in mind. Later in that paper, Bauer asks us to consider what is wrong with the non-nominal complement badly in
She has miscalculated badly
Evidently, this is a well-formed sentence in just about any dialect of English, yet instances like this are not given any consideration, because linguists who have not studied the philosophy of science spend their energies not in finding counter-examples, but in imagining that finding confirming evidence is getting them somewhere. Of course, the fact of the matter is that confirming evidence does not lend any credibility to a hypothesis. Regardless, Chomsky, Pinker, and most other linguists continue to make generalisations about language and human nature based on a few instances that confirm their predictions.
Instead of taking certain doctrines as axiomatic truths, they need to look at even some of their most fundamental tenets with a critical eye, and not latch onto them with a fervour indicative of religious fanaticism. One needs to remember that there are some six thousand languages spoken in the world today, not just English.
As Mark Turner puts it, commenting on the scientific status of the Chomskyan school of thought: “The hard sciences do not lump apparently odd events into the category of what we don’t need to explain, but rather give them special attention. It is not clear that someone in the cognitive sciences who hopes her discipline will attain to the prestige of the hard sciences should behave any differently”.
In fact, many of Chomsky’s key doctrines rest on “unfalsifiable foundations”.