As a theory of knowledge, reliabilism on one formulation can be roughly stated as follows:
One knows that p (where p stands for any proposition, e.g., "the sky is blue") if, and only if, (1) one believes that p, (2) p is true, and (3) one has arrived at the belief that p through some reliable process.
As a theory of justified belief, reliabilism can be formulated roughly as follows:
One has a justified belief that p if, and only if, the belief is the result of a reliable process.
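The two schemas above can be rendered compactly in formal notation; the predicate letters K, B, R, and J below are illustrative shorthand introduced here, not notation from the reliabilist literature itself:

```latex
% S knows that p iff S believes p, p is true, and p was reliably formed
K(S, p) \;\leftrightarrow\; B(S, p) \,\land\, p \,\land\, R(S, p)

% S's belief that p is justified iff it results from a reliable process
J(S, p) \;\leftrightarrow\; R(S, p)
```

Note that the truth condition appears only in the analysis of knowledge, not of justification: on this formulation a justified belief may still be false.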
A further account of knowledge and truth is provided by reliabilism. This theory supposes that our main method of justifying our beliefs is to appeal to what has been reliable in the past. Thus, if I want to prove to someone else that I can speak Russian (and not just utter a string of made-up, Russian-sounding words), we could both go to a native Russian speaker, or to a lecturer in a university languages department, who could confirm it. I could also translate some Russian books or attempt to display my knowledge by answering questions in Russian.
These methods would be acceptable to different degrees depending on how reliable they have proven to be. For instance, 'proving' to a large audience that I can make a coin disappear is not a very reliable method of establishing that the coin really vanished (a fact that stage magicians exploit).
There are two main methods of reliabilist justification: internal and external. The external method is the more reliable of the two because it deals with what is apparent to others. So, if I wish to establish some medical fact, I can visit a doctor, who has established scientific ways and means of confirming a diagnosis. Alternatively, I can rely on my own internal sensations to inform me of my condition, though this is neither as reliable nor as open to demonstration.
Problems with Reliabilism
The internalist form of reliabilism seems to be circular. How do we know that the methods we use to establish that something is true are really reliable? What method do I use to check that the means for establishing whether the reliable method is reliable is itself reliable? And so on.
The externalist form is open to the criticism that just because a method, such as reading a thermometer, gives us a consistent response, this does not mean that the response is true. A computer with a bug in it might always give the same answer to a particular question, but that answer would not be the correct one.
Some find reliabilism objectionable because they believe it entails externalism, the view that one can have knowledge, or a justified belief, without knowing (having "access" to) the evidence or other circumstances that make the belief justified. Most reliabilists maintain that a belief can be justified, or can constitute knowledge, even if the believer does not know about or understand the process that makes it reliable. In defending this view, reliabilists (and externalists generally) are apt to point to simple acts of perception: if one sees a bird in the tree outside one's window and thereby gains the belief that there is a bird in that tree, one might not at all understand the cognitive processes that explain one's successful act of perception; nevertheless, it is the fact that those processes worked reliably that accounts for why the belief is justified. In short, one finds oneself with a belief about the bird, and that belief is justified if any is, yet one is not acquainted at all with the processes that led to the belief and made one justified in holding it. Of course, internalists do not let the debate rest there; see externalism (epistemology).
Another common objection to reliabilism, made first against Goldman's reliable-process theory of knowledge and later against other reliabilist theories, is the so-called generality problem, as follows. For any given justified belief (or instance of knowledge), one can easily identify many different (concurrently operating) "processes" from which the belief results. My belief that there is a bird in the tree outside my window might be counted as a result of the process of forming beliefs on the basis of sense-perception, of visual sense-perception, of visual sense-perception through transparent surfaces in daylight, and so forth, down to a variety of very specifically described processes. Some of these processes might be statistically reliable, while others might not. It would no doubt be better to say, in any case, that we are choosing not which process to say resulted in the belief, but how to describe that process, out of the many different levels of generality at which it can accurately be described.
Perhaps one can distinguish between a person's being justified in holding belief p and the belief p itself being justified. If so, then even when we rationally (and hence justifiably) rely on a reliable process to establish a belief that turns out to be false, we are justified in holding that belief, though the belief itself cannot be justified as true. Hence one can trivially justify all sense-data, even if they do not correlate with the external world that others perceive and that we might, in a scientific context, take to be more accurate.
This does not resolve contexts in which we demand more certain justification, since it suggests that we must apply different criteria to justify the belief itself, external to our believing it, and so we come full circle in trying to prove that the process by which we came to know it is reliable. But it at least provides a flexible granularity that we can apply to different contexts, and it makes the theory attractive in most everyday situations.
According to Dretske, reliable cognitive processes convey information, and thus endow not only humans, but (nonhuman) animals as well, with knowledge. He writes:
I wanted a characterization that would at least allow for the possibility that animals (a frog, rat, ape, or my dog) could know things without my having to suppose them capable of the more sophisticated intellectual operations involved in traditional analyses of knowledge.
Attributing knowledge to animals is certainly in accord with our ordinary practice of using the word "knowledge". Dretske seems right, therefore, when he views the result that animals have knowledge as a desideratum.
A second advantage of his theory is, so Dretske claims, that it avoids Gettier problems. He says:
Gettier difficulties . . . arise for any account of knowledge that makes knowledge a product of some justificatory relationship (having good evidence, excellent reasons, etc.) that could relate one to something false . . . This is [a] problem for justificational accounts. The problem is evaded in the information-theoretic model, because one can get into an appropriate justificational relationship to something false, but one cannot get into an appropriate informational relationship to something false. (F. Dretske (1985), "Precis of Knowledge and the Flow of Information." In: Hilary Kornblith, ed., Naturalizing Epistemology. Cambridge: MIT Press)
Solving the Gettier problem is, however, a bit more complex than this passage suggests. Consider again the case of Henry in Barn County. He sees a real barn in front of him, yet does not know that there is a barn nearby. Exactly how can Dretske's theory explain Henry's failure to know? After all, he perceives an actual barn, and so does not stand in any informational relationship to something false. So if perception, on account of its reliability, normally conveys information, it should do so in this case as well. Alas, it does not. Clearly, if a theory like Dretske's is to handle this case and others like it, it must be supplemented with a clause that makes it immune to the fake-barn case and other examples like it.