The Bayesian philosophy involves a completely different approach to statistics.
Bayesian estimation is considered here for the basic situation: estimating a parameter θ given a random sample from a particular distribution. The corresponding classical approach is the method of maximum likelihood.
The fundamental difference between Bayesian and classical methods is that the parameter θ is considered to be a random variable in Bayesian methods.
In classical statistics θ is a fixed but unknown quantity. This leads to difficulties such as the careful interpretation required for classical confidence intervals, where it is the interval that is random. As soon as the data are observed and a numerical interval is calculated, there is no probability involved. A statement such as P(10.45 < θ < 13.26) = 0.95 cannot be made because θ is not a random variable.
In Bayesian statistics no such difficulties arise, and probability statements can be made about the values of a parameter θ.
It is therefore quite possible to calculate a Bayesian confidence interval (often called a credible interval) for a parameter.
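As a sketch of how such an interval arises, consider the conjugate normal model with known data variance: if the prior for θ is normal and the observations are normal, the posterior for θ is again normal, and a 95% interval for θ can be read directly from that posterior. All numbers below are hypothetical, chosen purely for illustration.

```python
import math

# Assumed model (illustrative, not from the text):
# prior: theta ~ N(mu0, tau0^2); data: x_i ~ N(theta, sigma^2), sigma known.
mu0, tau0 = 12.0, 2.0   # assumed prior mean and prior standard deviation
sigma = 3.0             # assumed known data standard deviation
data = [10.2, 11.8, 13.1, 12.4, 11.5]
n, xbar = len(data), sum(data) / len(data)

# Conjugate normal-normal update: the posterior precision is the sum of
# the prior precision and the data precision.
post_prec = 1 / tau0**2 + n / sigma**2
post_var = 1 / post_prec
post_mean = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)

# 95% credible interval: a direct probability statement about theta,
# of exactly the kind that classical intervals cannot support.
half_width = 1.96 * math.sqrt(post_var)
lo, hi = post_mean - half_width, post_mean + half_width
print(f"P({lo:.2f} < theta < {hi:.2f}) = 0.95")
```

Because θ is treated as a random variable with a posterior distribution, the printed statement is a genuine probability statement about θ, not a statement about the randomness of the interval.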
Another advantage of Bayesian statistics is that it lets us make use of any information we already have about the situation under investigation. Researchers investigating an unknown population parameter often have information available from other sources, in advance of the study, that gives a strong indication of the values the parameter is likely to take. This additional information might be in a form that cannot be incorporated directly in the current study. The classical approach offers no scope for taking such information into account, whereas the Bayesian approach allows it to be incorporated, through the prior distribution, when estimating a population parameter.
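A minimal sketch of this idea, using assumed numbers, is the beta-binomial model: prior knowledge about a success probability θ is encoded as a Beta(a, b) distribution and then combined with the data from the current study. The prior parameters and trial counts below are hypothetical.

```python
# Assumed prior (illustrative): earlier studies suggest theta is near 0.8,
# encoded as Beta(a, b) with mean a / (a + b) = 0.8.
a, b = 8, 2
k, n = 3, 10   # assumed current study: 3 successes in 10 trials

# Conjugate update: with k successes in n trials, the posterior is
# Beta(a + k, b + n - k).
post_a, post_b = a + k, b + n - k
post_mean = post_a / (post_a + post_b)   # Bayesian point estimate
mle = k / n                              # classical maximum-likelihood estimate

print(f"MLE = {mle:.2f}, posterior mean = {post_mean:.2f}")
```

The classical estimate uses only the current data (0.30 here), while the posterior mean (0.55 here) is pulled towards the prior, showing how the additional information enters the estimate.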