I've taken courses mainly on mathematical statistics, probability, and computational statistics. Those courses only deal with well-defined theoretical problem settings, so although they helped me refine my mathematical skills, they didn't really help me build a philosophy. Now I'm taking 'STAT525: Intermediate Statistical Methodology', and it makes me think more about the foundations of Statistics.

When I first heard about Bayesian Statistics as an undergraduate, it didn't appeal much to me, since the Bayesians' notion of "personal" probability didn't make much sense. In most cases, nobody knows which prior to use. Then what does a posterior probability mean?

But it seems that even frequentists are not that rigorous in interpreting *probability* when it comes to *application* of their theory (which should be the ultimate goal of all statistical research). When using linear regression models, we all know that no error is really Gaussian, so every goodness-of-fit test fails when the number of data points is sufficiently large. However, we still use the usual regression models since they are "robust" to deviations from normality: our estimates and confidence intervals may not vary too much under mild conditions. How can we interpret *probabilities*, then? For example, how can we interpret what a 95% confidence interval means when it does not contain the true parameter with 95% frequency?
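Both claims above are easy to see in a small simulation. The sketch below (a minimal illustration, assuming NumPy and SciPy are available; the t(10) error distribution and sample sizes are my own choices for demonstration) shows that a normality test rejects a mild deviation from Gaussianity once n is large, while the usual t-based 95% interval for the mean still covers the truth close to 95% of the time under that same deviation.

```python
# Illustrates two claims from the paragraph above:
#   1. with enough data, a goodness-of-fit test rejects even a mild
#      deviation from Gaussian errors;
#   2. the standard t-based 95% confidence interval nevertheless keeps
#      roughly its nominal coverage under that deviation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Errors from a t-distribution with 10 degrees of freedom: visually
# near-Gaussian, but with slightly heavier tails than a true normal.
df = 10

# 1. Goodness-of-fit: D'Agostino-Pearson normality test at large n.
_, p_large = stats.normaltest(rng.standard_t(df, size=100_000))
print(f"normality test p-value at n=100000: {p_large:.2e}")  # tiny -> reject

# 2. Empirical coverage of the standard t-interval for the mean.
n, reps, hits = 50, 2000, 0
for _ in range(reps):
    x = rng.standard_t(df, size=n)  # true mean is 0
    lo, hi = stats.t.interval(0.95, n - 1, loc=x.mean(), scale=stats.sem(x))
    hits += (lo <= 0.0 <= hi)
print(f"empirical coverage: {hits / reps:.3f}")  # close to 0.95
```

So the model is demonstrably wrong, yet the frequentist procedure built on it behaves almost as advertised, which is exactly what makes the interpretation of that "95%" uncomfortable.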

Are Statisticians really in a position to blame data mining/machine learning people because the outcomes of their methods have no clear probabilistic interpretation? We know that the SVM is asymptotically a median classifier. Don't most statistical methods rely only on asymptotic results, especially when analyzing practical problems?

How much "objective" a statistical analysis can be? How much of "objectivity" is required? Can I answer these questions before I get Ph.D degree?