If I can make up priors, why can't I make up posteriors?
My question is not meant to be a criticism of Bayesian methods; I am simply trying to understand the Bayesian view. Why is it reasonable to believe we know the distribution of our parameters, but not our parameters given data?
Tags: bayesian, mathematical-statistics
asked Apr 14 at 18:12 by purpleostrich
Priors are determined 'prior' to seeing data, presumably according to a reasonable assessment of the situation. ('Made up' has an unfortunate feel of caprice, or snark, and doesn't seem justified.) The prior distribution and the information from the sample (presumably not a matter of opinion) are combined to get the posterior. If you believe your prior distribution is reasonable and that the data were collected honestly, then you logically should believe the posterior. The choice of prior indirectly affects the posterior, but you are not allowed to 'make up' the posterior.
– BruceET
Apr 14 at 18:26
"I am simply trying to understand the Bayesian view." Take (a) what you already believe about the world (prior) and (b) new experiences (data), and mush them together to make a new belief about the world (posterior). Wash, rinse, repeat.
– Alexis
Apr 14 at 20:41
@Alexis - "mush them together in the optimal way", where the latter four words mark the difference between Bayesian updating and other updating. BTW, I'm going to steal your comment (+1) for future non-CV use!
– jbowman
Apr 14 at 20:56
Be my guest, @jbowman! "Mush them together" was of course far too much of a poetic license to be a term of art. :)
– Alexis
Apr 14 at 20:58
@BruceET, why not adapt that into an official answer?
– gung♦
Apr 15 at 11:04
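To make the "prior + data → posterior, wash, rinse, repeat" loop in these comments concrete, here is a minimal sketch of sequential Bayesian updating. It assumes a conjugate Beta–Binomial coin-tossing model, and the batch counts are made up purely for illustration:

```python
# Sequential Bayesian updating: each posterior becomes the prior for the
# next batch of data ("wash, rinse, repeat").

# Beta(a, b) prior on theta = P(heads); Beta(1, 1) is the uniform prior.
a, b = 1.0, 1.0

batches = [  # (heads, tails) observed in each batch -- made-up numbers
    (7, 3),
    (6, 4),
    (8, 2),
]

for i, (heads, tails) in enumerate(batches, start=1):
    # Conjugate update: the posterior is Beta(a + heads, b + tails).
    a += heads
    b += tails
    print(f"after batch {i}: posterior Beta({a:.0f}, {b:.0f}), mean {a / (a + b):.3f}")
```

With each batch the initial Beta(1, 1) prior matters less and the accumulated data matter more.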
4 Answers
Well, in Bayesian statistics you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, it is very hard to justify why anyone should care about the output of your Bayesian analysis.
So while it's true that the practitioner has some freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using a nonsense prior will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, one should ideally choose a likelihood function that is flexible enough to capture one's uncertainty, yet constrained enough to make inference with limited data possible.
To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choose a model with that flexibility. If we were to simply leave "treatment" out of our set of regression parameters, then no matter what the outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't even constrain the treatment effect to have a finite number of discontinuities. Then (without strong priors, at least) we have almost no hope of our estimated treatment effect converging, no matter our sample size. Thus, our inference can be completely butchered by a poor choice of likelihood function, just as it can be by a poor choice of prior.
Of course, in reality we wouldn't choose either of these extremes, but we still make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interactions with other variables? There is always a tradeoff between "sufficiently flexible" and "estimable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (e.g., a continuous treatment effect is probably relatively smooth and probably doesn't involve very high-order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of the flexibility that stems from our uncertainty.
In summary, a practitioner has freedom in selecting both the prior and the likelihood function. In order for an analysis to be in any way meaningful, both choices should be a reasonably good approximation of real phenomena.
EDIT:
In the comments, @nanoman brings up an interesting take on the problem. One way to think about this is that the likelihood function is a generic, non-subjective function: all possible models are included in its functional form before the prior is applied. Typically, however, the prior puts positive probability on only a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible, as the likelihood alone is too flexible to support any form of inference.
While this isn't the universally accepted definition of the prior and the likelihood function, this view does have a few advantages. For one, it is very natural in Bayesian model selection: rather than just putting priors on the parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, this view cleanly divides inference into a subjective part (the prior) and a non-subjective part (the likelihood function). This is nice because it clearly demonstrates that one cannot learn anything without some subjective constraints, as the likelihood alone would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.
– Cliff AB
answered Apr 14 at 19:02, edited Apr 15 at 15:29
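As a concrete illustration of the two extremes in this answer, here is a minimal sketch. It assumes a toy linear model with a single continuous treatment, a known noise standard deviation, and simulated data with a true effect of 0.5; all of these choices are made up for illustration, not taken from the answer. Omitting the treatment from the likelihood forces "no effect" regardless of the data, while including it shows that the prior and the likelihood jointly shape the posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: a continuous treatment x and an outcome y with a true
# effect of 0.5 and unit noise (treated as known for simplicity).
n = 50
sigma = 1.0
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=sigma, size=n)

def posterior_effect(prior_sd):
    """Closed-form posterior for beta in y = beta * x + noise,
    with a N(0, prior_sd^2) prior and known noise sd sigma."""
    prec = 1.0 / prior_sd**2 + np.sum(x**2) / sigma**2
    mean = (np.sum(x * y) / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

# Likelihood choice 1: simply leave the treatment out of the model.
# The estimated effect is 0 by construction, regardless of the data.
print("omit treatment:          effect = 0 (by construction)")

# Likelihood choice 2: include the treatment, under two different priors.
for prior_sd in (10.0, 0.01):
    m, s = posterior_effect(prior_sd)
    print(f"include, prior sd {prior_sd:5.2f}: posterior mean {m:.3f}, sd {s:.3f}")
```

A diffuse prior lets the data dominate, while a prior tightly concentrated at zero behaves much like omitting the treatment altogether, which is one way to see that the likelihood choice and the prior choice are two sides of the same coin.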
I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
– nanoman
Apr 15 at 1:01
@nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible but constrained by the prior, or as just constrained, is a matter of philosophy, not mathematics; either way, the function defined as $p(d \mid \theta)$ in any formula written explicitly as $p(d \mid \theta)\,p(\theta)$ is subjective in nature.
– Cliff AB
Apr 15 at 1:14
I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
– nanoman
Apr 15 at 1:32
The point is that the prior is zero on all the models we aren't considering.
– nanoman
Apr 15 at 1:32
Saw your comment edit. My perspective is that $\theta$ includes all the information (model form and parameters) needed to uniquely define $p(d \mid \theta)$. That is, $\theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(\theta)$ (mostly zero), which tells us which such models we contemplate and with what weight.
– nanoman
Apr 15 at 1:43
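A tiny numerical version of the model-selection view sketched in these comments, with made-up coin-toss data and two hypothetical candidate models; every model not listed implicitly receives prior weight zero:

```python
from math import comb

# Made-up data: 15 heads in 20 tosses.
n, k = 20, 15

# Two candidate models for theta = P(heads); anything else has prior weight 0.
#   M0: theta is exactly 0.5
#   M1: theta ~ Beta(1, 1) (uniform)
prior = {"M0": 0.5, "M1": 0.5}

# Marginal likelihoods p(data | model):
#   M0: Binomial(n, 0.5) pmf at k
#   M1: Binomial pmf integrated over a uniform prior on theta, which is 1/(n+1)
marginal = {
    "M0": comb(n, k) * 0.5**n,
    "M1": 1.0 / (n + 1),
}

evidence = sum(prior[m] * marginal[m] for m in prior)
for m in prior:
    print(f"P({m} | data) = {prior[m] * marginal[m] / evidence:.3f}")
```

The prior over models and the priors within each model together play exactly the role described above: they state which likelihoods are on the table and with what weight.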
If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.
– Aksakal
answered Apr 14 at 20:44
Indeed, this is where my confusion lies.
– purpleostrich
Apr 15 at 0:01
In many statistical problems you have some data, let's denote it as $X$, and you want to learn about some "parameter" $\theta$ of the distribution of the data, i.e. to calculate things of the $\theta \mid X$ kind (conditional distribution, conditional expectation, etc.). There are several ways this can be achieved, including maximum likelihood, and without getting into a discussion of whether and which of them is better, you can consider using Bayes' theorem as one of them. One of the advantages of using Bayes' theorem is that it gives you the posterior directly, given that you know the conditional distribution of the data given the parameter (the likelihood) and the distribution of the parameter (the prior); you simply calculate
$$
\overbrace{p(\theta \mid X)}^{\text{posterior}} = \frac{\overbrace{p(X \mid \theta)}^{\text{likelihood}}\;\overbrace{p(\theta)}^{\text{prior}}}{p(X)}
$$
The likelihood is the conditional distribution of your data, so it is a matter of understanding your data and choosing some distribution that approximates it best, and it is a rather uncontroversial concept. As for the prior, notice that for the above formula to work you need some prior. In a perfect world, you would know the distribution of $\theta$ a priori and apply it to get the posterior. In the real world, the prior is something that you assume, given your best knowledge, and plug into Bayes' theorem. You could choose an "uninformative" prior $p(\theta) \propto 1$, but there are many arguments that such priors are neither "uninformative" nor reasonable. What I'm trying to say is that there are many ways you could come up with some distribution to use as a prior. Some consider priors a blessing, since they make it possible to bring your out-of-data knowledge into the model, while others, for exactly the same reason, consider them problematic.
Answering your question: sure, you can assume that the distribution of the parameter given the data is something. On a day-to-day basis we constantly make decisions based on assumptions that are not always rigorously validated. However, the difference between the prior and the posterior is that the posterior is something that you learned from the data (and the prior). If it isn't, and is instead your wild guess, then it is not a posterior any more. As for why we allow ourselves to "make up" priors, there are two answers depending on who you ask: either (a) for the machinery to work we need some prior, or (b) we know something in advance that we want to include in our model, and priors make this possible. In either case, we usually expect the data, rather than the prior, to have the "final word".
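A minimal sketch of the formula above, computed by brute force on a grid so that no conjugacy is needed. The data, the known noise standard deviation, and the Laplace prior are all assumptions made up for illustration; the posterior is just the likelihood times the prior, normalized by $p(X)$:

```python
import numpy as np

# Made-up data, assumed Normal(theta, sigma^2) with sigma known.
X = np.array([1.8, 2.3, 1.5, 2.9, 2.1])
sigma = 1.0

# Grid over the parameter theta.
theta = np.linspace(-5, 10, 2001)

# Log-likelihood p(X | theta) at each grid point (product of normal densities).
log_lik = np.array([-0.5 * np.sum((X - t) ** 2) / sigma**2 for t in theta])
likelihood = np.exp(log_lik - log_lik.max())  # rescaled; constants cancel below

# Prior p(theta): a Laplace(0, b) density, chosen only for illustration.
b = 2.0
prior = np.exp(-np.abs(theta) / b) / (2 * b)

# Bayes' theorem on the grid: posterior = likelihood * prior / p(X),
# with p(X) approximated by numerical integration over theta.
unnorm = likelihood * prior
posterior = unnorm / np.trapz(unnorm, theta)

print("posterior mean:", np.trapz(theta * posterior, theta))
print("posterior mode:", theta[np.argmax(posterior)])
```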
Philosophically, there is nothing wrong with “eliciting a posterior.” It’s a bit more difficult to do in a coherent manner than with priors (because you need to respect the likelihood), but IMO you are asking a really good question.
To turn this into something practical, “making up” a posterior is a potentially useful way to elicit a prior. That is, I take all data realizations $X = x$ and ask myself what the posterior $\pi(\theta \mid x)$ would be. If I do this in a fashion that is consistent with the likelihood, then I will have equivalently specified $\pi(\theta)$. This is sometimes called “downdating.” Once you realize this, you will see that “making up the prior” and “making up the posterior” are basically the same thing. As I said, it is tricky to do this in a manner which is consistent with the likelihood, but even if you do it for just a few values of $x$ it can be very illuminating about what a good prior will look like.
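Here is a rough numerical sketch of this “downdating” idea, under assumptions made up for illustration: a Binomial(10, $\theta$) likelihood and an elicited posterior of Beta(10, 4) for the single hypothetical realization $x = 9$. Dividing the elicited posterior by the likelihood and renormalizing recovers the prior that would have produced it:

```python
import numpy as np
from math import comb

n = 10
theta = np.linspace(1e-4, 1 - 1e-4, 999)  # grid over the parameter

def binom_lik(x):
    """Binomial(n, theta) likelihood of x heads, evaluated on the grid."""
    return np.array([comb(n, x) * t**x * (1 - t) ** (n - x) for t in theta])

# Elicited posterior: if I saw x = 9 heads out of 10, I would want my
# posterior for theta to look like Beta(10, 4) (mean about 0.71) --
# deliberately more skeptical than the raw 9/10.
x_hyp = 9
elicited_post = theta ** (10 - 1) * (1 - theta) ** (4 - 1)  # unnormalized

# Downdating: posterior(theta | x) is proportional to likelihood * prior,
# so the implied prior is proportional to elicited posterior / likelihood.
implied_prior = elicited_post / binom_lik(x_hyp)
implied_prior /= np.trapz(implied_prior, theta)

# Analytically the implied prior here is Beta(1, 3), mean 0.25: holding a
# skeptical posterior at an extreme observation forces a skeptical prior.
print("implied prior mean:", np.trapz(theta * implied_prior, theta))
```

Repeating this for a few other hypothetical values of $x$, and checking that the implied priors agree, is essentially the consistency-with-the-likelihood requirement the answer mentions.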
Can you motivate why you would want to do this? I would guess you could be thinking of something like how one would want to use a spike and slab prior. Of course, the irony here is that we are perverting Bayesian statistics in order to obtain estimators whose frequentist properties we prefer.
– Cliff AB
Apr 15 at 0:43
@Cliff this type of reasoning can suggest, for example, why we want heavy-tailed priors in the normal means problem. Suppose I want a prior which is symmetric, has median 0, and has some natural scale $s$. I can ask “what would I believe about $\theta$ if I observed data $x = Bs$ for some large $B$?” For most problems, an honest assessment of what I would believe about $\theta$ would preclude the use of, for example, a normal prior.
– guy
Apr 15 at 2:27
add a comment |
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "65"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f403013%2fif-i-can-make-up-priors-why-cant-i-make-up-posteriors%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
4 Answers
4
active
oldest
votes
4 Answers
4
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.
So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.
To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.
Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.
In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.
EDIT:
In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.
While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.
$endgroup$
1
$begingroup$
I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
$endgroup$
– nanoman
Apr 15 at 1:01
1
$begingroup$
@nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
$endgroup$
– Cliff AB
Apr 15 at 1:14
$begingroup$
I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
The point is that the prior is zero on all the models we aren't considering.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
$endgroup$
– nanoman
Apr 15 at 1:43
|
show 3 more comments
$begingroup$
Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.
So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.
To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.
Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.
In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.
EDIT:
In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.
While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.
$endgroup$
1
$begingroup$
I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
$endgroup$
– nanoman
Apr 15 at 1:01
1
$begingroup$
@nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
$endgroup$
– Cliff AB
Apr 15 at 1:14
$begingroup$
I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
The point is that the prior is zero on all the models we aren't considering.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
$endgroup$
– nanoman
Apr 15 at 1:43
|
show 3 more comments
$begingroup$
Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.
So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.
To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.
Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.
In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.
EDIT:
In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.
While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.
$endgroup$
Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.
So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.
To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.
Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.
In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.
EDIT:
In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.
While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.
edited Apr 15 at 15:29
answered Apr 14 at 19:02
Cliff ABCliff AB
14.1k12667
14.1k12667
1
$begingroup$
I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
$endgroup$
– nanoman
Apr 15 at 1:01
1
$begingroup$
@nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
$endgroup$
– Cliff AB
Apr 15 at 1:14
$begingroup$
I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
The point is that the prior is zero on all the models we aren't considering.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
$endgroup$
– nanoman
Apr 15 at 1:43
|
show 3 more comments
1
$begingroup$
I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
$endgroup$
– nanoman
Apr 15 at 1:01
1
$begingroup$
@nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
$endgroup$
– Cliff AB
Apr 15 at 1:14
$begingroup$
I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
The point is that the prior is zero on all the models we aren't considering.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
$endgroup$
– nanoman
Apr 15 at 1:43
1
1
$begingroup$
I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
$endgroup$
– nanoman
Apr 15 at 1:01
$begingroup$
I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
$endgroup$
– nanoman
Apr 15 at 1:01
1
1
$begingroup$
@nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
$endgroup$
– Cliff AB
Apr 15 at 1:14
$begingroup$
@nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
$endgroup$
– Cliff AB
Apr 15 at 1:14
$begingroup$
I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
The point is that the prior is zero on all the models we aren't considering.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
The point is that the prior is zero on all the models we aren't considering.
$endgroup$
– nanoman
Apr 15 at 1:32
$begingroup$
Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
$endgroup$
– nanoman
Apr 15 at 1:43
$begingroup$
Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
$endgroup$
– nanoman
Apr 15 at 1:43
|
show 3 more comments
$begingroup$
If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.
$endgroup$
1
$begingroup$
Indeed, this is where my confusion lies.
$endgroup$
– purpleostrich
Apr 15 at 0:01
add a comment |
$begingroup$
If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.
$endgroup$
1
$begingroup$
Indeed, this is where my confusion lies.
$endgroup$
– purpleostrich
Apr 15 at 0:01
add a comment |
$begingroup$
If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.
$endgroup$
If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.
answered Apr 14 at 20:44
AksakalAksakal
40.5k454122
40.5k454122
1
$begingroup$
Indeed, this is where my confusion lies.
$endgroup$
– purpleostrich
Apr 15 at 0:01
add a comment |
1
$begingroup$
Indeed, this is where my confusion lies.
$endgroup$
– purpleostrich
Apr 15 at 0:01
1
1
$begingroup$
Indeed, this is where my confusion lies.
$endgroup$
– purpleostrich
Apr 15 at 0:01
$begingroup$
Indeed, this is where my confusion lies.
$endgroup$
– purpleostrich
Apr 15 at 0:01
add a comment |
$begingroup$
In many statistical problems you have some data, let's denote it $X$, and you want to learn about some "parameter" $\theta$ of the distribution of the data, i.e. calculate things of the $\theta \mid X$ kind (conditional distribution, conditional expectation, etc.). There are several ways this can be achieved, including maximum likelihood, and without getting into a discussion of whether and which of them is better, you can consider Bayes' theorem as one of them. One of the advantages of Bayes' theorem is that it gives you the answer directly: given the conditional distribution of the data given the parameter (the likelihood) and the distribution of the parameter (the prior), you simply calculate
$$
\overbrace{p(\theta \mid X)}^{\text{posterior}} = \frac{\overbrace{p(X \mid \theta)}^{\text{likelihood}}\;\overbrace{p(\theta)}^{\text{prior}}}{p(X)}
$$
The likelihood is the conditional distribution of your data, so specifying it is a matter of understanding your data and choosing some distribution that approximates it best; it is a rather uncontroversial concept. As for the prior, notice that for the above formula to work you need some prior. In a perfect world, you would know the distribution of $\theta$ a priori and apply it to get the posterior. In the real world, the prior is something you assume, given your best knowledge, and plug into Bayes' theorem. You could choose an "uninformative" prior $p(\theta) \propto 1$, but there are many arguments that such priors are neither "uninformative" nor reasonable. The point is that there are many ways you could come up with some distribution to use as a prior. Some consider priors a blessing, since they make it possible to bring your out-of-data knowledge into the model, while others, for exactly the same reason, consider them problematic.
Answering your question: sure, you can assume that the distribution of the parameter given the data is such-and-such. On a day-to-day basis we constantly make decisions based on assumptions that are not always rigorously validated. However, the difference between the prior and the posterior is that the posterior is something you learned from the data (and the prior). If it isn't, and is instead just a wild guess, then it is not a posterior any more. As for why we allow ourselves to "make up" priors, there are two answers depending on whom you ask: either (a) the machinery needs some prior to work at all, or (b) we know something in advance, want to include it in our model, and priors make that possible. In either case, we usually expect the data, rather than the prior, to have the "final word".
$endgroup$
answered Apr 14 at 20:29
– Tim♦
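As a small hedged illustration of the formula above, the sketch below (Python, assuming NumPy and SciPy are available; the Beta(2, 2) prior, the counts n = 50 and k = 36, and the grid are all made-up choices) computes the posterior for a binomial proportion on a grid as likelihood times prior, renormalised, and checks it against the exact conjugate Beta result.

```python
import numpy as np
from scipy import stats

theta = np.linspace(0.001, 0.999, 999)   # grid over the binomial proportion
d_theta = theta[1] - theta[0]

# Prior: a made-up Beta(2, 2) belief about theta before seeing any data.
prior = stats.beta(2, 2).pdf(theta)

# Data: k successes in n trials (made-up numbers), giving the likelihood p(X | theta).
n, k = 50, 36
likelihood = stats.binom(n, theta).pmf(k)

# Bayes' theorem on the grid: posterior is proportional to likelihood * prior.
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * d_theta)

# With a conjugate Beta prior the exact posterior is Beta(2 + k, 2 + n - k);
# the grid version should agree up to discretisation error.
exact = stats.beta(2 + k, 2 + n - k).pdf(theta)
print(np.max(np.abs(posterior - exact)))
```

Note how the prior's pseudo-counts (the two 2's) are swamped by k and n - k as the sample grows, which is one way of seeing the "data has the final word" point.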
$begingroup$
Philosophically, there is nothing wrong with “eliciting a posterior.” It’s a bit more difficult to do in a coherent manner than with priors (because you need to respect the likelihood), but IMO you are asking a really good question.
To turn this into something practical, “making up” a posterior is a potentially useful way to elicit a prior. That is, I take all data realizations $X = x$ and ask myself what the posterior $\pi(\theta \mid x)$ would be. If I do this in a fashion that is consistent with the likelihood, then I will have equivalently specified $\pi(\theta)$. This is sometimes called “downdating.” Once you realize this, you will see that “making up the prior” and “making up the posterior” are basically the same thing. As I said, it is tricky to do this in a manner which is consistent with the likelihood, but even if you do it for just a few values of $x$ it can be very illuminating about what a good prior will look like.
$endgroup$
answered Apr 14 at 23:57
– guy
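Here is a rough numerical sketch of that elicitation exercise (Python, assuming NumPy and SciPy; the N(theta, 1) likelihood, the single imagined realisation x = 2, and the stated posterior are all made-up choices): state what you would believe about theta after seeing one particular x, divide out the likelihood, and look at the prior you have implicitly committed to.

```python
import numpy as np
from scipy import stats

theta = np.linspace(-15, 15, 3001)
x = 2.0  # one imagined data realisation

# Elicited belief: "after seeing x, theta should look like N(0.8 * x, variance 0.8)".
stated_posterior = stats.norm(0.8 * x, np.sqrt(0.8)).pdf(theta)

# Likelihood of that x, viewed as a function of theta (x | theta ~ N(theta, 1)).
likelihood = stats.norm(theta, 1.0).pdf(x)

# Since posterior is proportional to likelihood * prior,
# the implied prior is proportional to posterior / likelihood.
implied_prior = stated_posterior / likelihood
implied_prior /= implied_prior.sum() * (theta[1] - theta[0])

# For this particular elicitation the implied prior works out to N(0, 4),
# i.e. a normal with standard deviation 2; the difference below is ~0.
print(np.max(np.abs(implied_prior - stats.norm(0, 2).pdf(theta))))
```

Repeating this for a few different values of x and checking that the implied priors agree is exactly the consistency-with-the-likelihood requirement mentioned above.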
$begingroup$
Can you motivate why you would want to do this? I would guess you could be thinking of something like how one would want to use a spike and slab prior. Of course, the irony here is that we are perverting Bayesian statistics in order to obtain estimators whose frequentist properties we prefer.
$endgroup$
– Cliff AB
Apr 15 at 0:43
$begingroup$
@Cliff this type of reasoning can suggest, for example, why we want heavy-tailed priors in the normal means problem. Suppose I want a prior which is symmetric, has median 0, and has some natural scale $s$. I can ask “what would I believe about $\theta$ if I observed data $x = Bs$ for some large $B$?” For most problems, an honest assessment of what I would believe about $\theta$ would preclude the use of, for example, a normal prior.
$endgroup$
– guy
Apr 15 at 2:27
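A quick sketch of this effect (Python, assuming NumPy and SciPy; the N(theta, 1) likelihood, prior scale s = 1, and B = 10 are made-up choices): with an observation ten prior scales from zero, a normal prior pulls the posterior mean roughly halfway back towards zero, while a Cauchy prior with the same scale largely defers to the data.

```python
import numpy as np
from scipy import stats

theta = np.linspace(-30.0, 50.0, 8001)
x = 10.0  # an observation many prior scales away from zero

# Likelihood of x as a function of theta, for x | theta ~ N(theta, 1).
likelihood = stats.norm(theta, 1.0).pdf(x)

def posterior_mean(prior_pdf):
    # Posterior mean on the grid; the grid spacing cancels in the ratio.
    w = likelihood * prior_pdf
    return np.sum(theta * w) / np.sum(w)

print(posterior_mean(stats.norm(0, 1).pdf(theta)))    # about 5.0: heavy shrinkage
print(posterior_mean(stats.cauchy(0, 1).pdf(theta)))  # about 9.8: follows the data
```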
$begingroup$
Priors are determined 'prior' to seeing data, presumably according to a reasonable assessment of the situation. ('Made up' has an unfortunate feel of caprice, or snark, and doesn't seem justified.) The prior dist'n and information from the sample (presumably not a matter of opinion) are combined to get the posterior. If you believe your prior distribution is reasonable and that the data were collected honestly, then you logically should believe the posterior. // The choice of prior indirectly affects the posterior, but you are not allowed to 'make up' the posterior.
$endgroup$
– BruceET
Apr 14 at 18:26
$begingroup$
"I am simply trying to understand the Bayesian view." Take (a) what you already believe about the world (prior), and (b) new experiences (data), and mush them together, to make a new belief about the world (posterior). Wash, rinse, repeat.
$endgroup$
– Alexis
Apr 14 at 20:41
$begingroup$
@Alexis - "mush them together in the optimal way", where the latter four words mark the difference between Bayesian updating and other updating. BTW, I'm going to steal your comment (+1) for future non-CV use!
$endgroup$
– jbowman
Apr 14 at 20:56
$begingroup$
Be my guest, @jbowman ! "Mush them together" was of course far too much of a poetic license to be a term of art. :)
$endgroup$
– Alexis
Apr 14 at 20:58
$begingroup$
@BruceET, why not adapt that into an official answer?
$endgroup$
– gung♦
Apr 15 at 11:04