p-value random effect in glmer() in lme4 package
I know that in order to test whether a random effect has a significant impact on a model, it's necessary to remove one random effect at a time and compare each pair of models with the anova() function in the lme4 package, or with the exactLRT() function from the RLRsim package.
However, this approach worked well for me with lmer() but not with glmer().
Specifically, I want to find out whether the inclusion of a random effect in my model is significant.
model0 <- glm(Feed_kg_DM_day ~ Week, data = dietdef2, family = gaussian(link = log))
model1 <- glmer(Feed_kg_DM_day ~ Week + (1 | rat), data = dietdef2, family = gaussian(link = log))
If I run anova(model0, model1), it doesn't show me a p-value:
Analysis of Deviance Table
Model: gaussian, link: log
Response: Feed_kg_DM_day
Terms added sequentially (first to last)
     Df Deviance Resid. Df Resid. Dev
NULL                  2756     1119.1
Week 14   1.5985      2742     1117.5
How can I tell whether the effect of the random variable is significant?
Thanks a lot.
r mixed-model lme4-nlme
asked May 26 at 9:41 by ribelles
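A likely reason anova(model0, model1) prints no p-value is method dispatch: with the glm listed first, R calls anova.glm(), which drops the glmer fit and returns a sequential deviance table for model0 alone. A minimal sketch of two ways to obtain the likelihood-ratio test, assuming the poster's two models above (both must be fitted by maximum likelihood to the same rows of dietdef2):

# List the merMod fit first so that lme4's anova() method is dispatched;
# recent lme4 versions accept a glm fit alongside a merMod and print
# Chisq, Df and Pr(>Chisq).
anova(model1, model0)

# The same statistic computed by hand from the log-likelihoods:
lrt <- as.numeric(2 * (logLik(model1) - logLik(model0)))
pchisq(lrt, df = 1, lower.tail = FALSE)

Note that this p-value is conservative, because under the null hypothesis the variance sits on the boundary of its parameter space; see the answer below and the sketch that follows it.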
If there are repeated measures / clustering, then you shouldn't be testing for significance. Unless the estimated variance is extremely small, retain the random effect without seeking a p-value.
– Robert Long, May 26 at 10:33
And to add to that: please read about the many problems with declaring statistical significance, and the even greater problems with declaring statistical non-significance.
– Frank Harrell, May 26 at 11:09
@FrankHarrell even though in general I agree with you, how do you propose one should judge whether, e.g., a model with nonlinear effects (e.g., using splines) for BMI, age and LDL cholesterol fits better than a model with only the nonlinear effect of age? Aren't a likelihood ratio test between the two models and the corresponding p-value useful for determining which model fits better?
– Dimitris Rizopoulos, May 26 at 18:59
Details are in the RMS book and course notes. In short, either use a chunk test to decide whether to keep or remove all the nonlinear terms together, or, better, pre-specify a model that is as complex as the information content in the data will support, and don't look back. If you think the relationships may not be linear, allow them to be nonlinear.
– Frank Harrell, May 26 at 20:47
1 Answer
In an experiment with repeated measurements, or an observational study involving clustering, the non-independence of observations within clusters (subjects, participants) is often handled very well by specifying random intercepts for the grouping variable in question. Unless the estimated variance is very small, the random intercepts should be retained.
Furthermore, testing the significance of the random effects is awkward because, under the null hypothesis, the parameter lies on the boundary of the parameter space (zero); and in any case, removing a parameter from a model on the basis of non-significance is a very questionable thing to do.
answered May 26 at 15:23 by Robert Long
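To make the boundary issue concrete: the naive likelihood-ratio test refers the statistic to a chi-squared distribution with 1 df, which is conservative because the null value (variance = 0) sits on the edge of the parameter space. A common approximation refers it instead to a 50:50 mixture of chi-squared(0) and chi-squared(1), which amounts to halving the p-value. A minimal sketch using the poster's two models (both fitted by maximum likelihood to identical data):

ll0 <- as.numeric(logLik(model0))   # glm, no random intercept
ll1 <- as.numeric(logLik(model1))   # glmer with (1 | rat)
lrt <- 2 * (ll1 - ll0)              # likelihood-ratio statistic

p_naive    <- pchisq(lrt, df = 1, lower.tail = FALSE)
p_boundary <- 0.5 * p_naive         # 50:50 mixture of chisq(0) and chisq(1)
c(LRT = lrt, p_naive = p_naive, p_boundary = p_boundary)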
But you would still need to determine whether you should include random effects for some of the predictor variables in the model? Unless you are in a confirmatory setting and able to fit a maximal model (ncbi.nlm.nih.gov/pmc/articles/PMC3881361)?
– Isabella Ghement, May 26 at 16:00
@IsabellaGhement the question is limited to random intercepts, and that was all I wanted to address here. But as to the wider question, Barr et al. have a lot to answer for with the terrible general advice to "Keep it Maximal", given the simplistic simulations they employed. See Bates et al. (2015) for a rebuttal.
– Robert Long, May 26 at 16:52
@IsabellaGhement it's hard to say too much about this in a comment, but my general approach is to include random effects (intercepts or slopes) ONLY when indicated by sound clinical/theoretical justification.
– Robert Long, May 26 at 16:56
@RobertLong First of all, I agree with you and Bates et al. (2015) that maximal models are a bad idea. But do you mean that you would not try to include, e.g., non-linear random slopes and see if they improve the model via, e.g., a likelihood ratio test? For example, in longitudinal datasets with 8-10 repeated measurements per subject on average, random intercepts and random slopes alone are often not enough to capture the correlations in the repeated measurements sufficiently well.
– Dimitris Rizopoulos, May 26 at 18:46
@DimitrisRizopoulos No, I didn't mean to imply that. In the type of situation you describe I would first want to plot/visualise the data for each subject, to assess any possible non-linearity, before doing what you suggest.
– Robert Long, May 27 at 8:21
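A minimal sketch of that per-subject visual check, using the lattice package; the object names longdat, y, time and id are hypothetical stand-ins for a longitudinal dataset:

library(lattice)

# One panel per subject: eyeball each trajectory for non-linearity
# before deciding whether non-linear random slopes are warranted.
xyplot(y ~ time | id, data = longdat, type = c("p", "smooth"))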