Investigate alternate meta-learners #68
Comments
Updated comparison of results for simulated data with both ANA and respondent quality estimated with a heterogeneous, 1000-member ensemble:
Updated comparison of results for real data where we account for both ANA and respondent quality with a heterogeneous, 1000-member ensemble:
And to check for any issues with how we are currently implementing the respondent quality pathology, here are updated results for simulated data with ANA only with a heterogeneous, 1000-member ensemble:
And here are results for real data where we account for ANA only with a heterogeneous, 1000-member ensemble:
FWIW, @RogerOverNOut I've re-run the model with just ANA for both simulated and real data. You said it might be easier to work with that when trying your alternative meta-learner. I've also given things much clearer names. See the shared folder.
Thanks Marc
@jeff-dotson @RogerOverNOut I made my first pass using an MNL as the meta-learner. The results are in the tables above under "Ensemble (Logit Weights)." I haven't done this before, so I wanted to describe what I'm doing. Please let me know if you see any red flags:
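For concreteness, one common way to set up an "MNL meta-learner" over ensemble members is stacking: treat each member's predicted probability of a respondent's observed choice as an input, and estimate simplex-constrained member weights by maximum likelihood. This is only a minimal sketch of that idea, not the actual implementation in this repo ({Stan}/R in practice); the probability matrix `p` and the softmax parameterization of the weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical holdout predictions: p[i, k] is the probability ensemble
# member k assigns to respondent i's observed choice (made-up numbers).
p = np.array([
    [0.8, 0.3, 0.5],
    [0.2, 0.9, 0.4],
    [0.6, 0.2, 0.7],
    [0.3, 0.7, 0.6],
])

def neg_log_lik(theta):
    # Softmax keeps the member weights nonnegative and summing to one.
    w = np.exp(theta - theta.max())
    w = w / w.sum()
    # Stacked predictive probability of each observed choice is p @ w.
    return -np.log(p @ w).sum()

res = minimize(neg_log_lik, x0=np.zeros(p.shape[1]))
w = np.exp(res.x - res.x.max())
w = w / w.sum()
print(w)  # estimated stacking weights, one per ensemble member
```

By construction the optimized weights can do no worse (in holdout log score) than equal weighting, since uniform weights are the starting point of the search.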
An MNL using probabilities instead of the choices has been added to the tables above. I've also added weights using simple counts of the hits as well as a sum of probabilities for the hits. Note that the weights produced using simple counts of the hits are the same as the weights from an MNL meta-learner using probabilities.
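To illustrate the two counting-based weighting schemes, here is a minimal sketch with made-up numbers. The probability matrix and the hit rule (a "hit" when a member puts majority probability on the observed choice) are assumptions for illustration, not the repo's actual definitions.

```python
import numpy as np

# Hypothetical holdout predictions: p[i, k] is the probability ensemble
# member k assigns to respondent i's observed choice (made-up numbers).
p = np.array([
    [0.9, 0.4, 0.7],
    [0.2, 0.8, 0.6],
    [0.7, 0.3, 0.2],
    [0.6, 0.1, 0.8],
])

# Count a "hit" when the member puts majority probability on the choice.
hits = p > 0.5

# Weight each member by its share of total hits...
hit_weights = hits.sum(axis=0) / hits.sum()
# ...or by its share of total predicted probability for the hits' choices.
prob_weights = p.sum(axis=0) / p.sum()

print(hit_weights)   # e.g. [3/7, 1/7, 3/7] for these made-up numbers
print(prob_weights)  # smoother: partial credit for near-misses
```

The probability-sum weights are a smoothed version of the hit-count weights: a member that narrowly misses a choice still gets partial credit instead of zero.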
Using {logitr} for the meta-learner would help boost speed.
Using LOO for model stacking produces improvement in terms of LOO only for the conjoint ensemble. What about alternate meta-learners? Use the meta-learner branch.