fit.plot issue #30
Comments
Hmmmm... I'll look into this more... can you send an example dataset and the code you used so I can reproduce the problem?
This error comes from deep in ggplot(), so I don't know if I can suppress it, but I'll check...
Carl.
On Thu, Oct 21, 2021 at 9:42 AM Kale Bentley wrote:

Hi @cschwarz-stat-sfu-ca,
I came across an issue with the post-model-run "fit.plot" within the TimeStratPetersenDiagError_fit.R function. Specifically, I am getting the message "Error : transformation for secondary axes must be monotonic". Quickly searching online, I found a GitHub post (tidyverse/ggplot2#3323) explaining the issue, though there isn't an obvious solution.
Out of the dozens of datasets I've run using BTSPAS, this is the first time I've come across this issue. Interestingly, the error doesn't occur until I run a large number of iterations, which I guess isn't too surprising if it is an outlier issue.
If this "fit.plot" issue can't be easily fixed, I am curious whether the function could be modified so that it keeps executing despite an error occurring in one of the post-model plots and/or predictive checks. I haven't fiddled with this functionality much, but it looks like there are several ways to implement it based on this StackOverflow post: https://stackoverflow.com/questions/8852406/r-script-how-to-continue-code-execution-on-error
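For illustration, a minimal sketch of the kind of guard described there (the plotting call and result object below are placeholders, not the actual BTSPAS internals):

# Hypothetical sketch: wrap one post-model plot in tryCatch() so a failure
# in that plot does not abort the rest of the fitting function.
fit.plot <- tryCatch(
  make_fit_plot(results),   # placeholder for the real ggplot() call
  error = function(e) {
    warning("fit.plot could not be drawn: ", conditionMessage(e))
    NULL                    # keep going with this plot missing
  }
)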
Again, if nothing else, it would be nice to have the model object saved regardless of a post-model-run error. This would at least partially solve the third issue I highlighted in issue #28 earlier this year.
Thanks for the time.
Kale
Hi Carl,
Thanks for the fast response.
Here's the dataset I'm using:
table_Run4_Final_BTSPAS_inputs.csv (https://github.com/cschwarz-stat-sfu-ca/BTSPAS/files/7392032/table_Run4_Final_BTSPAS_inputs.csv)
And the code:

n.chain <- 4
n.iter <- 200000
n.burnin <- 100000
n.sims <- 500
(n.thin <- (n.iter - n.burnin)/n.sims)
(n.draws <- n.sims*n.chain)

bad.n1 <- NULL   # Denote marks by "Final_Periods" as "bad"
bad.m2 <- c(10)  # Denote recaps by "Final_Periods" as "bad"
bad.u2 <- c(10)  # Denote maidens by "Final_Periods" as "bad"

# include_flow_covar <- c("No")   # Enter "Yes" or "No"
# include_panel_covar <- c("No")  # Enter "Yes" or "No"

start.time <- Sys.time()
BTSPAS_Res <-
  try(TimeStratPetersenNonDiagError_fit(
    title = "BTSPAS_NonDiag"
    , prefix = model_title
    , time = Period_model
    , n1 = n1
    , m2 = m2
    , u2 = u2
    , sampfrac = sampfrac
    , jump.after = jump.after
    , bad.n1 = bad.n1
    , bad.m2 = bad.m2
    , bad.u2 = bad.u2
    , logitP.cov = logitP.cov
    , n.chains = n.chain
    , n.iter = n.iter
    , n.burnin = n.burnin
    , n.sims = n.sims
    , tauU.alpha = 1
    , tauU.beta = 0.05
    , taueU.alpha = 1
    , taueU.beta = 0.05
    # , mu_xiP = logit(sum(m2, na.rm = TRUE)/sum(n1, na.rm = TRUE))
    # , tau_xiP = 1/var(logit((m2 + 0.5)/(n1 + 1)), na.rm = TRUE)
    , tauP.alpha = 0.001
    , tauP.beta = 0.001
    , run.prob = seq(0, 1, 0.1)
    , debug = FALSE
    , debug2 = FALSE
    # , InitialSeed = ceiling(runif(1, min = 0,
    #     max = if (engine == "jags") {1000000} else {14}))
  ))
if (inherits(BTSPAS_Res, "try-error")) setwd(here::here())
end.time <- Sys.time()
model_duration <- end.time - start.time
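(With these settings, n.thin works out to (200000 - 100000)/500 = 200 and n.draws to 500 * 4 = 2000 posterior draws.)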
Thanks... looking at the dataset, I'm assuming that
n1 = column E = marks
m2 = columns G:I
u2 = column D = Maiden
Carl
Sorry about that - I left out a small chunk of code that renames the column headers...
Your crosswalk is correct except that the recapture matrix (m2) is columns F:I (not G:I).
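For what it's worth, a rough sketch of the kind of preprocessing implied by that crosswalk (the renaming code itself wasn't posted, so the object names below are illustrative only):

# Hypothetical sketch of the omitted column renaming/extraction.
dat <- read.csv("table_Run4_Final_BTSPAS_inputs.csv")  # dataset attached above
u2 <- dat[[4]]               # column D: maiden (unmarked) catch
n1 <- dat[[5]]               # column E: marks released
m2 <- as.matrix(dat[, 6:9])  # columns F:I: recapture matrix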
Got it and reproduced the error... this will be helpful in trying to solve it. Thanks.
Carl
Great! For what it's worth, the error hasn't occurred when a very small number of iterations are run. For example:

n.chain <- 4
n.iter <- 2500
n.burnin <- 1250
n.sims <- 1250
(n.thin <- (n.iter - n.burnin)/n.sims)
(n.draws <- n.sims*n.chain)
Fixed... you can download the latest version from GitHub; it's on its way to CRAN and should appear in a few days.
I wasn't able to replicate the run-timing error with the grouped data, but I did add a na.rm to the relevant computation, so I hope that fixed it.
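For reference, na.rm = TRUE simply drops missing values from a summary computation; a minimal example:

sum(c(1, 2, NA))                # returns NA
sum(c(1, 2, NA), na.rm = TRUE)  # returns 3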
Carl
Great. The update(s) you made seem to have fixed the issue. However, I am running into another issue with the data set I previously shared and was curious if you have any insight.

In short, the model is having issues converging on a "believable" estimate when I specify maiden catch (u2) as "bad" for 1-2 periods near the end of the time series (e.g., bad.u2 <- c(10,11)). For context, I want to identify these periods as bad because streamflow got so low that the trap essentially stopped spinning and yearling migrants can then avoid the trap.

When I run the model without specifying periods 10 and/or 11 as bad, the model estimates abundance declining towards zero in the final periods, which is what we would expect. However, when I specify u2 in periods 10 and/or 11 as bad, the model estimates that abundance increases quite dramatically towards the tail end of the time series (and the precision blows up). I've run this model with up to 500K iterations/250 burn-in per chain (4 chains) and still have issues. Looking at the model outputs, there are clearly some issues (e.g., estimates of logitP > 5000 and < -5000 - agh!!).

This specific time series is pretty short (only 13 periods) and the 1-2 bad periods are right after peak outmigration, both of which perhaps are contributing to the estimation problems. To troubleshoot, I toyed around with adding additional "fake periods" of zero catch with the idea of helping the model realize catch should decline to zero, but this doesn't seem to help.

Here's the problematic dataset: table_Run5_Final_Catch_Summary.csv (https://github.com/cschwarz-stat-sfu-ca/BTSPAS/files/7456410/table_Run5_Final_Catch_Summary.csv)

Thanks, Carl!
Kale
Thanks... I'll have a look at this issue next week (8th November).
Carl
Moved to a new issue, so I will close this issue.