Estimate the MIDAS event length and uncertainty #200

Open
benkrikler opened this issue Aug 18, 2014 · 5 comments
@benkrikler
Contributor

As raised in #110, we need a safe estimate of the MIDAS event length. We can get one by looking at the hit time distributions; a rough sketch of one way to do that is below.

Naively I'd expect the event length to be the same for all channels and all runs, but that may not be the case if the various delays and reaction times vary.
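A minimal sketch of what I have in mind (the data layout, the ~110 ms numbers, and all names here are illustrative assumptions, not our analysis code): take the latest hit time seen in each MIDAS event of a high-rate channel like the muSc, and use the upper edge of that distribution as the event-length estimate, with its spread as a rough uncertainty.

```python
# Sketch only: estimate the event length from per-event last-hit times.
import numpy as np

def estimate_event_length(last_hit_times_ms, quantile=0.99):
    """last_hit_times_ms: one entry per MIDAS event, the time of the last
    hit in that event (ms relative to event start), from a high-rate channel.
    Returns (length_estimate, spread) in ms."""
    times = np.asarray(last_hit_times_ms)
    estimate = np.quantile(times, quantile)  # upper edge of the populated region
    spread = times.std()                     # crude uncertainty on that edge
    return estimate, spread

# Toy usage with made-up numbers around the nominal ~110 ms window:
toy = np.random.uniform(105.0, 110.0, size=5000)
length, err = estimate_event_length(toy)
print(f"event length ~ {length:.2f} ms, spread {err:.2f} ms")
```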

@AndrewEdmonds11
Contributor

100ms. Closing...

Joking aside, I don't understand the problem. We know that MIDAS events are roughly 110 ms, and we know our pulses and delays are on much shorter timescales. Even if the channels did have different event lengths, would they differ enough to have a significant impact?

@jrquirk
Contributor

jrquirk commented Aug 18, 2014

We don't know when acquisition stops in a digitizer. When the stop acquisition signal gets sent to the BU CAEN, it never sends back a timestamp saying "This is when the measurement window closed." So for the last muSc event, we don't know if there are no hits after it because we were live and there was no physics, or because our acquisition window closed. Now we'll obviously veto it because the rate was high enough, but for lower rate detectors it's trickier because the last event could be a couple of milliseconds from the end or just a microsecond.

Anyway, we'll conservatively leave a large buffer before where we think the window ends, and even then that buffer is only a small fraction of the measurement window. The effect is small, so maybe it's not important to fix now, but it should be brought up.

Does this make sense?
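For concreteness, a minimal sketch of that buffer veto; the 110 ms nominal window, the 5 ms buffer, and the names are illustrative assumptions:

```python
# Sketch only: drop hits within a safety margin of the assumed window end.
import numpy as np

NOMINAL_WINDOW_MS = 110.0  # rough MIDAS event length
SAFETY_BUFFER_MS = 5.0     # conservative margin before the (unknown) close

def veto_late_hits(hit_times_ms, window_ms=NOMINAL_WINDOW_MS,
                   buffer_ms=SAFETY_BUFFER_MS):
    """Keep only hits well before the estimated end of the acquisition
    window; the fraction removed approximates the livetime cost."""
    hits = np.asarray(hit_times_ms)
    cutoff = window_ms - buffer_ms
    kept = hits[hits < cutoff]
    lost_fraction = 1.0 - kept.size / max(hits.size, 1)
    return kept, lost_fraction
```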

@AndrewEdmonds11
Contributor

Yes, I think so. So the real problem is that we might end up missing physics in the lower-rate detectors? And at some point we will want to estimate the efficiency of this?

@benkrikler
Contributor Author

It's about both estimating the efficiency of the DAQ and of the algorithms that create the TMEs, and maximising that efficiency while staying confident that we're accounting for pile-up correctly.

@litchfld
Contributor

Andy is probably right that it shouldn't be too important, even in the worst case. As I mentioned in #110, we know that about 0.5% of events are close to the end, and a similar fraction are close to the beginning. If we were to cut the events at the end and correct the efficiency with a simple toy MC (a sketch follows), it's probably fine down to a few per mil. In neutrino physics at least, 1% absolute normalisation is pretty good (although admittedly livetime estimation is not normally the biggest factor).
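A minimal toy-MC sketch of that efficiency correction: throw uniform hit times inside a nominal window, apply the end-of-window cut, and take the surviving fraction as the livetime acceptance. The window length, its event-to-event jitter, and the cut value are illustrative numbers, not measurements.

```python
# Sketch only: toy MC for the acceptance of an end-of-window cut.
import numpy as np

rng = np.random.default_rng(0)

def toy_acceptance(n_events=100_000, window_ms=110.0, jitter_ms=0.5,
                   cut_ms=105.0):
    # Per-event window lengths, with some jitter in where acquisition closes.
    windows = rng.normal(window_ms, jitter_ms, n_events)
    # One uniform hit per event inside its own window.
    hits = rng.uniform(0.0, windows)
    # Fraction surviving the "drop everything after cut_ms" selection.
    return np.mean(hits < cut_ms)

print(f"toy acceptance: {toy_acceptance():.4f}")
```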

But we can almost certainly do better by looking at the timing of noise pulses. In noisy channels we can probably get a decent estimate of where the event edges fall on a per-event basis, and use that to evaluate whether the edges are stable with respect to other channels on a per-run (or longer) timescale, assuming stability depends only on the electronics type. If they are stable, we can accumulate triggers over several runs and work out the envelopes for the quieter channels.
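A minimal sketch of that per-event edge estimate and the run-level stability check; the data layout and names are assumptions, not the real analysis code:

```python
# Sketch only: per-event window edges from noise hits, plus run stability.
import numpy as np

def per_event_edges(noise_hit_times_ms_by_event):
    """noise_hit_times_ms_by_event: list of per-event arrays of noise hit
    times (ms). Returns per-event (first_hit, last_hit) edge estimates."""
    firsts = np.array([np.min(t) for t in noise_hit_times_ms_by_event if len(t)])
    lasts = np.array([np.max(t) for t in noise_hit_times_ms_by_event if len(t)])
    return firsts, lasts

def edge_stability(edges_by_run):
    """edges_by_run: {run_number: array of per-event last-hit times}.
    Returns per-run (mean, std) so run-to-run shifts stand out."""
    return {run: (float(np.mean(e)), float(np.std(e)))
            for run, e in edges_by_run.items()}
```

If the edges did drift from run to run, the per-run means above would show it directly, and we'd know the envelope has to be tracked per run rather than accumulated.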
