Hello there, first off thanks for writing this awesome library, it saved me a lot of time getting my project off the ground!
The only "issue" I've had is with the fixed sample count. I'd like the option of using a more "adaptive" sampling strategy that will e.g. take as many samples as necessary to guarantee with some confidence level (e.g. 98%) that the true offset has been found to within some tolerance (e.g. 10ms). This would be really nice for applications like mine where you need some guarantees on the level of synchronization.
As currently implemented, we can only control the total number of samples taken. But for clients with very low variance in offset measurements, the sample count could be overkill, and for clients with very high variance in offset measurements, the sample count could be too low.
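To make that concrete: the standard sample-size bound n = ceil((z * stddev / MoE)^2) implies that, at 98% confidence (z ≈ 2.33) and a 10 ms margin of error, a client whose offset samples have a 5 ms standard deviation needs only a couple of samples, while a client with a 50 ms standard deviation needs on the order of 135.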
I ultimately ended up implementing this sampling strategy myself for my project; here is the code:
class SamplePolicy {
  shouldSample(samples) {
    throw new Error('Not implemented');
  }
}

/** Take maxSamples (current timesync behavior). */
class MaxSamplesPolicy extends SamplePolicy {
  constructor(maxSamples) {
    super();
    this.maxSamples = maxSamples;
  }

  shouldSample(samples) {
    return samples.length < this.maxSamples;
  }
}

/**
 * Keep sampling until the offset is known to within the margin of error (MoE)
 * with the given confidence level.
 */
class AutoSamplePolicy extends SamplePolicy {
  constructor(config) {
    super();
    this.minSamples = config.minSamples;
    this.maxSamples = config.maxSamples;
    this._z = CI_TO_Z[config.confidenceLevel];
    this.moe = config.moe;
  }

  shouldSample(samples) {
    if (samples.length < this.minSamples) {
      return true;
    } else if (samples.length >= this.maxSamples) {
      return false;
    }
    // Required sample size so that the confidence interval half-width
    // (z * stddev / sqrt(n)) does not exceed the margin of error:
    // n = (z * stddev / moe)^2
    const stddev_ = stddev(samples);
    const n = Math.ceil(Math.pow(this._z * stddev_ / this.moe, 2));
    return samples.length < n;
  }
}

/** Confidence level to Z-score. */
const CI_TO_Z = {
  0.80 : 1.281551565545,
  0.85 : 1.439755747982,
  0.90 : 1.644853626951,
  0.95 : 1.959963984540,
  0.98 : 2.326347874041,
  0.99 : 2.575829303549,
  0.995: 2.807033485986,
  0.998: 3.090232306168,
  0.999: 3.290526731492,
};

/**
 * Sample standard deviation of the collected offset samples.
 * (Helper assumed by the snippet above; its definition was not included
 * in the original code, so this is one possible implementation.)
 */
function stddev(samples) {
  const mean = samples.reduce((sum, x) => sum + x, 0) / samples.length;
  const variance =
    samples.reduce((sum, x) => sum + Math.pow(x - mean, 2), 0) / (samples.length - 1);
  return Math.sqrt(variance);
}
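For context, here is a rough sketch of how such a policy object could drive a sampling loop. This is not timesync's actual code; takeSample is a hypothetical placeholder for whatever performs one request/response round trip and returns a single offset measurement in milliseconds.

// Sketch only: a policy-driven sampling loop. `takeSample` is a hypothetical
// async function that performs one round trip and resolves to an offset in ms;
// it is not part of timesync's API.
async function measureOffset(policy, takeSample) {
  const samples = [];
  while (policy.shouldSample(samples)) {
    samples.push(await takeSample());
  }
  // Report the median of the collected samples as the offset estimate.
  const sorted = samples.slice().sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// Example: stop once the offset is known to within ±10 ms at 98% confidence,
// taking at least 5 and at most 100 samples.
const policy = new AutoSamplePolicy({
  minSamples: 5,
  maxSamples: 100,
  confidenceLevel: 0.98,
  moe: 10,
});

The strategy-object shape keeps the existing fixed-count behavior available as MaxSamplesPolicy, so the adaptive stop rule is purely opt-in.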
Thanks for your input. So basically, instead of the static repeat count of samples, you would like a dynamic sample count that is determined on the fly? That makes sense.
I'm open to a PR implementing a solution for this.