
Sum over ghosts or use a different formula for the periodic cardinal sine. #1267

Open
jecampagne opened this issue Dec 21, 2023 · 0 comments
Labels
numerics: Involving details of the numerical algorithms for performing some calculation(s)
optimization/performance: Related to the speed and/or memory consumption of some aspect of the code


@jecampagne

Hi, while working on the variation of the Quintic kernel (#1265) along the lines of the Gary & Daniel 2014 paper, I noticed that the code below in Interpolant.cpp

    double Interpolant::xvalWrapped(double x, int N) const
    {
        // sum over all arguments x+jN that are within range.
        // Start by finding x+jN closest to zero
        double xdown = x - N*std::floor(x/N + 0.5);
        xassert(std::abs(xdown) <= N);
        if (xrange() <= N) {
            // This is the usual case.
            return xval(xdown);
        } else {
            double xup = xdown+N;
            double sum = 0.;
            while (std::abs(xdown) <= xrange()) {
                sum += xval(xdown);
                xdown -= N;
            }
            while (xup <= xrange()) {
                sum += xval(xup);
                xup += N;
            }
            return sum;
        }
    }

implements the following equation of the 2014 paper (the sum over the periodic aliases, or "ghosts"):

$$K^{\mathrm{wrap}}(x) = \sum_{j} K(x + jN)$$
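The wrapping logic above can be sketched in plain Python and checked against a brute-force sum over ghosts (a hypothetical helper, not GalSim code; `xrange_` stands in for the C++ `xrange()`):

```python
import numpy as np

def xval_wrapped(x, N, kernel, xrange_):
    # Sketch of Interpolant::xvalWrapped: sum the kernel over all
    # aliases x + j*N that fall within the kernel's support.
    xdown = x - N * np.floor(x / N + 0.5)   # alias of x closest to zero
    if xrange_ <= N:
        return kernel(xdown)                # usual case: only one alias in range
    total = 0.0
    xup = xdown + N
    while abs(xdown) <= xrange_:
        total += kernel(xdown)
        xdown -= N
    while xup <= xrange_:
        total += kernel(xup)
        xup += N
    return total
```

With a toy triangular kernel of support 3 and period N = 2, this agrees with summing `kernel(x + j*N)` over a wide range of j.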

But, discussing with Gary, I mentioned that in fact the true kernel (a.k.a. the periodic cardinal sine)

$$K^{\mathrm{ideal}}_u(\nu) = \frac{1}{N}\sum_{j=-N/2}^{N/2-1} e^{-2\pi i j \nu} = e^{i\pi \nu} \frac{\mathrm{sinc}(N \nu)}{\mathrm{sinc}(\nu)}$$
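The closed form can be verified numerically against the direct DFT sum (a minimal sketch; function names are illustrative, and `np.sinc` is the normalized sinc, $\sin(\pi x)/(\pi x)$):

```python
import numpy as np

def K_ideal_sum(nu, N):
    # Direct sum: (1/N) * sum_{j=-N/2}^{N/2-1} exp(-2*pi*i*j*nu)
    j = np.arange(-N // 2, N // 2)
    return np.mean(np.exp(-2j * np.pi * j * nu))

def K_ideal_closed(nu, N):
    # Closed form: exp(i*pi*nu) * sinc(N*nu) / sinc(nu)
    return np.exp(1j * np.pi * nu) * np.sinc(N * nu) / np.sinc(nu)
```

The two agree to machine precision away from integer nu (where sinc in the denominator vanishes).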

can be directly approximated by any "sinc"-approximating kernel (denoted `ker` below) with the following generic Python code
(of course, this is not optimized code):

import numpy as np

def k_p(x, N, ker, xmax):
    # Ratio ker(N*x)/ker(x) inside the kernel support |x| < xmax/N, zero outside
    x = np.abs(x)
    th = xmax / N
    return np.piecewise(x, [x < th, x >= th],
                        [lambda x: ker(N * x) / ker(x), lambda x: 0.0])

def kWrapped(u, N, ker, xmax):
    # Wrap u into [-0.5, 0.5) and apply the alternating sign from the half-period shift
    return np.sign(0.5 - np.floor(u + 0.5) % 2) * k_p(u - np.floor(u + 0.5), N, ker, xmax)

def kernelWrapped(u, N, ker, xmax):
    # Full complex kernel: phase factor times the wrapped sinc-ratio
    return np.exp(1j * np.pi * u) * kWrapped(u, N, ker, xmax)
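As a self-contained check of this wrapping recipe (helper names here are illustrative): with `ker = np.sinc` the ratio is exact, so for even N the wrapped kernel should reproduce the direct exponential sum at any u, not just in the central period.

```python
import numpy as np

def kernel_wrapped_sinc(u, N):
    # Wrapping recipe specialized to ker = np.sinc (the ideal case):
    # wrap u into [-0.5, 0.5), apply the alternating sign and the phase factor.
    m = np.floor(u + 0.5)
    x = u - m                                  # wrapped argument
    sign = np.where(m % 2 == 0, 1.0, -1.0)     # same as np.sign(0.5 - m % 2)
    return np.exp(1j * np.pi * u) * sign * np.sinc(N * x) / np.sinc(x)

def kernel_direct(u, N):
    # Direct sum: (1/N) * sum_{j=-N/2}^{N/2-1} exp(-2*pi*i*j*u)
    j = np.arange(-N // 2, N // 2)
    return np.mean(np.exp(-2j * np.pi * j * u))
```

For even N the direct sum is 1-periodic, and the alternating sign exactly cancels the sign flip of the sinc ratio across periods.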

and then one performs a single convolution (Eq. 11 of the paper) with the wrapped kernel above:

$$\tilde{F}(u) = K_x(u) \times \sum_{k=-N/2}^{N/2-1} \hat{a}_k\times \tilde{K}_u(u-k/N)$$

One can reproduce the figure of the paper using such a convolution.

Gary says: "it would be interesting to see numerically how accurate this is. I suppose it’s a speedup of a few since you don’t have to sum over the aliased frequencies any more."

@rmjarvis added the numerics and optimization/performance labels on Mar 27, 2024