Vectorized variants of `to_bits` and `from_bits` #1142
Conversation
ptal! I'm debugging some Fxp issue right now; after that I'll move the above tests and remove the old functions.
```python
@@ -528,6 +581,13 @@ def from_bits(self, bits: Sequence[int]) -> Fxp:
        fxp_bin = "0b" + bits_bin[: -self.num_frac] + "." + bits_bin[-self.num_frac :]
        return Fxp(fxp_bin, dtype=self.fxp_dtype_str)

    def from_bits_array(self, bits_array: NDArray[np.uint8]):
        assert isinstance(self.bitsize, int), "cannot convert to bits for symbolic bitsize"
        # TODO figure out why `np.vectorize` is not working here
```
Open an issue and link? Do you have any theories? As I understand it, `np.vectorize` just does a Python for-loop under the hood.
I think it's something to do with how Fxp interacts with numpy. Fxp has some built-in support for operating over NDArrays, so perhaps mixing up the order causes issues. I didn't investigate further, though.
An `Fxp` object can wrap a numpy array, so to get an ND collection of Fxp values you construct `Fxp(numpy_array_of_int_or_float_values)` instead of `np.array([Fxp(x) for x in array_of_int_or_float_values])`. See https://github.com/francof2a/fxpmath?tab=readme-ov-file#arithmetic for more details.
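For illustration, a minimal sketch of the two construction styles, assuming `fxpmath` is installed (the values and dtype parameters here are made up for the example):

```python
import numpy as np
from fxpmath import Fxp

values = np.array([0.25, 0.5, 0.75])

# One Fxp object wrapping the whole array; arithmetic stays vectorized.
wrapped = Fxp(values, signed=False, n_word=8, n_frac=6)

# The style to avoid: a numpy object-array of scalar Fxp instances.
scalars = np.array([Fxp(v, signed=False, n_word=8, n_frac=6) for v in values])
```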
```python
return np.vectorize(
    lambda x: np.asarray(self.to_bits(x), dtype=np.uint8), signature='()->(n)'
)(x_array)
```
I'm sure you know this, but as far as I understand it, `np.vectorize` uses a Python for-loop under the hood, so you don't get any special performance improvement by using it. You do get the correct API and broadcasting behavior, however. Why is the `signature` argument needed?
Without the `signature`, it tries to pack each output as a single entry in the output array, and fails when we return a vector that needs to be treated as an additional dimension.
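A minimal sketch of the behavior (the `to_three_bits` helper is hypothetical, standing in for `self.to_bits`):

```python
import numpy as np

def to_three_bits(x: int) -> np.ndarray:
    # Hypothetical stand-in for `self.to_bits`: 3-bit big-endian expansion.
    return np.array([(x >> i) & 1 for i in (2, 1, 0)], dtype=np.uint8)

xs = np.array([1, 5, 7])

# signature='()->(n)' tells numpy that each scalar input maps to a length-n
# vector, so the result gains a trailing dimension: shape (3, 3).
print(np.vectorize(to_three_bits, signature='()->(n)')(xs).shape)

# Without the signature, np.vectorize tries to store each returned vector as
# a single scalar entry and typically raises
# "ValueError: setting an array element with a sequence".
```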
LGTM from an API perspective.
```python
def on_classical_vals(self, **kwargs) -> Dict[str, 'ClassicalValT']:
    x_fxp, phase_grad_fxp = kwargs['x'], kwargs['phase_grad']
    x, phase_grad = _extract_raw_int_from_fxp(x_fxp), _extract_raw_int_from_fxp(phase_grad_fxp)
```
The `self.scaled_val(x)` logic was written because `on_classical_vals` expected an `int` instead of an `Fxp`. Now that `on_classical_vals` expects an `Fxp` object directly, we can avoid converting `Fxp -> int -> scaled_int -> scaled_fxp` and directly do `Fxp -> scaled Fxp` here.
Maybe rename `self.scaled_val -> self._scaled_val_int` and call it from `self.apply`, and add a `self.scaled_val` that expects an `Fxp` object for `x` and returns the scaled value as:

```python
def scaled_val(self, x: Fxp) -> Fxp:
    """Computes the scaled value of `x` using fixed point arithmetic."""
    return x.like(_fxp(0, self.phase_bitsize)) >> self.right_shift
```
LGTM % updates to phase gradient bloqs to avoid `fxp -> int -> scaled_int -> scaled_fxp` conversions and directly do fxp arithmetic.
```python
# widen appropriately so that right shifting does not drop necessary bits
x = x.like(
    QFxp(x.n_int + phase_grad.n_frac, phase_grad.n_frac, phase_grad.signed)._fxp_dtype
)
scaled_x = x >> self.right_shift
```
nit: Since `self.x_dtype` is `QFxp(self.x_bitsize, self.x_bitsize, signed=False)`, `x.n_int` is always 0, so we can simply use `x.like(_fxp(self.phase_bitsize))` to do the resizing before shifting.
Suggested change:

```diff
-# widen appropriately so that right shifting does not drop necessary bits
-x = x.like(
-    QFxp(x.n_int + phase_grad.n_frac, phase_grad.n_frac, phase_grad.signed)._fxp_dtype
-)
-scaled_x = x >> self.right_shift
+scaled_x = x.like(_fxp(0, self.phase_bitsize)) >> self.right_shift
```
This would also keep it consistent with the implementation of `scaled_val_int(x)`.
I was planning to remove `_fxp` and only use `QFxp` methods to construct Fxps, so that there's only one way to construct them (easier to maintain). I'll use `x.like(self.phase_dtype._fxp_dtype)` here instead.
The `_fxp` helper here sets useful properties on the `Fxp` config object (like `shifting='trunc'`) which are important to guarantee correctness of the multiply-via-add logic. This is hard to enforce when constructing `Fxp` objects via `QFxp`. Since this is relevant in the context of this arithmetic bloq, please use the `_fxp` helper here instead of `self.phase_dtype._fxp_dtype` (which would be wrong the way it's implemented right now, unless we change the default options on `QFxp._fxp_dtype`).
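For context, a small sketch of why this config option matters, assuming `fxpmath` (the exact default for `shifting` may differ across versions):

```python
from fxpmath import Fxp

# With shifting='trunc', a right shift drops the low bits, matching what a
# fixed-width hardware register would do.
a = Fxp(0.75, signed=False, n_word=4, n_frac=4, shifting='trunc')
print((a >> 1).bin())  # low bit is truncated; word size stays 4

# With the default ('expand'), the shift can instead grow the fractional
# part, so no bits are lost: different semantics for multiply-via-add.
b = Fxp(0.75, signed=False, n_word=4, n_frac=4)
print((b >> 1).bin())
```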
I actually fixed the `_fxp_dtype` to match the `_fxp` implementation, as we'd almost always want that.
```python
def on_classical_vals(self, x: int, phase_grad: int) -> Dict[str, 'ClassicalValT']:
    phase_grad_out = (phase_grad + self.scaled_val(x)) % 2**self.phase_bitsize
    scaled_x = _mul_via_repeated_add(
        x.like(phase_grad), gamma_fxp=self.abs_gamma_fxp, out=self.phase_bitsize
```
I think it's wrong to do the `x.like(phase_grad)` truncation here. `x` can be bigger than `phase_grad`, and in this case we should not discard the extra bits in `x`. The `_mul_via_repeated_add` will add shifted substrings of `x` into `phase_grad`, but if we discard the extra bits here then the shifted substrings would just be 0 and lead to a larger approximation error. Ideally, a test should be failing at this point.
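To make the shift-and-add structure concrete, here is a hedged sketch on plain ints (the real `_mul_via_repeated_add` operates on `Fxp` values; the name and signature below are illustrative only):

```python
def mul_via_repeated_add(x: int, gamma_bits: str, n_int: int, out_bits: int) -> int:
    """Multiply `x` by gamma (a bit string with `n_int` integer bits) by
    accumulating shifted copies of `x`, modulo 2**out_bits."""
    res = 0
    for i, bit in enumerate(gamma_bits):
        if bit == '0':
            continue
        shift = n_int - i - 1  # positive: left shift; negative: right shift
        res += x << shift if shift > 0 else x >> -shift
    return res % (1 << out_bits)

# gamma = 1.5 -> bit string '11' with one integer bit; 6 * 1.5 = 9.
assert mul_via_repeated_add(6, '11', n_int=1, out_bits=8) == 9
```

Truncating `x` to `phase_grad`'s width before this loop would zero out exactly the bits that the shifts are supposed to bring into range.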
Ah, you're right. All the current tests have `x.n_int = 0`. I'll add a test to catch this and fix it.
The issue is not necessarily because of `n_int`; it's also because the fractional bitsize can be larger than `phase_grad`'s, and if gamma > 1 then you'd be doing a left shift and adding into `phase_grad`. For example, you'd add bits 5, 6, 7 into a `phase_grad` register of size 3 if the left shift due to gamma is 4.
```python
AddScaledValIntoPhaseReg.from_bitsize(4, 7, 0.123, 6),
AddScaledValIntoPhaseReg.from_bitsize(2, 8, 1.3868682, 8),
AddScaledValIntoPhaseReg.from_bitsize(4, 9, -9.0949456, 5),
AddScaledValIntoPhaseReg.from_bitsize(6, 4, 2.5, 2),
```
Maybe try adding a test for `AddScaledValIntoPhaseReg.from_bitsize(8, 3, 7.5, 1)` and see if that fails the classical simulation test due to the `x.like()` truncation above?
```python
for i, bit in enumerate(gamma_fxp.bin()):
    if bit == '0':
        continue
    shift = gamma_fxp.n_int - i - 1
    # Left/Right shift by `shift` bits.
    res += x_fxp << shift if shift > 0 else x_fxp >> abs(shift)
    # Issue: Fxp << and >> does not preserve config, so we use `* 2**shift` instead.
```
Do you have a minimal example or link to documentation? Can you file an issue at the `fxpmath` repo? This seems unexpected.
Blocking this because the scope of the PR has grown a lot since the last LGTM. Let's pull out the vectorization changes and merge them first, and then deal with Fxp upgrades and nuances later.
Implemented in #1215
Related issues: #811, #606, #643

- Adds `to_bits_array` and `from_bits_array`
- Moves the `ints_to_bits` and `bits_to_ints` functions from `classical_sim` to `QUInt.to_bits_array` and `QUInt.from_bits_array`

More discussion: #1137 (comment)
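For reference, a hedged usage sketch of the vectorized API described above (method names are taken from the summary; the exact signatures in `qualtran` may differ):

```python
import numpy as np
from qualtran import QUInt

dtype = QUInt(4)
values = np.array([3, 5, 12])

bits = dtype.to_bits_array(values)       # one row of 4 bits per input value
roundtrip = dtype.from_bits_array(bits)  # back to the original integers
assert np.array_equal(roundtrip, values)
```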