Design for supporting different hypervector types #25

Closed
mikeheddes opened this issue May 9, 2022 · 5 comments · Fixed by #81
Labels: enhancement (New feature or request)

Comments

mikeheddes (Member) commented May 9, 2022

We can use the same design PyTorch uses: extend the dtypes, and extend the different tensor types, e.g. FloatTensor. Then, in the HDC operations, we can check the dtype of the hypervector to change the behavior.

import torch
import torchhd

hv = torchhd.functional.random_hv(10, 1000)  # bipolar by default (torch.float)
hv = torchhd.functional.random_hv(10, 1000, dtype=torch.bool)
hv = torchhd.functional.random_hv(10, 1000, dtype=torch.complex64)

torchhd.functional.bind(hv[0], hv[1])  # works for any datatype
mikeheddes added the enhancement label on May 13, 2022
mikeheddes (Member, Author) commented

Useful properties for changing the behavior of the hypervector operations based on dtype (a short sketch follows the list):

  • torch.dtype.is_floating_point returns True if the dtype is a floating-point data type.
  • torch.dtype.is_complex returns True if the dtype is a complex data type.
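
As an illustration, here is a minimal sketch of dtype-based dispatch inside an HDC operation; the bind helper below is hypothetical, not the library's actual code:

import torch

def bind(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    if a.dtype == torch.bool:
        return torch.logical_xor(a, b)  # binary hypervectors: XOR binding
    if a.dtype.is_complex:
        return a * b  # phasor hypervectors: phase angles add under multiplication
    if a.dtype.is_floating_point:
        return a * b  # bipolar/real hypervectors: element-wise product
    raise NotImplementedError(f"bind is not defined for dtype {a.dtype}")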

rishikanthc (Member) commented

It would be good to support the following:

  • HRR and FHRR representations
  • Real valued hypervectors
  • Integer hypervectors with different bit-widths
  • Random hypervectors (with an option to choose the kernel parameters for the initial distribution). For example, I usually start with a normal distribution of (0, 1/sqrt(D)) and then quantize with an arbitrary threshold; see the sketch after this list. This is an important feature and changes the behavior of HD learning.
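
For reference, a minimal sketch of the initialization described above, assuming "(0, 1/sqrt(D))" means mean 0 and standard deviation 1/sqrt(D); the function name and the bipolar output are assumptions:

import torch

def random_hv_quantized(num: int, D: int, threshold: float = 0.0) -> torch.Tensor:
    hv = torch.randn(num, D) / D ** 0.5  # normal with std 1/sqrt(D)
    # quantize at the threshold to bipolar {-1, +1}
    return torch.where(hv > threshold, torch.tensor(1.0), torch.tensor(-1.0))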

mikeheddes (Member, Author) commented

  • I am not very familiar yet with the HRR and FHRR representations so I will need to do some reading up on that to see how we can implement that.
  • Real valued hypervectors should be the default representation, or do you have something else in mind?
  • Most of the functions already support the default torch signed integer types. If we want to support arbitrary bit-widths we should look into working together with QPyTorch (see Investigate interoperability with QPyTorch #53) or PyTorch's built-in quantization methods.
  • For the last point I am not sure how we should provide this as part of the library because I don't see a clear pattern emerging from the literature yet, other than the initialization method we have right now. In addition, changing the initialization is relatively easy: you can write a custom initialization method and still use all the other operations we provide. Can you give some examples that can help guide the design? Is torch.distributions something that you would like to use? (A sketch of that idea follows this list.)
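
To make the torch.distributions question concrete, a hedged sketch of a custom initializer driven by an arbitrary distribution object; the helper name is made up for illustration:

import torch
from torch.distributions import Normal

def random_hv_from(dist: torch.distributions.Distribution, num: int, D: int) -> torch.Tensor:
    # sample a (num, D) batch of hypervectors from any torch distribution
    return dist.sample((num, D))

hv = random_hv_from(Normal(0.0, 1.0), 10, 1000)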

rgayler commented May 28, 2022

> I am not very familiar yet with the HRR and FHRR representations

The representations are just real and complex, respectively. The unique points are that the HRR binding/unbinding operators are circular convolution and correlation, respectively, and that you transform between HRR and FHRR with a Fourier transform (see the sketch below). Tony Plate's PhD thesis (1994) is your friend for this. IMO these are historically important, and people tend to use them for that reason, but I think people should use straight real or complex VSAs for practicality.
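
For concreteness, a sketch of HRR binding/unbinding as circular convolution/correlation via the FFT, in the spirit of Plate (1994); this is illustrative code, not this library's API:

import torch

def hrr_bind(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # circular convolution = element-wise product in the Fourier domain
    return torch.fft.irfft(torch.fft.rfft(a) * torch.fft.rfft(b), n=a.shape[-1])

def hrr_unbind(c: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    # circular correlation = conjugate one operand in the Fourier domain
    return torch.fft.irfft(torch.fft.rfft(c) * torch.fft.rfft(a).conj(), n=c.shape[-1])

# FHRR works directly in the frequency domain, where binding reduces to
# element-wise multiplication of unit-magnitude complex numbers (phasors).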

> Real valued hypervectors should be the default representation

FWIW I have recently come to the view that complex-valued VSA (phasor representation) is the fundamental VSA as most of the other VSA types (HRR, BSC, MAP, ...) can be interpreted as special cases (most varying in terms of quantisation of phase angle and vector magnitude of the individual complex elements).

On the topic of complex-valued VSAs: If you are looking to implement a wide variety of different VSA types you should include unconstrained complex values. However, practical applications of complex-valued VSA typically constrain the element magnitude to be 1, so that only the phase angle varies. Recent work introduces a threshold on the element magnitude: if the bundling result magnitude is above the threshold parameter (per element), the magnitude is normalised to 1; otherwise it is set to 0 (see Eqn 3, http://www.pnas.org/lookup/doi/10.1073/pnas.1902653116).
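
A sketch of that thresholded bundling rule for unit-magnitude phasors, following Eqn 3 of the linked paper; the names and the epsilon guard are my own:

import torch

def bundle_phasors(hvs: torch.Tensor, threshold: float) -> torch.Tensor:
    s = hvs.sum(dim=0)  # complex sum over the bundled hypervectors
    mag = s.abs()
    phase = s / mag.clamp_min(1e-12)  # unit phasor; guard against division by zero
    # keep the phase where the magnitude clears the threshold, else zero out
    return torch.where(mag >= threshold, phase, torch.zeros_like(phase))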

On the topic of bit widths: The same thinking can be applied to the phase and magnitude of complex-valued VSAs. However, I think you need to decide whether the purpose of allowing a choice of bit-width/resolution is to explore the impact of quantisation or to gain efficiency. If the former, it's probably easier to perform the calculations in full complex values and then quantise the results to the desired resolution. If you're actually trying to maximise computational efficiency I suspect you would have to implement multiple special cases.

On the topic of random initialisation of hypervectors: I think the point here is sharpest when considering complex-valued elements. The distribution of phase values controls the shape of the similarity kernel, so is a major component of representation design (see Section 5.2 in http://arxiv.org/abs/2109.03429). Making it easy to specify some arbitrary distribution of values would be good, and maybe build in some common choices of distribution (a sketch follows below).
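
As a sketch of that idea, phasor hypervectors whose phase angles come from an arbitrary torch distribution, which in turn shapes the similarity kernel; the helper and its uniform default are assumptions:

import torch

def random_phasor_hv(num: int, D: int, phase_dist=None) -> torch.Tensor:
    if phase_dist is None:
        phase_dist = torch.distributions.Uniform(-torch.pi, torch.pi)
    angles = phase_dist.sample((num, D))
    return torch.polar(torch.ones(num, D), angles)  # unit-magnitude complex phasors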

mikeheddes (Member, Author) commented

With #81 I believe the most important hypervector types are now provided, so I will close this issue. If there is a need to support types that are currently missing, feel free to open a new issue specific to that representation so that we can have more focused conversations in each issue.
