Option to save and load trained model (some workaround suggested) #2
Thanks for your suggestion. There is a simple Saving and loading
It was a simple issue. When we load JSON from a file, the keys are stored as strings, whereas the _deletes keys are ints (hash values). We need to do something like the following.
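A minimal sketch of that conversion (the file name and the ss instance are illustrative; the int-key fix is what the comment describes):

```python
import json

# Load the previously dumped deletes table.
with open("deletes.json") as f:
    raw_deletes = json.load(f)

# json.load always returns string keys, but _deletes is keyed by
# int hash values, so convert each key back while transferring.
ss._deletes = {int(hs): suggestions for hs, suggestions in raw_deletes.items()}
```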
And the good news is it's working. The key here is that you need to convert hs into an int (int(hs)) while transferring it to ss._deletes. I will be sending a pull request. I also implemented multithreading for creating the hash table in Python: with four threads, loading 500K words took 3 minutes instead of 7 minutes without multithreading on my local machine.
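A rough illustration of the multithreaded build described above (the per-word entry point create_dictionary_entry and the chunking scheme are assumptions, not the actual PR code):

```python
from concurrent.futures import ThreadPoolExecutor

def build_chunk(ss, words):
    # Insert one chunk of words into the shared dictionary. Under
    # CPython's GIL the individual dict/list operations are atomic,
    # but production code may still want an explicit lock.
    for word in words:
        ss.create_dictionary_entry(word)

def build_parallel(ss, words, n_threads=4):
    # Split the word list into roughly equal chunks and build the
    # hash table concurrently.
    size = (len(words) + n_threads - 1) // n_threads
    chunks = [words[i:i + size] for i in range(0, len(words), size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(lambda chunk: build_chunk(ss, chunk), chunks))
```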
Raised a pull request.
Hi, thanks for the contribution, I appreciate it a lot. I will look through it shortly and accept. Do you want to join forces to further improve it?
I would definitely like to join. I am now working on some more improvements and will let you know once they are done. But if you are merging my changes, revert the prime numbers back to the originals; I was not aware of the specialty of those two numbers :-) (the FNV hash algorithm).
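For context, the two constants in question are presumably the 32-bit FNV offset basis and FNV prime; a textbook FNV-1a over a string (not necessarily this repo's exact implementation) looks like:

```python
def fnv1a_32(key: str) -> int:
    # 32-bit FNV-1a: 2166136261 is the offset basis and 16777619 is
    # the FNV prime. Both values are deliberately chosen, which is
    # why substituting arbitrary numbers degrades the hash quality.
    h = 2166136261
    for byte in key.encode("utf-8"):
        h ^= byte
        h = (h * 16777619) & 0xFFFFFFFF
    return h
```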
Hi
I tried to save the trained dictionary and reload it, but it is not working. Do you have any idea how to do it? Here is what I tried. To save the trained dictionary:
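Roughly like this (a reconstruction; the file names and the trained ss instance are illustrative):

```python
import json

# Dump the two internal tables of the trained instance.
with open("deletes.json", "w") as f:
    json.dump(ss._deletes, f)
with open("words.json", "w") as f:
    json.dump(ss._words, f)
```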
Once saved, I tried to reload it like this:
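Along these lines (again a reconstruction, with an assumed constructor):

```python
import json

ss = SymSpell()  # assumed constructor for this port
with open("deletes.json") as f:
    ss._deletes = json.load(f)  # note: the keys come back as str here
with open("words.json") as f:
    ss._words = json.load(f)
```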
It is not working: the data loads, but spell correction then fails.
As a workaround, I added the following two functions to the main file, which are working.
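A sketch of what those two helpers could look like (the function names are hypothetical; the essential part is converting the JSON string keys back to ints, as discussed above):

```python
import json

def save_complete_model_as_json(ss, path):
    # Persist both internal tables in a single JSON file.
    with open(path, "w") as f:
        json.dump({"deletes": ss._deletes, "words": ss._words}, f)

def load_complete_model_from_json(ss, path):
    with open(path) as f:
        data = json.load(f)
    # JSON object keys are always strings; _deletes is keyed by int
    # hash values, so convert each key back on the way in.
    ss._deletes = {int(hs): v for hs, v in data["deletes"].items()}
    ss._words = data["words"]
```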
To use it, you can save and load as in the sketch below.
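Hypothetical usage, reusing the illustrative names from the sketch above:

```python
# After training once, persist the model...
save_complete_model_as_json(ss, "symspell_model.json")

# ...and in a later session, load it instead of retraining.
ss = SymSpell()  # assumed constructor for this port
load_complete_model_from_json(ss, "symspell_model.json")
```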
The above is working, if anyone is interested. And if we can save and load deletes/words etc., it will be faster than training every time.