This repository contains the code developed for the experiments conducted in the paper "Explainable Artificial Intelligence for Highlighting and Searching in Patent Text".
More details on the accessibility of Chrome extensions are available here. Further questions about the project can be sent to: [email protected]
- Chromium browser extension
- Flask API for our fine-tuned models
- Fine-tuned models used in the API
- Manually labeled data and different evaluations
- Make sure the API is deployed locally on your system; refer to https://github.com/Renuk9390/expaai_model_api. After installing, change to the project directory and, with the TensorFlow environment active, run app.py as follows:
- To verify that the API is running, open http://localhost:3000/hello as follows:
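The browser check above can also be done programmatically. The sketch below uses only the Python standard library; the port 3000 and the `/hello` route are taken from the step above, and the function assumes the API server is already running:

```python
# Minimal health check for the locally deployed API. Assumes the Flask
# app from expaai_model_api is already running on localhost:3000 and
# exposes the /hello route mentioned above.
import urllib.request


def api_is_up(url="http://localhost:3000/hello", timeout=5):
    """Return True if the URL answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False


if __name__ == "__main__":
    print("API up:", api_is_up())
```

If this prints `API up: False`, make sure app.py is still running and listening on port 3000.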
- Make sure the Chrome extension is installed properly; refer to https://github.com/Renuk9390/expaai_browser_extension_cli. Once the extension is installed and pinned, you can verify it by clicking the icon as follows:
- Click the 'Analyse' button to automatically highlight the technical aspects; a successful run will look as below:
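Conceptually, the highlighting step splits the patent text into sentences, scores each one, and marks the spans the model considers technical aspects. The sketch below illustrates that post-processing only: `score_sentence` is a hypothetical stand-in for the fine-tuned model served by the Flask API, not the project's actual classifier.

```python
# Sketch of the highlight step: sentence-split the text, score each
# sentence, and collect character spans to highlight. score_sentence
# is a toy stand-in for the real model behind the API.
import re


def score_sentence(sentence):
    # Stand-in heuristic: treat sentences mentioning "method" or
    # "apparatus" as technical aspects. The real system uses a
    # fine-tuned BERT model instead.
    return 1.0 if re.search(r"\b(method|apparatus)\b", sentence, re.I) else 0.0


def highlight_spans(text, threshold=0.5):
    """Return (start, end) character offsets of sentences to highlight."""
    spans = []
    for m in re.finditer(r"[^.!?]+[.!?]?", text):
        sentence = m.group().strip()
        if sentence and score_sentence(sentence) >= threshold:
            spans.append((m.start(), m.end()))
    return spans


text = "A method for drying clothes. The weather was nice."
print(highlight_spans(text))
```

The extension applies the returned offsets to the page DOM; here they are simply printed.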
- If there are any issues, refreshing the opened patent page on Google Patents and reloading the extension sometimes helps!
- Runtime of this application depends entirely on the user's local machine: for instance, a Windows machine with a high-end hardware configuration including a GPU runs the application much faster than a CPU-only machine.
- Runtime and responsiveness range from a few seconds to a few minutes, for instance from a machine with an Nvidia RTX 2060 8 GB GPU and 32 GB RAM to an i7 CPU machine with 16 GB RAM, respectively.
- Runtime and responsiveness also vary based on the length of the patent document.
- This is just a preliminary working prototype with limited usability; however, it can be modified and extended for personal customization.
- This Chrome extension is user-friendly when good hardware is available to handle large models such as Google BERT Large at inference time.
- Deploying this application on large-scale machines, such as cloud environments with a 32 GB GPU, would return results within a fraction of a second.