Powered by Evidence.
Steps:
- A web scraper running in Google Cloud Run Jobs scrapes auctions and writes the results to a SQLite database in Google Cloud Storage.
- We download the SQLite database and convert it into a DuckDB database file for efficient analytical querying. This step also does some data cleaning and filtering with DuckDB; see the `download-db` step in the Makefile. For this we need to install the `duckdb` binary. A sketch of the conversion is shown after this list.
- To create the dashboard in Evidence, we install Node and then install Evidence's dependencies by running `npm install`.
- With Node and the project dependencies installed, we run `npm run build`, which in turn runs `evidence build`. This connects to the DuckDB database file in the project directory (downloaded in the previous step), runs the SQL queries defined in our Evidence markdown pages (see the example page sketched below), caches the results, and creates an output directory called `build`.
- We can host this directory as a website using Netlify and use the `nwtgck/[email protected]` GitHub Marketplace action to deploy it from the GitHub Actions workflow (a sketch of the deploy step follows this list).
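
The SQLite-to-DuckDB conversion can be expressed as a short DuckDB script. This is only a minimal sketch: the file, table, and column names and the filter are assumptions, not the project's actual schema; the real cleaning and filtering lives in the Makefile's `download-db` step.

```sql
-- Minimal sketch of the SQLite -> DuckDB conversion, run for example as
-- `duckdb auctions.duckdb < convert.sql`. File, table, and column names
-- below are assumptions, not the project's actual schema.
INSTALL sqlite;
LOAD sqlite;
ATTACH 'auctions.sqlite' AS src (TYPE sqlite);

-- Copy the scraped data into a native DuckDB table, with example
-- cleaning/filtering applied along the way.
CREATE OR REPLACE TABLE auctions AS
SELECT *
FROM src.auctions
WHERE price IS NOT NULL;
```

Doing the cleaning here means the Evidence pages can query a small, already-filtered DuckDB file instead of the raw scrape.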
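
For reference, an Evidence markdown page pairs named SQL blocks with components that render their results. The page, query, table, and column names below are hypothetical examples, not the project's actual pages.

````markdown
---
title: Auctions
---

<!-- A named SQL block; Evidence runs it against the DuckDB source at build
     time. The table and column names here are assumptions. -->

```sql recent_auctions
select *
from auctions
order by end_date desc
limit 100
```

<DataTable data={recent_auctions} />
````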
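
The Netlify deployment from GitHub Actions can look roughly like the step below. The version tag, secret names, and surrounding workflow are assumptions; the repository's own workflow file is authoritative.

```yaml
# Hypothetical deploy step for the GitHub Actions workflow.
# The version tag and secret names are assumptions.
- name: Deploy to Netlify
  uses: nwtgck/actions-netlify@v3
  with:
    publish-dir: ./build            # the Evidence build output
    production-deploy: true
  env:
    NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
    NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
```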
Note: Deploying the website on Google Cloud Run is not possible due to its hard limit of 32 MB per response. When fetching data from the cache, the JSON responses for the auctions can exceed that limit, which breaks navigation.