Create mini datalake database #110
Conversation
Create models and handler to create mini datalake to store org and scan data
update some of the default parameters of the columns in the new database
rerun the backend linter
run lint on package-lock.json
run pretty-format-json pre-commit
update the cidr table's primary key to be a UUID
Connect new variables to AWS
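One of the commits above swaps the cidr table's serial primary key for a UUID. A minimal standalone sketch of a UUID-keyed record, using only the Python standard library (the class and field names are illustrative assumptions, not taken from the PR diff):

```python
import uuid
from dataclasses import dataclass, field

# Hypothetical sketch: a cidr row keyed by a randomly generated UUID
# rather than a serial integer. Field names are assumptions, not the
# actual schema from this PR.
@dataclass
class Cidr:
    network: str
    id: uuid.UUID = field(default_factory=uuid.uuid4)

row = Cidr(network="203.0.113.0/24")
assert isinstance(row.id, uuid.UUID)  # generated per row, no sequence needed
```

Unlike a serial integer, the UUID can be generated client-side before insert, which avoids coordinating a database sequence across multiple data sources.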
See comments
Alphabetize imports in connections and remove blank space
Thanks for taking a look at those few comments.
Looks good to me; just make sure you check everything in the pre-approval checklist.
Remove TODO comments
change serial PKs to UUIDs
Create models and handler to create mini datalake to store org and scan data
🗣 Description
This PR creates the models and code to generate the Mini Data Lake database, which will store data from the various teams and will eventually serve data to the Crossfeed application database.
💭 Motivation and context
To expand Crossfeed's capabilities, the database needs to hold different vulnerability and scan types, as well as data specific to each source. This new schema will allow us to house data from any source as it is expanded upon.
The schema will likely have some adjustments as time goes on and new sources are added.
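As a rough illustration of the direction described above, with per-source scan data tied back to organizations by UUID, here is a hedged sketch using Python dataclasses (all class and field names are assumptions for illustration; the actual models in this PR may differ):

```python
import uuid
from dataclasses import dataclass, field

# Hypothetical sketch of the mini data lake's org/scan relationship.
# Class and field names are illustrative, not the PR's actual models.
@dataclass
class Organization:
    name: str
    id: uuid.UUID = field(default_factory=uuid.uuid4)

@dataclass
class Scan:
    org_id: uuid.UUID   # references Organization.id
    source: str         # which team or tool produced the data
    finding: str
    id: uuid.UUID = field(default_factory=uuid.uuid4)

org = Organization(name="Example Org")
scan = Scan(org_id=org.id, source="example-scanner", finding="open port 443")
assert scan.org_id == org.id
```

Keeping a `source` discriminator on each scan row is one way a single schema can absorb records from new tools without structural changes, which matches the extensibility goal stated above.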
✅ Pre-approval checklist
All future TODOs are captured in issues, which are referenced in code comments.
All relevant documentation has been updated to reflect the changes in this PR.
✅ Pre-merge checklist
✅ Post-merge checklist