We need the following metadata for both training and prediction with the ML data:
- ID: a unique number (unique across all suppliers), starting from 0 and counting up. The IDs from the PV data providers are too large, and might not be unique across PV data suppliers.
- the capacity of the system.
Essentially we need a map from (provider, provider_id) to (ocf_id, capacity).
We could do this in a number of different ways:
1. Database + API
Add ocf_id as a column to the database (this could go in the pv_system table). We would need to make sure that the development and production databases have the same values. Then we could add an endpoint to the API that maps (provider, provider_id) --> (ocf_id, capacity). This information is not publicly sensitive, so security is not a big concern here.
We would perhaps want a wrapper function around the API endpoint that we can use easily (this could live in nowcasting_dataset); see the sketch after the pros and cons below.
This solution is good because:
- it is scalable
- the values will be hard to change
This solution is bad because:
- you need access to the internet
- we need to think a bit about the security of the API
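A minimal sketch of what the wrapper in nowcasting_dataset could look like, assuming a hypothetical `/pv_system_metadata` endpoint that takes `provider` and `provider_id` as query parameters (the base URL, endpoint name and response fields are assumptions for illustration, not an existing API):

```python
import requests

# Hypothetical base URL and endpoint name; both are assumptions for illustration.
API_BASE_URL = "https://api.example-ocf.org"


def get_ocf_id_and_capacity(provider: str, provider_id: str) -> tuple[int, float]:
    """Map (provider, provider_id) --> (ocf_id, capacity) via the API."""
    response = requests.get(
        f"{API_BASE_URL}/pv_system_metadata",
        params={"provider": provider, "provider_id": provider_id},
        timeout=10,
    )
    response.raise_for_status()
    data = response.json()
    return data["ocf_id"], data["capacity"]
```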
2. CSV
We could add a CSV to either nowcasting_dataset or the PV consumer, with the following 4 columns: (provider, provider_id, ocf_id, capacity). Then we could very easily write a function to go from (provider, provider_id) --> (ocf_id, capacity); see the sketch after the pros and cons below. I think this all sits quite well in nowcasting_dataset.
This is good because:
- it is simple
- the CSV can be version controlled
This solution is bad because:
- you need access to wherever the CSV is saved
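A sketch of the CSV lookup, assuming a file called `pv_system_metadata.csv` with the four columns above (the file name is an assumption):

```python
import pandas as pd

# Hypothetical file name; the CSV would live in nowcasting_dataset or the PV consumer repo.
METADATA_CSV = "pv_system_metadata.csv"


def get_ocf_id_and_capacity(provider: str, provider_id) -> tuple[int, float]:
    """Map (provider, provider_id) --> (ocf_id, capacity) from the CSV."""
    df = pd.read_csv(METADATA_CSV)
    # Compare provider_id as a string in case the CSV stores it as a number.
    mask = (df["provider"] == provider) & (df["provider_id"].astype(str) == str(provider_id))
    row = df[mask].iloc[0]
    return int(row["ocf_id"]), float(row["capacity"])
```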
3. Cloud CSV
Like 2, but the CSV could live in the cloud. This means we don't have to worry about installing any extra repos; see the sketch after the pros and cons below.
This is good because:
- we don't have to install an extra repo
This solution is bad because:
- you need access to the internet
- you need access to the cloud files
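A sketch of reading the CSV straight from cloud storage with pandas, which can read a `gs://` path when gcsfs is installed; the bucket path is an assumption:

```python
import pandas as pd

# Hypothetical cloud path; reading gs:// URLs requires the gcsfs package.
CLOUD_METADATA_CSV = "gs://example-bucket/pv_system_metadata.csv"


def get_ocf_id_and_capacity(provider: str, provider_id) -> tuple[int, float]:
    """Map (provider, provider_id) --> (ocf_id, capacity) from the cloud CSV."""
    df = pd.read_csv(CLOUD_METADATA_CSV)
    mask = (df["provider"] == provider) & (df["provider_id"].astype(str) == str(provider_id))
    row = df[mask].iloc[0]
    return int(row["ocf_id"]), float(row["capacity"])
```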
4. Hybrid
- add ocf_id to the database
- add an endpoint to the API
- add a wrapper function around the API endpoint that also saves the result locally (somewhere), so that if there's no internet, it just loads the last saved values; a sketch of this is below.
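A sketch of the hybrid wrapper, assuming the same hypothetical `/pv_system_metadata` endpoint as in option 1 and a local cache file (both names are assumptions):

```python
import json
from pathlib import Path

import requests

# Hypothetical endpoint and cache location, for illustration only.
API_URL = "https://api.example-ocf.org/pv_system_metadata"
CACHE_FILE = Path.home() / ".cache" / "pv_system_metadata.json"


def get_ocf_id_and_capacity(provider: str, provider_id: str) -> tuple[int, float]:
    """Map (provider, provider_id) --> (ocf_id, capacity), falling back to a local cache."""
    key = f"{provider}:{provider_id}"
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    try:
        response = requests.get(
            API_URL,
            params={"provider": provider, "provider_id": provider_id},
            timeout=10,
        )
        response.raise_for_status()
        data = response.json()
        # Update the local cache with the latest values from the API.
        cache[key] = [data["ocf_id"], data["capacity"]]
        CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
        CACHE_FILE.write_text(json.dumps(cache))
    except requests.RequestException:
        # No internet (or the API is down): fall back to the last saved values.
        pass
    ocf_id, capacity = cache[key]
    return int(ocf_id), float(capacity)
```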