Site Backend
The iFEED app uses mod_wsgi to interface with Apache. The setup of mod_wsgi was done entirely by IT, so not much information is known about the server itself. The app interfaces with mod_wsgi through the file `application.py`, which contains only two lines: an import of the `app` object from the Dash index script, and the definition of a variable named `application` as `application = app.server`.
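Based on that description, `application.py` presumably looks something like the following sketch (the exact import path of `app` is an assumption; mod_wsgi looks for a module-level object named `application`):

```python
# application.py -- mod_wsgi entry point (reconstructed from the
# description above; the `dashindex` import path is an assumption).
from dashindex import app

# mod_wsgi serves the WSGI callable bound to the name `application`;
# for a Dash app that is the underlying Flask server.
application = app.server
```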
The environment on ifeed01 is set up using pip, because IT are reluctant to have outward-facing sites based on conda environments. As such, there is a requirements.txt file in the iFEED repo to allow the environment to be built.
The iFEED site is nominally a Flask app; in reality it is two apps, one of which is a standard Flask app while the other is a Dash app. The Flask app was created first, using the (at the time) under-construction comet site as a starting point. When data exploration was requested, Dash was added.

The most common way of combining a pre-existing Flask app with a Dash app is to put the Dash app in an iframe. When this was attempted, however, some fairly significant issues arose from the interaction between Flask and Dash that made the iframe solution difficult, so an alternative setup was used in which the Flask app actually sits inside the Dash app.
In the file `app.py`, the Flask app is defined as `fl_app`. Rather than being imported directly by the WSGI system, it is instead placed inside an object of the `CustomDash` class as the server. The `CustomDash` class is defined in the file `Applications/DashApp/mainlayout.py` and contains a function `interpolate_index` which determines the site page template used for all the data exploration pages, including the header, navbar, and footer. In the creation of the `CustomDash` object, the stylesheets, scripts and meta tags are defined, and the base path for all Dash pages is set to `/data_exploration`.

This `CustomDash` object, called `app`, is then imported by the `dashindex.py` file. Once the Dash app has been set up on that `app` variable, it is imported by the WSGI system, where `app.server` is defined as the object accessed by WSGI to serve the site.

All this means that if the URL is valid and matches the pattern `/data_exploration/*`, a Dash page is returned; if the pattern is not matched, the app falls back on the default behaviour, and since `app.server == fl_app`, serving the Flask app is the default behaviour.
While the Flask app is defined in the file `app.py`, the main part of the definition is importing and registering the main blueprint. There is an explanation of blueprints here; for these purposes it can be assumed that blueprints connect URLs to HTML jinja2 templates via Python functions.

Blueprints are useful because they allow you to create multiple pages from a single function. For example, the pages `/countries/MWI/Keyfindings`, `/countries/TZA/Keyfindings`, `/countries/ZAF/Keyfindings`, and `/countries/ZMB/Keyfindings` are all generated by a single function which chooses the correct country name and key findings PDF filename using dictionaries, and then renders the template `KeyFindings.html.j2`, in which these variables are used by jinja2 to customise the page. A more complicated example is that of the scenarios, which have 16 different pages (four countries, four scenarios per country) defined by a single function and a single html.j2 template.
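A hypothetical sketch of the blueprint pattern described above: one view function serves the key-findings page for every country by looking things up in a dictionary. The real site renders `KeyFindings.html.j2`; here an inline template string stands in for it, and the lookup values are illustrative.

```python
from flask import Blueprint, Flask, render_template_string

main = Blueprint('main', __name__)

# Illustrative lookup table (the real code also maps PDF filenames)
COUNTRY_NAMES = {'MWI': 'Malawi', 'TZA': 'Tanzania',
                 'ZAF': 'South Africa', 'ZMB': 'Zambia'}

# Stand-in for the KeyFindings.html.j2 template
PAGE = "<h1>Key findings: {{ country }}</h1>"

@main.route('/countries/<ccode>/Keyfindings')
def keyfindings(ccode):
    # jinja2 substitutes the country-specific values into one template
    return render_template_string(PAGE, country=COUNTRY_NAMES[ccode])

app = Flask(__name__)
app.register_blueprint(main)
```

One route rule and one function thus generate all four country pages; the scenario pages extend the same idea with a second URL variable.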
Dash is a library made by Plotly to make it easy to create interactive data exploration apps with minimal JS, building on the Flask framework. As such, Dash plays well with the Plotly libraries and with Flask. The Dash backend for iFEED is a little more like a traditional Python program than the Flask app backend, but with the addition of callbacks, which are used to respond to user actions. The entry point into the Dash app is `dashindex.py`. From here, different layouts are chosen based on the URL, with those layouts generated by functions in the file `layouts.py`. It should be noted that the Dash app at this point has already had the main layout defined, as described earlier.
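The URL-to-layout dispatch can be sketched as plain Python; in the real `dashindex.py` this logic lives in a Dash callback taking `Input('url', 'pathname')`, and the function and path names below are illustrative only.

```python
# Hypothetical layout functions standing in for those in layouts.py
def scenarios_layout():
    return 'scenarios page layout'

def crops_layout():
    return 'crops page layout'

# Map each Dash page URL to the function that builds its layout
LAYOUTS = {
    '/data_exploration/scenarios': scenarios_layout,
    '/data_exploration/crops': crops_layout,
}

def display_page(pathname):
    # Unmatched paths return nothing here; in the real app they fall
    # through to the Flask app, as described earlier.
    layout_fn = LAYOUTS.get(pathname)
    return layout_fn() if layout_fn else None
```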
The layout functions determine the text, data selectors and plots for the various pages. The `dash_html_components` library (now deprecated in favour of `dash.html`, integrated into the dash module) is used in building layouts to define many of the usual HTML elements such as heading levels H1–H6, divs, breaks, and paragraphs. The `dash_core_components` library (similarly deprecated and brought into the main Dash library as `dash.dcc`) contains many useful components, including dropdowns, sliders, radio buttons and other user interface elements, as well as loading animations and the `Graph` interface. These are all used in the creation of layouts, which are returned from each function as a single object.
The graphs defined in the layouts are not called as functions directly from the layouts; instead, a `dash_core_components` `Graph` object is defined with the invocation `dcc.Graph(id='graph_id')`, and a callback is written which updates the graph, using the graph id as the callback output. An example of this is:
```python
# Import assumed from the standard Dash callback machinery
from dash.dependencies import Input, Output

@app.callback(
    Output('cropgraph', 'figure'),
    [Input('crop', 'value'),
     Input('field', 'value'),
     Input('intermediate-value', 'children')])
def update_graph(crop, field, loclst):
    # 'intermediate-value' supplies the country code and scenario quadrant
    ccode = loclst[0]
    quad = loclst[1]
    # Fetch the subsetted data, then build the figure from it
    df = dataviz.get_cubedata(ccode, quad, crop, field)
    fig = dataviz.cropgraph(ccode, quad, crop, df, field)
    return fig
```
which will update the `Graph` object with id `cropgraph`, as defined in the callback `Output`, using the fields `crop` and `field` as inputs alongside `intermediate-value`, which is defined in the layout invocation in `dashindex.py` to be a list containing the country code and scenario number. The function retrieves the data, plots it, and returns the figure, which is assigned to the `Output` of the callback and updated in the layout. These callbacks are all held in the file `callbacks.py`.
Data is extracted from netCDF4 files using the functions `get_cubedata` and `get_cubedata2`. The names of these functions are remnants of an earlier version of the site that used iris instead of xarray to extract the data; xarray was used in the end purely because the iris library was not available on pip and, as previously mentioned, the use of conda for the final site was not permitted by IT. These functions extract subsetted data into a pandas dataframe, then create three dataframes of min, max, median and quartile data: one covering every year, and the other two covering the time periods 2000 +/- 10 years and 2050 +/- 10 years. These dataframes are returned as a list and passed to the graph plotting functions.
The graph plotting functions use plotly graph objects to create graphs that are not only well integrated with the Dash system but also interactive, allowing the user to zoom, pan, save PNGs, and read data from the screen. The plotly graph objects are as easy to use as matplotlib, and many resources exist online on how to use the plotly library to make effective graphs in Python.