Update documentation

JulienPeloton committed Dec 5, 2024
1 parent b8de7fb commit d610c87
Showing 2 changed files with 10 additions and 26 deletions.
Binary file modified .github/API_fink.png
36 changes: 10 additions & 26 deletions README.md

![structure](.github/API_fink.png)

This repository contains the source code of the Fink REST API, a Flask application used to access object data stored in tables in Apache HBase. Internally, the application relies on two components: the Java Gateway and the Fink cutout API.

The Java Gateway enables the Flask application to communicate with a JVM using [py4j](https://www.py4j.org/), where the Fink HBase client based on [Lomikel](https://github.com/hrivnac/Lomikel) is available. This client simplifies the interaction with the HBase tables in which the Fink aggregated alert data is stored.
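
For illustration, here is a minimal py4j sketch of what such a gateway connection could look like; the port (py4j's default), the Lomikel class path, the table name, and the method signatures are assumptions for illustration, not the actual Fink configuration:

```python
# Minimal sketch: reaching a running JVM from Flask-side Python via py4j.
# The gateway port (25333, py4j's default), the Lomikel class path, and
# the method names below are illustrative assumptions.
from py4j.java_gateway import JavaGateway, GatewayParameters

gateway = JavaGateway(
    gateway_parameters=GatewayParameters(port=25333, auto_convert=True)
)

# Hypothetical Lomikel-based HBase client living in the JVM
client = gateway.jvm.com.Lomikel.HBaser.HBaseClient("localhost", 2183)
client.connect("ztf", "schema_3.1_5.21.14")  # table name and schema version

gateway.close()
```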

The Fink cutout API is a Flask application used to access cutouts from the Fink data lake. Only the cutout metadata is stored in HBase; this API retrieves the actual cutout data from the raw Parquet files stored on HDFS.
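
As a sketch, a cutout could be fetched over HTTP along these lines; the endpoint path and payload fields are hypothetical, not the documented cutout API contract:

```python
# Hypothetical sketch of querying the Fink cutout API over HTTP.
# The /api/v1/cutouts route and the payload fields are assumptions.
import requests

CUTOUTAPIURL = "http://localhost"  # value taken from config.yml

response = requests.post(
    f"{CUTOUTAPIURL}/api/v1/cutouts",
    json={"objectId": "ZTF21abfmbix", "kind": "Science"},
)
response.raise_for_status()
cutout_data = response.content  # bytes read from raw Parquet files on HDFS
```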

## User documentation

TBD

## Requirements and installation

You will need Python (>=3.9) installed, with the requirements listed in [requirements.txt](requirements.txt). You will also need [fink-cutout-api](https://github.com/astrolabsoftware/fink-cutout-api) fully installed (which implies Hadoop installed on the machine, and Java 11 or 17). For the full installation and deployment, refer to the [procedure](install/README.md).

## Deployment

The input parameters can be found in [config.yml](config.yml):

```yml
# Host and port of the application
HOST: localhost
PORT: 32000

# URL of the fink_cutout_api
CUTOUTAPIURL: http://localhost

# HBase configuration
HBASEIP: localhost
ZOOPORT: 2183

# Table schema (schema_{fink_broker}_{fink_science})
SCHEMAVER: schema_3.1_5.21.14

# Maximum number of rows to
# return in one call
NLIMIT: 10000
```

Make sure that the `SCHEMAVER` is the same one you use for your tables in HBase.
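
For instance, a minimal sketch of reading these parameters with PyYAML (an assumption; the application may load its configuration differently):

```python
# Minimal sketch: loading config.yml with PyYAML (assumed dependency).
import yaml

with open("config.yml") as f:
    config = yaml.safe_load(f)

# SCHEMAVER must match the schema version of the HBase tables
print(config["SCHEMAVER"], config["HOST"], config["PORT"], config["NLIMIT"])
```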

TODO:
- [ ] Find a way to automatically sync schema with tables.

### Debug

After starting [fink-cutout-api](https://github.com/astrolabsoftware/fink-cutout-api), you can simply test the API using:
```bash
python app.py
```
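
Once the application is up, you can probe it from Python; the route and payload below follow the public Fink API conventions and are assumptions for this repository:

```python
# Hypothetical smoke test against a locally running instance, assuming
# the HOST/PORT values from config.yml and an /api/v1/objects route
# (as exposed by the public Fink API).
import requests

r = requests.post(
    "http://localhost:32000/api/v1/objects",
    json={"objectId": "ZTF21abfmbix", "output-format": "json"},
)
print(r.status_code)
```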

### Production

The application is managed by `gunicorn` and `systemd` (see [install](install/README.md)), and you can manage it using:

```bash
# start the application
```
