Dataset chapter #328

Merged · 7 commits · Sep 25, 2024
Changes from 4 commits
56 changes: 56 additions & 0 deletions best_practices/datasets.md
@@ -0,0 +1,56 @@
# Working with large datasets

There are several solutions available to you as an RSE, each with their own pros and cons. You should evaluate which one works best for your project and project partners, and pick one. Sometimes you might need to combine two different technologies. Here are some examples from our experience.

You will encounter datasets in various file formats like:
- CSV/Excel
- Parquet
- HDF5/NetCDF
- JSON/JSON-LD

You may also encounter local database files, such as SQLite. It is important to note the trade-offs between these formats. For instance, doing a random seek in a large dataset is difficult with non-binary formats like CSV, Excel, or JSON. In such cases you should consider formats like Parquet or HDF5/NetCDF. Non-binary files can also be imported into local databases like SQLite or DuckDB. Below we compare some options for working with datasets in these formats.
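For example, a large CSV file can be converted to Parquet once, after which column selection and repeated reads become much cheaper. Below is a minimal sketch using pandas; the file and column names are hypothetical, and `pyarrow` (or `fastparquet`) needs to be installed for Parquet support.

```python
import pandas as pd

# Read the original CSV once (hypothetical file name)
df = pd.read_csv("measurements.csv")

# Write it out as Parquet; pandas uses pyarrow (or fastparquet) under the hood
df.to_parquet("measurements.parquet", index=False)

# Later analyses can load only the columns they need, which Parquet makes cheap
subset = pd.read_parquet("measurements.parquet", columns=["station", "temperature"])
```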

It's also good to know about [Apache Arrow](https://arrow.apache.org), which is not itself a file format, but a specification for a memory layout of (binary) data.
There is an ecosystem of libraries for all major languages to handle data in this format.
It is used as the back-end of [many data handling projects](https://arrow.apache.org/powered_by/), including several of the tools mentioned in this chapter.
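As an illustration, an Arrow table read from disk can be handed to different libraries that share the same memory layout. A minimal sketch with `pyarrow`, assuming the hypothetical Parquet file from above:

```python
import pyarrow.parquet as pq

# Read a Parquet file into an in-memory, columnar Arrow table
table = pq.read_table("measurements.parquet")

# Convert to a pandas DataFrame (this may copy, depending on column types)
df = table.to_pandas()

# Other Arrow-backed libraries can consume the same table, e.g. polars:
# import polars as pl
# pl_df = pl.from_arrow(table)
```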

## Local database

When you have a relational dataset, it is recommended that you use a database. Local databases like SQLite and DuckDB are very easy to use because they require no setup. But they come with some limitations; for instance, multiple users cannot write to the database simultaneously.

SQLite is a transactional database, so it is more appropriate if your dataset changes over time (e.g. you are adding new rows). In research, however, we often work with static datasets and are mostly interested in analytical tasks. For such cases, DuckDB is a more appropriate alternative. Between the two,
- DuckDB can also create views (virtual tables) from other sources such as files and other databases, whereas with SQLite you always have to import the data before running any queries (see the sketch after this list).
- DuckDB is multi-threaded. This can be an advantage for large databases, where aggregation queries tend to be faster than in SQLite.
- However, if you have a really large dataset, say hundreds of millions of rows, and want to perform a deeply nested query, it will require a substantial amount of memory, which can make it unfeasible to run on a personal laptop.
- There are options to customize memory handling and push the limits of what is possible on a single machine.
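As a sketch of the first point, compare how the two engines get at a CSV file: DuckDB can query it in place, while SQLite needs an import step. The file and table names below are hypothetical.

```python
import duckdb
import pandas as pd
import sqlite3

# DuckDB: create a view directly on top of the CSV file, no import needed
con = duckdb.connect("analysis.duckdb")
con.execute("CREATE VIEW measurements AS SELECT * FROM 'measurements.csv'")
print(con.execute("SELECT count(*) FROM measurements").fetchone())

# SQLite: the data must be imported into a table before it can be queried
sq = sqlite3.connect("analysis.sqlite")
pd.read_csv("measurements.csv").to_sql("measurements", sq, if_exists="replace", index=False)
print(sq.execute("SELECT count(*) FROM measurements").fetchone())
```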

You need to limit the memory usage to prevent the operating system or shell from preemptively killing the process. A value of about 50% of your system's RAM is a reasonable choice.
```sql
SET memory_limit = '5GB';
```
By default, DuckDB spills over to disk when memory usage grows beyond this limit. You can check which temporary directory it uses by running:
```sql
SELECT current_setting('temp_directory') AS temp_directory;
```
Note that if your query is deeply nested, you should have sufficient disk space for DuckDB to use; e.g. for 4 nested levels of `INNER JOIN` combined with a `GROUP BY`, we observed a spill-over to disk of about 30 times the size of the original dataset. However, we found that the spill-over was not always reliable.
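These settings can also be applied through DuckDB's Python API; the values below are examples that you should adapt to your machine.

```python
import duckdb

con = duckdb.connect("analysis.duckdb")  # hypothetical database file

# Cap memory usage at roughly half the system RAM (example value)
con.execute("SET memory_limit = '5GB'")

# Point the spill-over directory at a disk with enough free space (example path)
con.execute("SET temp_directory = '/scratch/duckdb_tmp'")

# Check the effective settings
print(con.execute(
    "SELECT current_setting('memory_limit'), current_setting('temp_directory')"
).fetchone())
```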

In such borderline cases, it might be possible to address the limitation by splitting the workload into chunks and aggregating the results later (see the sketch after this list), or by considering one of the alternatives mentioned below.
- You can also optimise the queries for DuckDB, but that requires a deeper dive into the documentation and an understanding of how DuckDB query optimisation works.
- Both databases support setting (unique) indexes. Indexes are useful and sometimes necessary:
- For both DuckDB and SQLite, unique indexes help to ensure data integrity.
- For SQLite, indexes are crucial for query performance. However, having more indexes makes writing new records to the database slower, so it is again a trade-off between query and write speed.
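A minimal sketch of the chunked approach with DuckDB's Python API, assuming a hypothetical `events` table with `station` and `temperature` columns, where `station` partitions the data:

```python
import duckdb
import pandas as pd

con = duckdb.connect("analysis.duckdb")
con.execute("SET memory_limit = '5GB'")

# Process one partition at a time instead of running a single huge query
stations = [s for (s,) in con.execute("SELECT DISTINCT station FROM events").fetchall()]

partial_results = []
for station in stations:
    partial = con.execute(
        "SELECT station, count(*) AS n, avg(temperature) AS mean_temp "
        "FROM events WHERE station = ? GROUP BY station",
        [station],
    ).fetch_df()
    partial_results.append(partial)

# Aggregate the per-chunk results at the end
result = pd.concat(partial_results, ignore_index=True)
```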

## Data processing libraries on a single machine
- Pandas
- The standard tool for working with dataframes, widely used in analytics and machine learning workflows. Pay attention to how Pandas uses memory: certain APIs create copies while others do not, so if you are chaining multiple operations, it is preferable to use APIs that avoid copies (see the sketch after this list).
- Vaex
- Vaex is an alternative that focuses on out-of-core processing (working with data larger than memory) and has some lazy evaluation capabilities.
- Polars
- An alternative to Pandas (started in 2020), primarily written in Rust. Compared to Pandas, it is multi-threaded and does lazy evaluation with query optimisation, making it much more performant. However, since it is newer, its documentation is not as complete. It also allows you to write your own custom extensions in Rust.
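As a sketch of the Pandas and Polars points above, compare a chained (copy-avoiding) pandas pipeline with a lazy Polars query; the file and column names are hypothetical.

```python
import pandas as pd
import polars as pl

# pandas: chain operations instead of creating named intermediate copies
df = (
    pd.read_parquet("measurements.parquet")
    .query("temperature > 0")
    .assign(temp_f=lambda d: d["temperature"] * 9 / 5 + 32)
)

# Polars: build a lazy query; nothing is read or computed until collect(),
# which lets the optimiser prune unused columns and push down the filter
result = (
    pl.scan_parquet("measurements.parquet")
    .filter(pl.col("temperature") > 0)
    .group_by("station")
    .agg(pl.col("temperature").mean().alias("mean_temp"))
    .collect()
)
```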

egpbos marked this conversation as resolved.
Show resolved Hide resolved
## Distributed/multi-node data processing libraries
- Dask
- `dask.dataframe` and `dask.array` provide the same APIs as pandas and numpy respectively, making it easy to switch (see the sketch after this list).
- When working with multiple nodes, Dask requires communication across nodes, which makes it network bound.
- Ray
- Apache Spark
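A minimal sketch of the drop-in nature of `dask.dataframe`; the file pattern and column names are hypothetical, and `compute()` is what actually triggers the (possibly distributed) execution.

```python
import dask.dataframe as dd

# Read many Parquet files as one logical dataframe, split into partitions
ddf = dd.read_parquet("measurements/*.parquet")

# The API mirrors pandas; this only builds a task graph, nothing runs yet
mean_temp = ddf[ddf["temperature"] > 0].groupby("station")["temperature"].mean()

# compute() executes the graph, on a single machine or a multi-node cluster
result = mean_temp.compute()
```
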
9 changes: 0 additions & 9 deletions best_practices/language_guides/python.md
@@ -300,15 +300,6 @@ It is good practice to restart the kernel and run the notebook from start to finish
- [altair](https://github.com/ellisonbg/altair) is a _grammar of graphics_ style declarative statistical visualization library. It does not render visualizations itself, but rather outputs Vega-Lite JSON data. This can lead to a simplified workflow.
- [ggplot](https://github.com/yhat/ggpy) is a plotting library imported from R.

### Database Interface

* [psycopg](https://www.psycopg.org/) is a [PostgreSQL](http://www.postgresql.org) adapter
* [cx_Oracle](http://cx-oracle.sourceforge.net) enables access to [Oracle](https://www.oracle.com/database/index.html) databases
* [monetdb.sql](https://www.monetdb.org/Documentation/SQLreference/Programming/Python)
is [monetdb](https://www.monetdb.org) Python client
* [pymongo](https://pymongo.readthedocs.io) and [motor](https://motor.readthedocs.io) allow for work with [MongoDB](http://www.mongodb.com) database
* [py-leveldb](https://code.google.com/p/py-leveldb/) are thread-safe Python bindings for [LevelDb](https://github.com/google/leveldb)

### Parallelisation

CPython (the official and mainstream Python implementation) is not built for parallel processing due to the [global interpreter lock](https://wiki.python.org/moin/GlobalInterpreterLock). Note that the GIL only applies to actual Python code, so compiled modules like e.g. `numpy` do not suffer from it.