A collection of interesting datasets and the tools to convert them into ready-to-use formats.
- curated and cleaned datasets: quality over quantity
- all tools and pipelines are streaming: first results are available immediately
- fields and units are clearly labeled and properly typed
- data is output in immediately usable formats (Parquet, Arrow, DuckDB, SQLite); see the query example below
- datasets conform to reasonable standards (UTF-8, RFC3339 dates, decimal lat/long coords, SI units)
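
Because the outputs are standard formats, they can be used in place with standard tools. As a minimal sketch, DuckDB can query a Parquet output directly (the filename `output/movies.parquet` is a hypothetical example, not a guaranteed output name):

```python
import duckdb

# Query a Parquet output file in place; 'output/movies.parquet' is a
# hypothetical example filename.
con = duckdb.connect()
n = con.execute("SELECT COUNT(*) FROM 'output/movies.parquet'").fetchone()[0]
print(f"{n:,} rows")
```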
Requires Python 3.8+.
First, clone the repository and enter it:

    git clone https://github.com/saulpw/readysetdata.git
    cd readysetdata

Then, from within the repository, run one of:

    make setup

or

    pip install .

or

    python3 setup.py install
Output is generated for all available formats and put in the `OUTPUT` directory (`output/` by default).
Size and time estimates are for JSONL output on a small instance.
- 84k movies and 28m ratings from MovieLens
- 9m movies/TV (1m rated), 7m TV episodes, 12m people from IMDb
- 4m Wikipedia infoboxes organized by type, in JSONL format
- Xm article summaries (first paragraph and first sentence)
See results immediately as they accumulate in `output/wp-infoboxes`.
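
Because output is streamed, these JSONL files can be read while they are still growing. A minimal sketch, assuming a hypothetical per-type file `film.jsonl` and a `name` field:

```python
import json

# Each line is one complete JSON object, so partial results are usable
# before the pipeline finishes. The filename and 'name' field are assumptions.
with open("output/wp-infoboxes/film.jsonl") as f:
    for line in f:
        infobox = json.loads(line)
        print(infobox.get("name"))
```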
- generated with Faker
- joinable products, customers, and orders tables for a fake business (see the join sketch below)
- Unicode data, including Japanese and Arabic names and addresses
- includes geo lat/long coords, numeric arrays, and arrays of structs
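
Since the tables share keys, they can be joined directly. A sketch using DuckDB, where the Parquet filenames and the `custid`/`total` column names are assumptions for illustration:

```python
import duckdb

con = duckdb.connect()
# Filenames and the custid/total columns are hypothetical; products,
# customers, and orders are the tables the generator emits.
rows = con.execute("""
    SELECT c.name, SUM(o.total) AS total_spent
      FROM 'output/orders.parquet' o
      JOIN 'output/customers.parquet' c USING (custid)
     GROUP BY c.name
     ORDER BY total_spent DESC
     LIMIT 10
""").fetchall()
print(rows)
```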
All available formats are output by default. Specify a subset of formats by setting the `FORMATS` envvar, or pass `-f <formats>` to individual scripts. Separate multiple formats with `,`; for example, `FORMATS=parquet,duckdb` produces only Parquet and DuckDB output.
- Apache Parquet: `parquet`
- Apache Arrow IPC format: `arrow` and `arrows`
- DuckDB: `duckdb`
- SQLite: `sqlite`
These live in the `scripts/` directory. Some of them require the `readysetdata` module to be installed. For the moment, set `PYTHONPATH=.` and run them from the toplevel directory.
Extract `<filename>` from the .zip file at `<url>`, and stream it to stdout. Only downloads the one file; does not need to download the entire .zip.
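
This works because a .zip's central directory sits at the end of the file, so a seekable reader backed by HTTP range requests can locate one member and fetch only its bytes. A minimal Python sketch of the idea (not the repo's actual implementation; assumes the server supports `Range` requests and reports `Content-Length`):

```python
import io, shutil, sys, urllib.request, zipfile

class HttpFile(io.RawIOBase):
    """Read-only seekable file over HTTP, fetching bytes on demand."""
    def __init__(self, url):
        self.url, self.pos = url, 0
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            self.size = int(resp.headers["Content-Length"])

    def seekable(self):
        return True

    def tell(self):
        return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        base = {io.SEEK_SET: 0, io.SEEK_CUR: self.pos, io.SEEK_END: self.size}[whence]
        self.pos = base + offset
        return self.pos

    def read(self, n=-1):
        if n < 0 or self.pos + n > self.size:
            n = self.size - self.pos
        if n <= 0:
            return b""
        hdr = {"Range": f"bytes={self.pos}-{self.pos + n - 1}"}
        with urllib.request.urlopen(urllib.request.Request(self.url, headers=hdr)) as resp:
            data = resp.read()
        self.pos += len(data)
        return data

url, member = sys.argv[1], sys.argv[2]
with zipfile.ZipFile(HttpFile(url)) as zf, zf.open(member) as f:
    shutil.copyfileobj(f, sys.stdout.buffer)
```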
Download from `<url>` and stream to stdout. The data for e.g. `https://example.com/path/to/file.csv` will be cached at `cache/example.com/path/to/file.csv`.
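
A minimal sketch of this caching scheme, assuming it simply mirrors the URL's host and path under `cache/` (an illustration, not the script's actual code):

```python
import os, shutil, sys, urllib.request
from urllib.parse import urlparse

def download(url, cachedir="cache"):
    # Mirror the URL's host and path under the cache directory.
    p = urlparse(url)
    path = os.path.join(cachedir, p.netloc, p.path.lstrip("/"))
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
            shutil.copyfileobj(resp, out)
    with open(path, "rb") as f:  # stream from cache to stdout
        shutil.copyfileobj(f, sys.stdout.buffer)
```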
Parse XML from stdin, and emit JSONL to stdout for the given `<tag>`.
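
A streaming sketch of this behavior using the standard library's incremental XML parser (illustrative only; here each child element becomes a string field):

```python
import json, sys
import xml.etree.ElementTree as ET

def xml2jsonl(tag, infile=sys.stdin.buffer, outfile=sys.stdout):
    # iterparse yields elements as their closing tags arrive, so output
    # starts immediately and memory stays bounded.
    for _, elem in ET.iterparse(infile):
        if elem.tag == tag:
            outfile.write(json.dumps({c.tag: c.text for c in elem}) + "\n")
            elem.clear()

if __name__ == "__main__":
    xml2jsonl(sys.argv[1])
```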
Parse JSONL from stdin, and append each line verbatim to `<field-value>.jsonl`, where `<field-value>` is that record's value for the given field.
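
A sketch of this demuxing step (illustrative; the actual script may handle file naming and missing fields differently):

```python
import json, sys

def demux_jsonl(field, infile=sys.stdin):
    outputs = {}  # one open file per distinct field value
    for line in infile:
        value = str(json.loads(line).get(field, "unknown"))
        if value not in outputs:
            outputs[value] = open(f"{value}.jsonl", "a")
        outputs[value].write(line)  # append the line verbatim

if __name__ == "__main__":
    demux_jsonl(sys.argv[1])
```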
Created and curated by Saul Pwanson. Licensed for use under Apache 2.0.
Enabled by Apache Arrow and Voltron Data.
Toponymic information is based on the Geographic Names Database, containing official standard names approved by the United States Board on Geographic Names and maintained by the National Geospatial-Intelligence Agency. More information is available at the Resources link at www.nga.mil. The National Geospatial-Intelligence Agency name, initials, and seal are protected by 10 United States Code § 425.