Greenplum

The Greenplum Database (GPDB) is an advanced, fully featured, open source data warehouse. It provides powerful and rapid analytics on petabyte scale data volumes. Uniquely geared toward big data analytics, Greenplum Database is powered by the world’s most advanced cost-based query optimizer delivering high analytical query performance on large data volumes.

The Greenplum project is released under the Apache 2 license. We want to thank all our current community contributors, and we welcome all new potential contributions. For the Greenplum Database community, no contribution is too small; we encourage all types of contributions.

Overview

A Greenplum cluster consists of a master server and multiple segment servers. All user data resides in the segments; the master contains only metadata. The master server and all of the segments share the same schema.

Users always connect to the master server, which divides the query into fragments, sends the fragments to the segments for execution, and collects the results.
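
The scatter-gather model described above can be pictured with a toy sketch. This is an illustration only, not Greenplum code: segments are simulated as plain Python lists, and the function names are invented for the example.

```python
# Toy sketch of the master's dispatch model: the same plan fragment runs
# on every segment's slice of the data, and the master gathers the results.
# Nothing here is Greenplum code; segments are simulated as lists.

def run_fragment(segment_data, predicate):
    """Each segment executes the plan fragment on its own data slice."""
    return [row for row in segment_data if predicate(row)]

def dispatch(segments, predicate):
    """The master sends the fragment to every segment and gathers results."""
    results = []
    for segment_data in segments:  # in a real cluster this runs in parallel
        results.extend(run_fragment(segment_data, predicate))
    return results

# User data is spread across segments; the master holds only metadata.
segments = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
print(dispatch(segments, lambda row: row > 5))  # [7, 8, 6, 9]
```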

Requirements

  • From the GPDB doc set, review Configuring Your Systems and Installing Greenplum and perform appropriate updates to your system for GPDB use.

  • gpMgmt utilities - command line tools for managing the cluster.

    You will need to add the following Python modules (Python 2.6 and 2.7 are supported) to your installation:

    • psutil
    • lockfile (>= 0.9.1)
    • paramiko
    • setuptools
    • epydoc

    If necessary, upgrade modules using "pip install --upgrade". pip should be at least version 7.x.
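
As a quick sanity check before installing, a short script can report which of the listed modules are missing from the current Python. This is a hypothetical helper, not part of gpMgmt; the module list comes from the requirements above.

```python
# Hypothetical helper: report which gpMgmt Python dependencies are missing.
import importlib

REQUIRED = ["psutil", "lockfile", "paramiko", "setuptools", "epydoc"]

def find_missing(modules):
    """Return the subset of modules that cannot be imported."""
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    missing = find_missing(REQUIRED)
    if missing:
        # Suggest an install command for whatever is absent.
        print("pip install --upgrade " + " ".join(missing))
```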

Code layout

The directory layout of the repository follows the same general layout as upstream PostgreSQL. There are changes compared to PostgreSQL throughout the codebase, but a few larger additions worth noting:

  • gpMgmt/

    Contains Greenplum-specific command-line tools for managing the cluster. Scripts like gpinitsystem, gpstart, and gpstop live here. They are mostly written in Python.

  • gpAux/

    Contains Greenplum-specific extensions such as gpfdist and gpmapreduce. Some additional directories are submodules and will be made available over time.

  • doc/

    In PostgreSQL, the user manual lives here. In Greenplum, the user manual is distributed separately (see http://gpdb.docs.pivotal.io), and only the reference pages used to build man pages are here.

  • ci/

    Contains configuration files for the GPDB continuous integration system.

  • src/backend/cdb/

    Contains larger Greenplum-specific backend modules. For example: communication between segments, turning plans into parallelizable plans, mirroring, distributed transaction and snapshot management, etc. cdb stands for Cluster Database; it was a working name used in the early days. That name is no longer used, but the cdb prefix remains.

  • src/backend/gpopt/

    Contains the so-called translator library, used to run the ORCA optimizer with Greenplum. The translator library is written in C++ and contains glue code for translating plans and queries between the DXL format used by ORCA and the PostgreSQL internal representation. It goes unused unless GPDB is built with --enable-orca.

  • src/backend/gp_libpq_fe/

    A slightly modified copy of libpq. The master node uses this to connect to segments and to send fragments of a query plan to segments for execution. It is linked directly into the backend; it is not a shared library like libpq.

  • src/backend/fts/

    FTS (Fault Tolerance Service) is a process that runs on the master node and periodically polls the segments to maintain the status of each segment.
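
The polling idea can be pictured roughly as follows. This is a toy model, not the actual FTS implementation: the segment names, probe function, and status values are all made up for illustration.

```python
# Toy model of the FTS idea: probe each segment once and record its status.
# The probe function and "up"/"down" status values are invented here.

def poll_segments(segments, probe):
    """Return a status map produced by probing each segment."""
    return {seg: ("up" if probe(seg) else "down") for seg in segments}

segments = ["seg0", "seg1", "seg2"]
alive = {"seg0", "seg2"}  # pretend seg1 has failed
status = poll_segments(segments, lambda seg: seg in alive)
print(status)  # {'seg0': 'up', 'seg1': 'down', 'seg2': 'up'}
```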

Build GPDB with Planner

# Clean environment
make distclean

# Configure build environment to install at /usr/local/gpdb
./configure --prefix=/usr/local/gpdb

# Compile and install
make
make install

# Bring in greenplum environment into your running shell
source /usr/local/gpdb/greenplum_path.sh

# Start demo cluster (gpdemo-env.sh is created, which contains
# __PGPORT__ and __MASTER_DATA_DIRECTORY__ values)
cd gpAux/gpdemo
make cluster
source gpdemo-env.sh

The directory and the TCP ports for the demo cluster can be changed on the fly:

DATADIRS=/tmp/gpdb-cluster MASTER_PORT=15432 PORT_BASE=25432 make cluster

The TCP port for the regression test can be changed on the fly:

PGPORT=15432 make installcheck-good

Build GPDB with GPORCA

The only change needed is to run configure with the additional option --enable-orca.

# Configure build environment to install at /usr/local/gpdb
# Enable GPORCA
# Build with perl module (PL/Perl)
# Build with python module (PL/Python)
# Build with XML support
./configure --enable-orca --with-perl --with-python --with-libxml --prefix=/usr/local/gpdb

Once built and started, run psql and check the GPOPT (i.e. GPORCA) version:

select gp_opt_version();

Build GPDB with code generation enabled

To build GPDB with code generation (codegen) enabled, you will need cmake 2.8 or higher and a recent version of llvm and clang (including headers and developer libraries). Codegen utils is currently developed against the LLVM 3.7.X release series. You can find more details about the codegen feature, including details about obtaining the prerequisites and building and testing GPDB with codegen, in the Codegen README.

In short, you can run configure with the additional option --enable-codegen, optionally giving the paths to the llvm and clang libraries on your system.

# Configure build environment to install at /usr/local/gpdb
# Enable CODEGEN
./configure --enable-codegen --prefix=/usr/local/gpdb --with-codegen-prefix="/path/to/llvm;/path/to/clang"

Regression tests

  • The default regression tests
make installcheck-good
  • Optional extra/heavier regression tests
make installcheck-bugbuster
  • The PostgreSQL check target does not work. Setting up a Greenplum cluster is more complicated than a single-node PostgreSQL installation, and no one has done the work to have make check create a cluster. Create a cluster manually or use gpAux/gpdemo/ (example below) and run make installcheck-good against that. Patches are welcome!

  • The PostgreSQL installcheck target does not work either, because some tests are known to fail with Greenplum. The installcheck-good schedule excludes those tests.

Development with Docker

We provide a Docker image with all dependencies required to compile and test GPDB. You can view the dependency Dockerfile at ./docker/base/Dockerfile. The image is hosted on Docker Hub at pivotaldata/gpdb-devel. This Docker image is currently under heavy development.

A quickstart guide to Docker can be found on the Pivotal Engineering Journal.

Known issues:

  • The installcheck-good make target has at least 4 failures, some of which are non-deterministic.

Running regression tests with Docker

  1. Create a Docker host with 8 GB of RAM and 4 cores

    docker-machine create -d virtualbox --virtualbox-cpu-count 4 --virtualbox-disk-size 50000 --virtualbox-memory 8192 gpdb
    eval $(docker-machine env gpdb)
  2. Build your code on gpdb-devel rootfs

    cd [path/to/gpdb]
    docker build .
    # image beefc4f3 built

    The top-level Dockerfile will automatically sync your current working directory into the Docker image. This means that any code you are working on will automatically be built and ready for testing in the Docker context.

  3. Log in to the Docker image

    docker run -it beefc4f3
  4. As the gpadmin user, run installcheck-good

    su gpadmin
    cd /workspace/gpdb
    make installcheck-good

Caveats

  • No space left on device: On macOS, the docker-machine VM can periodically fill up with unused images. You can clear these images with a combination of docker commands.

    # assuming no currently running containers
    # remove all stopped containers from cache
    docker ps -aq | xargs -n 1 docker rm
    # remove all untagged images
    docker images -aq --filter dangling=true | xargs -n 1 docker rmi

    Alternatively, you can use the (beta) native macOS Docker client now available in Docker 1.12.

Glossary

  • QD

    Query Dispatcher. A synonym for the master server.

  • QE

    Query Executor. A synonym for a segment server.

Documentation

For Greenplum Database documentation, please check the online docs: http://gpdb.docs.pivotal.io

There is also a Vagrant-based quickstart guide for developers in vagrant/README.md.
