This is a collection of skeleton projects and code generators used to get a new user started with ScienceDb. Example sandbox data model definitions have been provided; you can find them in ./data_model_definitions/.
You should have basic knowledge of the following technology stack:
Note that this project is meant to be used on a *nix system, preferably Linux.
First you need to git clone this project into a local directory on your host system:
git clone https://github.com/ScienceDb/ScienceDbStarterPack.git
The skeleton GraphQL server and the single-page application server projects are managed as git submodules. "Skeleton" means that these projects provide all the code needed to start a server, but contain no code particular to any data model. This "particular" code you will generate with ScienceDb's code generators (see below).
Note that using git submodules is a good solution for this Starter-Pack project. Nonetheless, as the git developers themselves admit, "Using submodules isn’t without hiccups, however." See the chapter on submodules in the official Git book for more details.
Set up the skeleton servers:
git submodule init
git submodule update --init --recursive
If you want to update your skeleton server projects managed as git submodules to the latest remote repository version, run the following command:
git submodule foreach git pull origin master
To correctly manage your code with git you will need to create your own branches of the servers. Furthermore, you might have to add your own remote repository to push your new code to.
The recommended way to achieve this is to fork the two server projects on GitHub:
Then update your submodules (the servers) to track your own forked versions of the two repositories. To update the git URLs, simply edit the file .gitmodules.
For example, change
[submodule "graphql-server"]
path = graphql-server
url = https://github.com/ScienceDb/graphql-server.git
to
[submodule "graphql-server"]
path = graphql-server
url = https://github.com/MyGitHubName/graphql-server.git
Then run the following commands:
git submodule sync
git submodule update --init --recursive --remote
git submodule foreach 'git checkout -b featureA'
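If you later want to push the new branch to your forked repositories, the following should work (assuming the submodules' origin now points to your forks as configured above):
git submodule foreach 'git push -u origin featureA'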
To avoid having to install the ScienceDb code-generators on your host system we provide a dedicated Docker image in which two code generators are installed and ready to be used.
docker build -f Dockerfile.code-generators -t sciencedb-code-generators:latest .
Within the directory ./data_model_definitions you can place your data model definitions in respective JSON files. To learn more about how to define data models with ScienceDb, please see our manual and documentation.
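If you want a quick impression of what such a definition looks like before consulting the manual, the following minimal sketch illustrates the general shape; the model name, attribute names, and types are purely illustrative assumptions, so compare it with the sandbox examples in ./data_model_definitions/:
{
  "model": "Dog",
  "storageType": "sql",
  "attributes": {
    "name": "String",
    "breed": "String",
    "birthday": "Date"
  }
}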
We strongly recommend using the default data models user.json and role.json. This gives you a secure set of servers that is ready for login out of the box.
If you choose to follow the recommendation, you should edit the Sequelize seeder ./seeders/20190225162439-create_roles_n_users.js to create your default admin user and default roles.
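For orientation only, the core of such a seeder typically consists of a pair of bulkInsert calls; the table names, column names, and placeholder values below are assumptions, so adapt them to the structure of the shipped seeder file:
// Illustrative sketch only – adapt it to the shipped seeder file
'use strict';
module.exports = {
  up: (queryInterface, Sequelize) => {
    // Create a default role and a default admin user (column names are assumptions)
    return queryInterface.bulkInsert('roles', [
      { name: 'administrator', createdAt: new Date(), updatedAt: new Date() }
    ], {}).then(() => queryInterface.bulkInsert('users', [
      { email: 'admin@example.com', password: '<your_hashed_password>', createdAt: new Date(), updatedAt: new Date() }
    ], {}));
  },
  down: (queryInterface, Sequelize) => {
    // Remove the seeded rows again
    return queryInterface.bulkDelete('users', null, {})
      .then(() => queryInterface.bulkDelete('roles', null, {}));
  }
};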
It is most important that you then copy the seeder into the graphql-server code dir:
cp -r ./seeders ./graphql-server
Using the dedicated Docker image in which the code generators are installed, you can invoke them on the data model definitions you placed in the data_model_definitions directory.
Whenever you make changes to your data model definitions, or update the code generators and/or the skeleton server projects graphql-server or single-page-app, you should repeat the following code generation.
To generate code files from within a Docker container into which external host folders are mounted, you need to start the respective Docker container as your own user.
First, find out your user and group identifiers by running id in a terminal. Remember your user ID (uid) and your group ID (gid).
docker run --rm -it -v `pwd`:/opt --user <your_uid>:<your_gid> sciencedb-code-generators:latest \
  graphql-server-model-codegen --jsonFiles /opt/data_model_definitions -o /opt/graphql-server
where <your_uid> and <your_gid> are your user ID and your group ID, respectively.
docker run --rm -it -v `pwd`:/opt --user <your_uid>:<your_gid> sciencedb-code-generators:latest \
  single-page-app-codegen --jsonFiles /opt/data_model_definitions -o /opt/single-page-app
where <your_uid> and <your_gid> are your user ID and your group ID, respectively.
Be very careful when running the code generators multiple times on the same data model definitions. Two nasty things can happen:
- You might overwrite manual changes you have made to some of the code that was automatically generated.
- In the case of relational databases, the ScienceDb code generators also create migrations (using Sequelize). As these are named using the current date, you might end up with several migrations creating the same tables, which will lead to errors. Make sure you delete the content of the migrations folder if you want to run the code generators multiple times on the same model definitions:
rm ./graphql-server/migrations/*
Upon starting the servers in any mode (development or production), any pending database migrations and seeders are automatically applied. See the file ./graphql-server/migrateDbAndStartServer.sh, and the two docker-compose files docker-compose-dev.yml (development) and docker-compose.yml (production).
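Should you ever want to apply pending migrations and seeders by hand, the standard sequelize-cli commands can be run from within the graphql-server directory (or the corresponding container); this is merely a convenience sketch, the startup script normally takes care of it:
./node_modules/.bin/sequelize db:migrate
./node_modules/.bin/sequelize db:seed:all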
If you do not run the development environment, and especially later the production environment, on localhost, you need to tell the single-page application which URLs to use for login and to send GraphQL queries to. This is controlled by the following environment variables of sdb_science_db_app_server in the two docker-compose files:
VUE_APP_SERVER_URL=http://localhost:3000/graphql
VUE_APP_LOGIN_URL=http://localhost:3000/login
VUE_APP_MAX_UPLOAD_SIZE=500
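For illustration, the corresponding snippet in the docker-compose files might look roughly like this (the host name is an example; replace it with your own):
sdb_science_db_app_server:
  environment:
    - VUE_APP_SERVER_URL=http://my.sciencedb.org:3000/graphql
    - VUE_APP_LOGIN_URL=http://my.sciencedb.org:3000/login
    - VUE_APP_MAX_UPLOAD_SIZE=500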
For more details see our manual and the single-page-application README.
ScienceDb can check access rights for every single GraphQL query it receives. The currently logged-in user is identified by the JSON Web Token found in the request header; the token is decoded and the user's roles are loaded to check their access rights. This step is carried out by the NPM acl package. The respective access rights can and must be declared in the file ./graphql-server/acl_rules.js.
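For orientation, an entry in acl_rules.js typically follows the allow() format of the acl package; the sketch below uses illustrative role and resource names, so check the generated file for the exact structure it expects:
// Illustrative sketch – adapt to the generated ./graphql-server/acl_rules.js
module.exports.aclRules = [
  {
    roles: 'administrator',
    allows: [
      // Grant all permissions on the user and role resources to administrators
      { resources: ['user', 'role'], permissions: '*' }
    ]
  }
];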
You can run ScienceDb with or without this access control check. The default is to run it without checking access rights.
To switch access rights checking on, you must uncomment the command line switch acl, i.e. change the following line in ./graphql-server/migrateDbAndStartServer.sh
npm start # acl
to
npm start acl
If you decide not to use access control, we strongly recommend restricting access to the GraphiQL interface served by the graphql-server. Switch off the support for GraphiQL in ./graphql-server/server.js:
// Excerpt from server.js
app.use('/graphql', cors(), graphqlHTTP((req) => ({
schema: Schema,
rootValue: resolvers,
pretty: true,
graphiql: false, // SWITCH OFF SUPPORT FOR GraphiQL by setting this to 'false'
context: {
request: req,
acl: acl
},
formatError(error){
return {
message: error.message,
details: error.originalError && error.originalError.errors ? error.originalError.errors : "",
path: error.path
};
}
})));
As long as you are developing your applications, you want the servers to react immediately to any changes you make to your code. Hence, in the development environment the single-page application is served through a dedicated server and not compiled with webpack to be served statically.
docker-compose -f docker-compose-dev.yml up --force-recreate --remove-orphans
We now switch to the production environment. The single-page application will be compiled with webpack and served statically with an nginx server. The graphql-server will no longer use the mounted local code but will serve the code present within the respective Docker image.
Note that you may have to delete the graphql-server and/or nginx images and rebuild them so that they use your latest code!
docker-compose -f docker-compose-dev.yml run --user 1000:1000 sdb_science_db_app_server bash
npm run build
See the environment section of the sdb_nginx image in docker-compose.yml:
- MY_SERVER_URL - URL where your backend server will be running; default value is http://localhost:3000/graphql.
- MY_LOGIN_URL - URL where your backend will check authentication; default value is http://localhost:3000/login.
- MAX_UPLOAD_SIZE - maximum size (in MB) of a file intended to be uploaded; default value is 500, which means that a user cannot upload a file larger than 500 MB.
The above descriptions are taken from the single-page-app README.
# Optionally remove 'old' images:
docker images | grep sciencedbstarterpack_ | awk '{print "docker rmi " $1}' | sh
# Build the images:
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d --force-recreate --remove-orphans
Please have a look at the following examples.
If you want to generate a new Sequelize migration or seeder, you need to do that from within a Docker container created from the respective sdb_science_db_graphql_server Docker image:
docker-compose -f docker-compose-dev.yml run --rm sdb_science_db_graphql_server bash
./node_modules/.bin/sequelize seed:generate --name my_new_seeder
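Analogously, a new, empty migration skeleton can be generated with the corresponding sequelize-cli command:
./node_modules/.bin/sequelize migration:generate --name my_new_migration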
Note how we use docker-compose-dev.yml to have the local directory mounted inside the Docker container, so that newly created files, like migrations or seeder files, are actually persisted on the host file system.
docker-compose -f docker-compose.yml run --rm sdb_postgres psql -h sdb_postgres -U sciencedb -W sciencedb_development
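Once connected, the usual psql meta-commands are available, for example:
List all tables of the database
\dt
Describe a single table, e.g. the users table
\d users
Quit psql
\q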
There is a Minio client CLI (mc), documented in detail. You can use it, for example, to upload local files into a designated bucket on the Minio server.
You need the Docker image from Minio; see the above manual for installation details.
Assuming your local files are on your Desktop, launch the Minio client container, mounting your Desktop to /opt:
docker run -v ~/Desktop:/opt --rm -it --entrypoint=/bin/sh minio/mc
Now register your Minio instance:
mc config host add my_minio http://my.sciencedb.org minioUser minioPw --api S3v4
The above minioUser and minioPw are set as environment variables in your docker-compose files. The URL depends on your server setup.
List all content on your Minio server
mc ls my_minio
List all commands
mc -h
Create a bucket
mc mb my_minio/my_bucket
Copy files to bucket
mc cp /opt/my_file1 my_minio/my_bucket
mc cp /opt/my_file2 my_minio/my_bucket
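Verify the upload by listing the bucket's content
mc ls my_minio/my_bucket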
Have fun!
If you have started your docker-compose setup with -d, or if you just want to delete the created containers, execute:
docker-compose -f docker-compose[-dev].yml down
The above [-dev] has to be included or omitted depending on whether you ran the development or the production environment.
To remove the docker images execute (see above):
docker images | grep sciencedbstarterpack_ | awk '{print "docker rmi " $1}' | sh
To delete the volumes permanently in which your data has been stored execute:
docker volume ls | grep sciencedbstarterpack | awk '{print "docker volume rm " $2}' | sh
Be warned: All your data will be lost!
If you also want to delete the Docker image holding the code generators execute:
docker rmi sciencedb-code-generators:latest
If you want to start from scratch and generate the code for your model definitions again, we recommend removing your local copies of graphql-server and single-page-app and checking these submodules out again using git.