diff --git a/README.md b/README.md
index 6ea3b4c8..55504068 100644
--- a/README.md
+++ b/README.md
@@ -1,16 +1,15 @@
-# Code-generator
+
Code Generator
-Command line utility to auto-generate the structure files that [this server](https://github.com/ScienceDb/graphql-server)
-will use to perform CRUD operations for each model created.
+Command line utility to generate the structure files that Zendro [graphql-server](https://github.com/Zendro-dev/graphql-server) will use to perform CRUD operations for each model created.
-## Set up:
+## Set up
Clone the repository and run:
```
$ npm install -g
```
If you only want to install it locally run `npm install` instead
-## Usage:
+## Usage
To run the unit-test case:
```
@@ -22,82 +21,103 @@ To run the integration-test case
$ npm run test-integration [-- OPTIONS]
```
Note:
-Intergation-test case creates a docker-compose ambient with three servers:
+The integration-test case creates a docker-compose environment with three servers:
- gql_postgres
- gql_science_db_graphql_server
- gql_ncbi_sim_srv
-
-By default, after the test run, all corresponding Docker images will be completely removed from the docker, this cleanup step can be skiped with __-k__ option.
+```
+gql_postgres
+gql_science_db_graphql_server
+gql_ncbi_sim_srv
+```
-Please run this utility with __-h__ option to see the full documentation in manpage style.
+### Examples of use - Test integration
-## Examples of use - Test integration:
To see full test-integration info:
-```
+```bash
$ npm run test-integration -- -h
```
-To restart containers:
-```
-$ npm run test-integration -- -r
+
+To execute a default test run:
+```bash
+$ npm run test-integration
```
-To generate code and start containers:
+
+To test a specific `graphql-server` branch:
+```bash
+$ npm run test-integration -- -b <branch>
```
+
+To generate code:
+```bash
$ npm run test-integration -- -g
```
-To do the tests only and keep the containers running at end:
-```
-$ npm run test-integration -- -t -k
-```
+
To generate code and do the tests, removing all Docker images at end:
-```
-$ npm run test-integration -- -T
-```
-To do a full clean up (removes containers, images and code):
-```
+```bash
$ npm run test-integration -- -T
```
-
-And to generate the structure files:
+To generate code, do the tests, and keep the containers running at end:
+```bash
+$ npm run test-integration -- -T -k
```
-$ code-generator -f -o
+
+To do the tests only and keep the containers running at end:
+```bash
+$ npm run test-integration -- -t -k
```
+
+To restart containers:
+```bash
+$ npm run test-integration -- -r
```
-INPUT:
- - directory where json models are stored
- - directory where the generated code will be written
+
+To clean generated code, remove containers and volumes:
+```bash
+$ npm run test-integration -- -C
```
-This command will create(if doesn't exist) four folders containing the generated files for each model in the ```input-json-files```:
-* models ----> sequelize model
-* schemas ----> graphQL schema
-* resolvers ----> basic CRUD resolvers
-* migrations ----> create and delete table migration file
+To do a full clean up (removes containers, images and code):
+```bash
+$ npm run test-integration -- -c
+```
+### Examples of use - Code Generator
-## Examples of use - Code generator:
In the same directory of this repository run:
-```
+```bash
+# -f directory where json models are stored
+# -o directory where the generated code will be written
$ code-generator -f ./example_json_files -o /your_path_directory
```
-If you want to complete the example with the [server](https://github.com/ScienceDb/graphql-server)
-make ```/your_path_directory``` the same directory where the server repository is stored.
-NOTE: For displaying the explanation about usage we can run the help flag:
-```
+For help using this command:
+
+```bash
$ code-generator -h
+
+# Code generator for the GraphQL server
+#
+# Options:
+#
+# -f, --jsonFiles Folder containing one json file for each model
+# -o, --outputDirectory Directory where generated code will be written
+# -h, --help output usage information
```
-```
-Code generator for GraphQL server
- Options:
+This command will create four sub-folders within the `output-directory` folder, containing the generated files for each model:
+
- -f, --jsonFiles Folder containing one json file for each model
- -o, --outputDirectory Directory where generated code will be written
- -h, --help output usage information
```
+models/ -> sequelize model
+schemas/ -> graphQL schema
+resolvers/ -> basic CRUD resolvers
+migrations/ -> create and delete table migration file
+```
+
+To use the code generator with the [graphql-server](https://github.com/Zendro-dev/graphql-server),
+use the server repository path as the `output-directory`.
+
## JSON files Spec
@@ -107,15 +127,15 @@ For each model we need to specify the following fields in the json file:
Name | Type | Description
------- | ------- | --------------
-*model* | String | Name of the model (it is recommended uppercase for the initial character).
-*storageType* | String | Type of storage where the model is stored. So far can be one of __sql__ or __Webservice__ or __zendro\_server__
-*url* | String | This field is only mandatory for __zendro\_server__ stored models. Indicates the url where the zendro server storing the model is runnning.
-*attributes* | Object | The key of each entry is the name of the attribute and theres two options for the value . Either can be a string indicating the type of the attribute or an object where the user can indicates the type of the attribute(in the _type_ field) but also can indicates an attribute's description (in the _description_ field). See the [table](#types-spec) below for allowed types. Example of option one: ```{ "attribute1" : "String", "attribute2: "Int" }``` Example of option two: ``` { "attribute1" : {"type" :"String", "description": "Some description"}, "attribute2: "Int ```
+*model* | String | Name of the model (capitalizing the first letter is recommended).
+*storageType* | String | Type of storage where the model is stored. Currently supported types are __sql__, __Webservice__, and __zendro\_server__.
+*url* | String | This field is only mandatory for __zendro\_server__ stored models. Indicates the URL of the Zendro server storing the model.
+*attributes* | Object | The key of each entry is the name of the attribute. There are two options for the value: a string indicating the type of the attribute, or an object with two properties: _type_ (the type of the attribute) and _description_ (attribute description). See the [types-spec](#types-spec) table below for allowed types. Example of option one: ```{ "attribute1" : "String", "attribute2" : "Int" }``` Example of option two: ```{ "attribute1" : {"type" : "String", "description" : "Some description"}, "attribute2" : "Int" }```
*associations* | Object | The key of each entry is the name of the association and the value should be an object describing the associations. See [Associations Spec](associations-spec) section below for the specifications of the associations.
EXAMPLES OF VALID JSON FILES
-```
+```jsonc
//Dog.json
{
"model" : "Dog",
@@ -139,7 +159,7 @@ EXAMPLES OF VALID JSON FILES
```
-```
+```jsonc
//Publisher.json
{
"model" : "Publisher",
@@ -175,7 +195,7 @@ Date |
Time |
DateTime |
-For more info about `Date`, `Time`, and `DateTime` types, please see this [info](https://github.com/excitement-engineer/graphql-iso-date/blob/HEAD/rfc3339.txt).
+For more info about `Date`, `Time`, and `DateTime` types, please see the [graphql-iso-date/rfc3339.txt](https://github.com/excitement-engineer/graphql-iso-date/blob/HEAD/rfc3339.txt).
Example:
* Date: A date string, such as `2007-12-03`.
@@ -211,7 +231,7 @@ name | Type | Description
## NOTE:
Be aware that in the case of this type of association the user is required to describe the cross table used in the field _keysIn_ as a model in its own. For example, if we have a model `User` and a model `Role` and they are associated in a _manytomany_ way, then we also need to describe the `role_to_user` model:
-```
+```jsonc
//User model
{
"model" : "User",
@@ -235,7 +255,7 @@ Be aware that in the case of this type of association the user is required to de
}
```
-```
+```jsonc
//Role model
{
"model" : "Role",
@@ -258,7 +278,7 @@ Be aware that in the case of this type of association the user is required to de
}
```
-```
+```jsonc
//role_to_user model
{
"model" : "role_to_user",
@@ -275,7 +295,7 @@ Be aware that in the case of this type of association the user is required to de
It's important to notice that when a model involves a foreign key for the association, this key should be explicitly written into the attributes field of the given local model.
Example:
-```
+```jsonc
{
"model" : "book",
"storageType" : "sql",
@@ -301,7 +321,7 @@ THE SAME DATA MODELS DESCRIPTION(.json files) WILL BE USEFUL FOR GENERATING BOTH
Fields *`label`* and *`sublabel`* in the specification are only needed by the GUI generator, but backend generator will only read required information, therefore extra fields such as *`label`* and *`sublabel`* will be ignored by the backend generator.
Example:
-```
+```jsonc
//book.json
{
"model" : "Book",
@@ -329,27 +349,3 @@ Example:
## Testing
For relevant files see `package.json` (section scripts), directories `.test` and `docker`. Test framework is `mocha` and `chai`.
-
-### Unit tests
-
-Run all existing unit tests with
-```
-npm run test-unit
-```
-
-### Integration tests
-
-#### Requirements
-
-You need to be on a \*nix operating system, and have bash and Docker installed and running.
-
-Integration tests are carried out using Docker to setup a GraphQL web server and generate code for example data models. The last step of the setup is to create databases and migrate schemas. After that the server is started using `localhost:3000`, which can than be accessed using HTTP. Solely via such HTTP connections the generated API (GraphQL web server) is tested, just as a user might be doing with e.g. `curl`.
-
-All related Docker files are stored in `./docker`; especially `docker-compose-test.yml`.
-
-The test pipeline is defined and executed in `./test/integration-test.bash` for reasons of simplicity. The actual integration tests are written using `mocha` and can be found in `./test/mocha_integration_test.js`, which is invoked by the above bash script.
-
-To ecexute the integration tests run
-```
-npm run test-integration
-```
diff --git a/docker/Dockerfile.graphql_server b/docker/Dockerfile.graphql_server
index 31b2046e..ae0318b1 100644
--- a/docker/Dockerfile.graphql_server
+++ b/docker/Dockerfile.graphql_server
@@ -3,10 +3,9 @@ FROM node:14.3.0-stretch-slim
# Create app directory
WORKDIR /usr/src/app
+ENV JQ_PATH=/usr/bin/jq
+
# Clone the skeleton project and install dependencies
-RUN apt-get update && apt-get install -y git procps autoconf libtool make &&\
- git clone --branch master https://github.com/ScienceDb/graphql-server.git . && \
- chmod u+x ./migrateDbAndStartServer.sh && \
- npm install --save
+RUN apt-get update && apt-get install -y git procps autoconf libtool make jq && rm -rf /var/lib/apt/lists/*
EXPOSE 3000
diff --git a/docker/data_models_storage_config1.json b/docker/data_models_storage_config1.json
new file mode 100644
index 00000000..9711932e
--- /dev/null
+++ b/docker/data_models_storage_config1.json
@@ -0,0 +1,11 @@
+{
+ "default-sql": {
+ "storageType": "sql",
+ "username": "sciencedb",
+ "password": "sciencedb",
+ "database": "sciencedb_development",
+ "host": "gql_postgres1",
+ "dialect": "postgres",
+ "operatorsAliases": false
+ }
+}
\ No newline at end of file
diff --git a/docker/data_models_storage_config2.json b/docker/data_models_storage_config2.json
new file mode 100644
index 00000000..d1c582e4
--- /dev/null
+++ b/docker/data_models_storage_config2.json
@@ -0,0 +1,11 @@
+{
+ "default-sql": {
+ "storageType": "sql",
+ "username": "sciencedb",
+ "password": "sciencedb",
+ "database": "sciencedb_development",
+ "host": "gql_postgres2",
+ "dialect": "postgres",
+ "operatorsAliases": false
+ }
+}
\ No newline at end of file
diff --git a/docker/docker-compose-test.yml b/docker/docker-compose-test.yml
index bf0bde1c..5839664c 100644
--- a/docker/docker-compose-test.yml
+++ b/docker/docker-compose-test.yml
@@ -17,27 +17,16 @@ services:
build:
context: .
dockerfile: Dockerfile.graphql_server
-
volumes:
- - ./integration_test_run/instance1/models/adapters:/usr/src/app/models/adapters
- - ./integration_test_run/instance1/schemas:/usr/src/app/schemas
- - ./integration_test_run/instance1/resolvers:/usr/src/app/resolvers
- - ./integration_test_run/instance1/models/sql:/usr/src/app/models/sql
- - ./integration_test_run/instance1/models/distributed:/usr/src/app/models/distributed
- - ./integration_test_run/instance1/models/zendro-server:/usr/src/app/models/zendro-server
- - ./integration_test_run/instance1/models/generic:/usr/src/app/models/generic
- - ./integration_test_run/instance1/migrations:/usr/src/app/migrations
- - ./integration_test_run/instance1/validations:/usr/src/app/validations
- - ./integration_test_run/instance1/patches:/usr/src/app/patches
- - ./sequelize_config_instance1.json:/usr/src/app/config/config.json
-
+ - ./integration_test_run/servers/instance1:/usr/src/app
+ - ./data_models_storage_config1.json:/usr/src/app/config/data_models_storage_config.json
ports:
- "3000:3000"
-
environment:
PORT: 3000
REQUIRE_SIGN_IN: "false"
LIMIT_RECORDS: 25
+ JQ_PATH: /usr/bin/jq
# Await POSTGRES role and DB creation, migrate schema, then start web server:
networks:
- integrationtest
@@ -47,7 +36,6 @@ services:
- /bin/sh
- -c
- |
- npm install
./migrateDbAndStartServer.sh
gql_ncbi_sim_srv1:
@@ -96,17 +84,8 @@ services:
dockerfile: Dockerfile.graphql_server
volumes:
- - ./integration_test_run/instance2/models/adapters:/usr/src/app/models/adapters
- - ./integration_test_run/instance2/schemas:/usr/src/app/schemas
- - ./integration_test_run/instance2/resolvers:/usr/src/app/resolvers
- - ./integration_test_run/instance2/models/sql:/usr/src/app/models/sql
- - ./integration_test_run/instance2/models/distributed:/usr/src/app/models/distributed
- - ./integration_test_run/instance2/models/zendro-server:/usr/src/app/models/zendro-server
- - ./integration_test_run/instance2/models/generic:/usr/src/app/models/generic
- - ./integration_test_run/instance2/migrations:/usr/src/app/migrations
- - ./integration_test_run/instance2/validations:/usr/src/app/validations
- - ./integration_test_run/instance2/patches:/usr/src/app/patches
- - ./sequelize_config_instance2.json:/usr/src/app/config/config.json
+ - ./integration_test_run/servers/instance2:/usr/src/app
+ - ./data_models_storage_config2.json:/usr/src/app/config/data_models_storage_config.json
ports:
- "3030:3030"
@@ -115,6 +94,7 @@ services:
PORT: 3030
REQUIRE_SIGN_IN: "false"
LIMIT_RECORDS: 25
+ JQ_PATH: /usr/bin/jq
# Await POSTGRES role and DB creation, migrate schema, then start web server:
networks:
- integrationtest
@@ -123,7 +103,6 @@ services:
- /bin/sh
- -c
- |
- npm install
./migrateDbAndStartServer.sh
networks:
diff --git a/docker/integration_test_run/.gitignore b/docker/integration_test_run/.gitignore
index ac0ae0d6..37c310a1 100644
--- a/docker/integration_test_run/.gitignore
+++ b/docker/integration_test_run/.gitignore
@@ -1,6 +1,2 @@
-# Ignore everything in this directory
-*
-# Except this file
-!.gitignore
-!instance1
-!instance2
+# graphql-server instances
+servers/
diff --git a/docker/integration_test_run/instance1/.gitignore b/docker/integration_test_run/instance1/.gitignore
deleted file mode 100644
index e69de29b..00000000
diff --git a/docker/integration_test_run/instance2/.gitignore b/docker/integration_test_run/instance2/.gitignore
deleted file mode 100644
index e69de29b..00000000
diff --git a/docker/integration_test_run/scripts/checkout-branch.sh b/docker/integration_test_run/scripts/checkout-branch.sh
new file mode 100644
index 00000000..20e956d8
--- /dev/null
+++ b/docker/integration_test_run/scripts/checkout-branch.sh
@@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+
+# This script assumes the initial working directory is "integration_test_run"
+
+TARGET_BRANCH=$1
+
+
+if [[ -d "servers" ]]; then
+
+  for INSTANCE_PATH in servers/*
+  do
+
+    if [[ -d "$INSTANCE_PATH" ]]; then
+
+ # Change into instance directory
+ cd $INSTANCE_PATH
+
+      # Forcefully check out the target branch
+      # WARNING: discards all local changes when switching branches!
+      git checkout --force "$TARGET_BRANCH"
+
+      # Synchronize with the remote branch
+      git fetch --all
+      git reset --hard "origin/$TARGET_BRANCH"
+
+ # Return to previous working directory
+ cd - 1>/dev/null
+
+ fi
+
+ done
+
+fi
diff --git a/docker/integration_test_run/scripts/clean-workspace.sh b/docker/integration_test_run/scripts/clean-workspace.sh
new file mode 100644
index 00000000..43d1e282
--- /dev/null
+++ b/docker/integration_test_run/scripts/clean-workspace.sh
@@ -0,0 +1,41 @@
+#!/usr/bin/env bash
+
+PATHS=(
+ migrations
+ models/adapters
+ models/distributed
+ models/generic
+ models/sql
+ models/zendro-server
+ patches
+ resolvers
+ schemas
+ validations
+ acl_rules.js
+)
+
+for instance in servers/*
+do
+
+ # Change to graphql-server directory
+  if [ -d "$instance" ]
+  then
+
+    cd "$instance"
+
+    # Forcefully delete all generated files
+    for d in "${PATHS[@]}"
+    do
+      echo "$instance/${d}"
+      rm -rf "${d}"
+    done
+
+ # Checkout deleted static files
+ git checkout $(git diff --no-renames --name-only --diff-filter=D)
+
+ # Return to previous working directory
+ cd - 1>/dev/null
+
+ fi
+
+done
diff --git a/docker/sequelize_config_instance1.json b/docker/sequelize_config_instance1.json
deleted file mode 100644
index 9d3eb492..00000000
--- a/docker/sequelize_config_instance1.json
+++ /dev/null
@@ -1,26 +0,0 @@
-{
- "development": {
- "username": "sciencedb",
- "password": "sciencedb",
- "database": "sciencedb_development",
- "host": "gql_postgres1",
- "dialect": "postgres",
- "operatorsAliases": false
- },
- "test": {
- "username": "sciencedb",
- "password": "sciencedb",
- "database": "sciencedb_test",
- "host": "gql_postgres1",
- "dialect": "postgres",
- "operatorsAliases": false
- },
- "production": {
- "username": "sciencedb",
- "password": "sciencedb",
- "database": "sciencedb_production",
- "host": "gql_postgres1",
- "dialect": "postgres",
- "operatorsAliases": false
- }
-}
diff --git a/docker/sequelize_config_instance2.json b/docker/sequelize_config_instance2.json
deleted file mode 100644
index f3ebb6c4..00000000
--- a/docker/sequelize_config_instance2.json
+++ /dev/null
@@ -1,26 +0,0 @@
-{
- "development": {
- "username": "sciencedb",
- "password": "sciencedb",
- "database": "sciencedb_development",
- "host": "gql_postgres2",
- "dialect": "postgres",
- "operatorsAliases": false
- },
- "test": {
- "username": "sciencedb",
- "password": "sciencedb",
- "database": "sciencedb_test",
- "host": "gql_postgres2",
- "dialect": "postgres",
- "operatorsAliases": false
- },
- "production": {
- "username": "sciencedb",
- "password": "sciencedb",
- "database": "sciencedb_production",
- "host": "gql_postgres2",
- "dialect": "postgres",
- "operatorsAliases": false
- }
-}
diff --git a/funks.js b/funks.js
index 6d88626e..981937a7 100644
--- a/funks.js
+++ b/funks.js
@@ -2,10 +2,12 @@ let fs = require('fs');
const ejs = require('ejs');
const inflection = require('inflection');
const jsb = require('js-beautify').js_beautify;
+const { join, parse } = require('path');
const {promisify} = require('util');
const ejsRenderFile = promisify( ejs.renderFile );
const stringify_obj = require('stringify-object');
const colors = require('colors/safe');
+const { getModelDatabase } = require('./lib/generators-aux');
/**
@@ -495,28 +497,29 @@ convertToType = function(many, model_name){
* @return {object} Object with all extra info that will be needed to create files with templates.
*/
module.exports.getOptions = function(dataModel){
-
+
let opts = {
- name : dataModel.model,
- nameCp: capitalizeString(dataModel.model),
- storageType : getStorageType(dataModel),
- table: inflection.pluralize(uncapitalizeString(dataModel.model)),
- nameLc: uncapitalizeString(dataModel.model),
- namePl: inflection.pluralize(uncapitalizeString(dataModel.model)),
- namePlCp: inflection.pluralize(capitalizeString(dataModel.model)),
- attributes: getOnlyTypeAttributes(dataModel.attributes),
- jsonSchemaProperties: attributesToJsonSchemaProperties(getOnlyTypeAttributes(dataModel.attributes)),
- associationsArguments: module.exports.parseAssociations(dataModel),
- arrayAttributeString: attributesArrayString( getOnlyTypeAttributes(dataModel.attributes) ),
- indices: dataModel.indices,
- definitionObj : dataModel,
- attributesDescription: getOnlyDescriptionAttributes(dataModel.attributes),
- url: dataModel.url || "",
- externalIds: dataModel.externalIds || [],
- regex: dataModel.regex || "",
- adapterName: dataModel.adapterName || "",
- registry: dataModel.registry || [],
- idAttribute: getIdAttribute(dataModel)
+ name : dataModel.model,
+ nameCp: capitalizeString(dataModel.model),
+ storageType : getStorageType(dataModel),
+ database: getModelDatabase(dataModel),
+ table: inflection.pluralize(uncapitalizeString(dataModel.model)),
+ nameLc: uncapitalizeString(dataModel.model),
+ namePl: inflection.pluralize(uncapitalizeString(dataModel.model)),
+ namePlCp: inflection.pluralize(capitalizeString(dataModel.model)),
+ attributes: getOnlyTypeAttributes(dataModel.attributes),
+ jsonSchemaProperties: attributesToJsonSchemaProperties(getOnlyTypeAttributes(dataModel.attributes)),
+ associationsArguments: module.exports.parseAssociations(dataModel),
+ arrayAttributeString: attributesArrayString( getOnlyTypeAttributes(dataModel.attributes) ),
+ indices: dataModel.indices,
+ definitionObj : dataModel,
+ attributesDescription: getOnlyDescriptionAttributes(dataModel.attributes),
+ url: dataModel.url || "",
+ externalIds: dataModel.externalIds || [],
+ regex: dataModel.regex || "",
+ adapterName: dataModel.adapterName || "",
+ registry: dataModel.registry || [],
+ idAttribute: getIdAttribute(dataModel)
};
opts['editableAttributesStr'] = attributesToString(getEditableAttributes(opts.attributes, getEditableAssociations(opts.associationsArguments), getIdAttribute(dataModel)));
@@ -709,7 +712,7 @@ generateAssociationsMigrations = function( opts, dir_write){
// assoc["source"] = opts.table;
// assoc["cross"] = false;
// let generatedMigration = await module.exports.generateJs('create-association-migration',assoc);
- // let name_migration = createNameMigration(dir_write, 'z-column-'+assoc.targetKey+'-to-'+opts.table);
+ // let name_migration = createNameMigration(dir_write, '', 'z-column-'+assoc.targetKey+'-to-'+opts.table);
// fs.writeFile( name_migration, generatedMigration, function(err){
// if (err)
// {
@@ -725,7 +728,7 @@ generateAssociationsMigrations = function( opts, dir_write){
if(assoc.targetStorageType === 'sql'){
assoc["source"] = opts.table;
let generatedMigration = await module.exports.generateJs('create-through-migration',assoc);
- let name_migration = createNameMigration(dir_write, 'z-through-'+assoc.keysIn);
+ let name_migration = createNameMigration(dir_write, '', 'z-through-'+assoc.keysIn);
fs.writeFile( name_migration, generatedMigration, function(err){
if (err)
{
@@ -746,11 +749,11 @@ generateAssociationsMigrations = function( opts, dir_write){
* @param {string} model_name Name of the model.
* @return {string} Path where generated file will be written.
*/
-createNameMigration = function(dir_write, model_name){
+createNameMigration = function(rootDir, migrationsDir, model_name){
let date = new Date();
- date = date.toISOString().slice(0,19).replace(/[^0-9]/g, "");
+ date = date.toISOString().slice(0,19).replace(/[^0-9]/g, "");
//return dir_write + '/migrations/' + date + '-create-'+model_name +'.js';
- return dir_write + '/migrations/' + date + '-'+model_name +'.js';
+ return join(rootDir, migrationsDir, `${date}-${model_name}.js`);
};
/**
@@ -760,9 +763,15 @@ createNameMigration = function(dir_write, model_name){
* @param {object} opts Object with options needed for the template that will generate the section
* @param {string} dir_write Path (including name of the file) where the generated section will be written as a file.
*/
-generateSection = async function(section, opts, dir_write ){
+generateSection = async function(section, opts, filePath ){
let generatedSection = await module.exports.generateJs('create-'+section ,opts);
- fs.writeFileSync(dir_write, generatedSection);
+
+ const parsedPath = parse(filePath);
+ if (!fs.existsSync(parsedPath.dir)) {
+    fs.mkdirSync(parsedPath.dir, { recursive: true });
+ }
+
+ fs.writeFileSync(filePath, generatedSection);
};
/**
@@ -803,7 +812,7 @@ generateSections = async function(sections, opts, dir_write) {
break;
//migrations
case 'migrations':
- file_name = createNameMigration(dir_write, section.fileName);
+ file_name = createNameMigration(dir_write, section.dir, section.fileName);
break;
//validations & patches
case 'validations':
@@ -846,7 +855,8 @@ generateSections = async function(sections, opts, dir_write) {
writeCommons = async function(dir_write, models, adapters){
writeSchemaCommons(dir_write);
console.log(path.join(dir_write,'models'))
- writeIndexAdapters(path.join(dir_write,'models'));
+ //deprecated due to static adapters index, to be removed
+ // writeIndexAdapters(path.join(dir_write,'models'));
await writeIndexResolvers(dir_write, models);
await writeAcls(dir_write, models, adapters);
//deprecated due to static global index, to be removed
@@ -1042,15 +1052,16 @@ module.exports.generateCode = async function(json_dir, dir_write, options){
//set sections
let sections = []; //schemas, resolvers, models, migrations, validations, patches
+ const migrationsDir = join('migrations', opts.database);
switch(opts.storageType) {
case 'sql':
sections = [
- {dir: 'schemas', template: 'schemas', fileName: opts.nameLc},
- {dir: 'resolvers', template: 'resolvers', fileName: opts.nameLc},
- {dir: 'models/sql', template: 'models', fileName: opts.nameLc},
- {dir: 'migrations', template: 'migrations', fileName: opts.nameLc},
+ {dir: 'schemas', template: 'schemas', fileName: opts.nameLc},
+ {dir: 'resolvers', template: 'resolvers', fileName: opts.nameLc},
+ {dir: 'models/sql', template: 'models', fileName: opts.nameLc},
+ {dir: migrationsDir, template: 'migrations', fileName: opts.nameLc},
{dir: 'validations', template: 'validations', fileName: opts.nameLc},
- {dir: 'patches', template: 'patches', fileName:opts.nameLc},
+ {dir: 'patches', template: 'patches', fileName: opts.nameLc},
]
break;
@@ -1099,9 +1110,9 @@ module.exports.generateCode = async function(json_dir, dir_write, options){
case 'sql-adapter':
sections = [
- {dir: 'models/adapters', template: 'sql-adapter', fileName: opts.adapterName},
- {dir: 'migrations', template: 'migrations', fileName: opts.nameLc},
- {dir: 'patches', template: 'patches', fileName:opts.adapterName},
+ {dir: 'models/adapters', template: 'sql-adapter', fileName: opts.adapterName},
+ {dir: migrationsDir, template: 'migrations', fileName: opts.nameLc },
+ {dir: 'patches', template: 'patches', fileName: opts.adapterName},
]
break;
diff --git a/lib/generators-aux.js b/lib/generators-aux.js
new file mode 100644
index 00000000..e5d76438
--- /dev/null
+++ b/lib/generators-aux.js
@@ -0,0 +1,21 @@
+
+/**
+ * Get the default database key of a given model.
+ * @param {string} dataModel data model definition object
+ */
+exports.getModelDatabase = function (dataModel) {
+
+ // Sanity check: storageType is a required property, but database
+ // should be set only for supported storage types.
+ const validStorage = {
+ 'sql': 'default-sql',
+ 'sql-adapter': 'default-sql',
+ }
+
+ const storageType = dataModel.storageType.toLowerCase();
+
+ const defaultDb = validStorage[storageType];
+
+ return dataModel.database || defaultDb || '';
+
+}
\ No newline at end of file
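The new `getModelDatabase` helper resolves which key of `data_models_storage_config.json` a model uses. A usage sketch (the function body mirrors the diff; the model objects and the `dogs-db` key are hypothetical):

```javascript
// Mirrors lib/generators-aux.js above: resolve a model's database key.
function getModelDatabase(dataModel) {
  // database should be set only for supported storage types
  const validStorage = {
    'sql': 'default-sql',
    'sql-adapter': 'default-sql',
  };
  const storageType = dataModel.storageType.toLowerCase();
  return dataModel.database || validStorage[storageType] || '';
}

// An sql model without an explicit database falls back to "default-sql",
// the key used in docker/data_models_storage_config1.json:
console.log(getModelDatabase({ model: 'Dog', storageType: 'sql' })); // default-sql
// An explicit database property takes precedence:
console.log(getModelDatabase({ model: 'Dog', storageType: 'sql', database: 'dogs-db' })); // dogs-db
// Unsupported storage types resolve to an empty string:
console.log(getModelDatabase({ model: 'Dog', storageType: 'Webservice' })); // (empty line)
```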
diff --git a/package.json b/package.json
index 733155e4..b9d562d9 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "graphql-server-model-codegen",
- "version": "0.2.0",
+ "version": "0.3.0",
"description": "Command line utility which auto-generates code to setup and run a Science-DB GraphQL web-server. See sciencedb.github.io.",
"repository": {
"type": "git",
diff --git a/test/mocha_integration_test.js b/test/mocha_integration_test.js
index 4a210da2..6cab55ed 100644
--- a/test/mocha_integration_test.js
+++ b/test/mocha_integration_test.js
@@ -777,7 +777,7 @@ describe(
expect(trCounts).to.deep.equal([]);
/**
- * Check:
+ * Check:
* max_limit records validation
*/
//Add another 30 individuals:
@@ -874,10 +874,10 @@ describe(
res = itHelpers.request_graph_ql_post(individualAdding);
expect(res.statusCode).to.equal(200);
ids.push(JSON.parse(res.body.toString('utf8')).data.addIndividual.id);
-
+
//test 1: count = 30
res = itHelpers.request_graph_ql_post(`{ countIndividuals(search:{field:name operator:eq value:{value:"${individualName}"}}) }`);
- resBody = JSON.parse(res.body.toString('utf8'));
+ resBody = JSON.parse(res.body.toString('utf8'));
expect(res.statusCode).to.equal(200);
expect(resBody.data.countIndividuals).equal(30);
@@ -930,16 +930,16 @@ describe(
expect(resBody.errors[0].path).to.deep.equal(err3_path);
//test 8: pagination (cursor-based): with first: 24
- res = itHelpers.request_graph_ql_post(`{individualsConnection(pagination:{first:24}, search:{field:name operator:eq value:{value:"${individualName}"}})
+ res = itHelpers.request_graph_ql_post(`{individualsConnection(pagination:{first:24}, search:{field:name operator:eq value:{value:"${individualName}"}})
{
pageInfo {
startCursor
endCursor
hasPreviousPage
hasNextPage
- }
+ }
edges{
- cursor
+ cursor
node{id}
}}}`);
expect(res.statusCode).to.equal(200);
@@ -952,14 +952,14 @@ describe(
res = itHelpers.request_graph_ql_post(`{individualsConnection(pagination:{first:5, after:"${cursor20}"}, search:{field:name operator:eq value:{value:"${individualName}"}}) {edges{cursor node{id}}}}`);
expect(res.statusCode).to.equal(200);
resBody = JSON.parse(res.body.toString('utf8'));
-
+
expect(ids).to.include.members(resBody.data.individualsConnection.edges.map((item) => item.node.id));
expect(resBody.data.individualsConnection.edges.map((item) => item.node.id)).to.have.deep.members(ids.slice(-10, 25));
//test 10: pagination (cursor-based): with : last:5
res = itHelpers.request_graph_ql_post(`{individualsConnection(pagination:{last:5}, search:{field:name operator:eq value:{value:"${individualName}"}}) {edges{cursor node{id}}}}`);
expect(res.statusCode).to.equal(200);
- resBody = JSON.parse(res.body.toString('utf8'));
+ resBody = JSON.parse(res.body.toString('utf8'));
expect(ids).to.include.members(resBody.data.individualsConnection.edges.map((item) => item.node.id));
expect(resBody.data.individualsConnection.edges.map((item) => item.node.id)).to.have.deep.members(ids.slice(-5));
@@ -990,7 +990,7 @@ describe(
individuals = JSON.parse(res.body.toString('utf8')).data.individuals;
expect(individuals).to.deep.equal([]);
- }).timeout(5000);
+ }).timeout(10000);
//one_to_one associations where foreignKey is in the target model
it('22. one_to_one associations setup', function() {
diff --git a/test/sh_integration_test_run.sh b/test/sh_integration_test_run.sh
index 6955dfb0..144f9ec6 100755
--- a/test/sh_integration_test_run.sh
+++ b/test/sh_integration_test_run.sh
@@ -15,7 +15,7 @@
#
# Execution via npm:
#
-# npm run test-integration [-- OPTIONS]
+# npm run test-integration -- [OPTIONS]
#
# cleaup:
# npm run test-integration-clear
@@ -25,29 +25,67 @@
# DESCRIPTION
# Command line utility to perform graphql server's integration-test.
#
-# Intergation-test case creates a docker-compose ambient with three servers:
+# The integration-test command creates a docker-compose environment with three servers:
#
# gql_postgres
# gql_science_db_graphql_server
# gql_ncbi_sim_srv
#
-# By default, after the test run, all corresponding Docker images will be completely removed from the docker, this cleanup step can be skiped with -k option as described below.
+# The default behavior performs the following actions:
#
-# Default behavior performs the following actions:
+# 0) Checks the local testing environment (./docker/integration_test_run) and performs an initial setup the first time the command is run.
+# 1) Stops and removes Docker containers with the docker-compose down command. It also removes Docker images (--rmi) and volumes (-v) created in previous runs.
+# 2) Removes any previously generated code located in the project's testing environment: ./docker/integration_test_run.
+# 3) Generates the code using the test models located in the project's test directory: ./test/integration_test_models.
+# 4) Creates and starts containers with the docker-compose up command.
+# 5) Executes the integration tests.
+# 6) Does a cleanup as described in steps 1) and 2) (use the -k option to skip this step).
#
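The default run described above maps onto the script's top-level functions. A stubbed sketch of the call order (the stubs below only echo their step and stand in for the real implementations, they are not the script's code):

```shell
#!/bin/bash
# Stubbed sketch of the default run order; each stub stands in for the
# function of the same name in sh_integration_test_run.sh.
deleteDockerSetup()       { echo "cleanup"; }
setupTestingEnvironment() { echo "setup"; }
genCode()                 { echo "generate"; }
upContainers()            { echo "up"; }
waitForGql()              { echo "wait"; }
doTests()                 { echo "test"; }

defaultRun() {
  deleteDockerSetup       # remove containers, images, volumes
  setupTestingEnvironment # prepare the testing environment
  genCode                 # generate code from the test models
  upContainers            # docker-compose up
  waitForGql              # wait for the GraphQL servers
  doTests                 # run the integration tests
}
```

Running `defaultRun` prints the six step names in order, mirroring the numbered list above.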
-# 1) Stop and removes Docker containers with docker-compose down command, also removes Docker images (--rmi) and named or anonymous volumes (-v).
-# 2) Removes any previously generated code located on current project's local directory: ./docker/integration_test_run.
+# The options are as follows:
+#
+# -b, --branch
+#
+# This option changes the testing branch of the Zendro server instances. Changing the branch is a permanent side effect.
+#
+# It can be used alone to execute the default script, or in conjunction with -s or -T.
+#
+# The default branch is set to "master". When running tests for the first time, make sure this option is set to the desired branch.
+#
+# -c, --cleanup
+#
+# This option performs the following actions:
+#
+# 1) Stops and removes Docker containers with the docker-compose down command; it also removes Docker images (--rmi) and named or anonymous volumes (-v).
+# 2) Removes the testing environment server instances: ./docker/integration_test_run/{graphql-server,servers}.
+#
+# -C, --softCleanup
+#
+# This option performs the following actions:
+#
+# 1) Stops and removes Docker containers and volumes with docker-compose down command.
+# 2) Removes any previously generated code located on the testing environment server instances: ./docker/integration_test_run/servers.
+#
+# -g, --generate-code
+#
+# This option performs the following actions:
+#
+# 1) Stops and removes containers with the docker-compose down command (without removing images).
+# 2) Removes any previously generated code located on the testing environment server instances: ./docker/integration_test_run/servers.
# 3) Re-generates the code from the test models located on current project's local directory: ./test/integration_test_models. The code is generated on local directory: ./docker/integration_test_run.
# 4) Creates and start containers with docker-compose up command.
-# 5) Excecutes integration tests. The code should exists, otherwise the integration tests are not executed.
-# 6) Do cleanup as described on 1) and 2) steps (use -k option to skip this step).
-#
-# The options are as follows:
#
# -h, --help
#
# Display this help and exit.
#
+# -k, --keep-running
+#
+# This option skips the cleanup step at the end of the integration-test-suite and keeps the Docker containers running.
+#
+# It can be used alone, or in conjunction with the options -t or -T.
+#
+# If this option is not specified, then, by default, the cleanup step is performed at the end of the tests (see -c option).
+#
# -r, --restart-containers
#
# This option performs the following actions:
@@ -57,14 +95,12 @@
#
# Because the containers that manages the test-suite's databases do not use docker named volumes, but transient ones, the databases will be re-initialized by this command, too.
#
-# -g, --generate-code
+# -s, --setup
#
# This option performs the following actions:
#
-# 1) Stop and removes containers with docker-compose down command (without removing images).
-# 2) Removes any previously generated code located on current project's local directory: ./docker/integration_test_run.
-# 3) Re-generates the code from the test models located on current project's local directory: ./test/integration_test_models. The code is generated on local directory: ./docker/integration_test_run.
-# 4) Creates and start containers with docker-compose up command.
+# 1) Clones the graphql-server repository, optionally switching to the specified "-b BRANCH".
+# 2) Uses the cloned repository to create the server instances necessary for the integration tests.
#
# -t, --run-test-only
#
@@ -88,21 +124,6 @@
#
# If option -k is also specified, then cleanup step is skipped at the end of the integration-test-suite, otherwise, the cleanup step is performed at the end of the tests (see -c option).
#
-# -k, --keep-running
-#
-# This option skips the cleanup step at the end of the integration-test-suite and keeps the Docker containers running.
-#
-# This option can be used alone, or in conjunction with the options -t or -T.
-#
-# If this option is not specified, then, by default, the cleanup step is performed at the end of the tests (see -c option).
-#
-# -c, --cleanup
-#
-# This option performs the following actions:
-#
-# 1) Stops and removes Docker containers with docker-compose down command, also removes Docker images (--rmi) and named or anonymous volumes (-v).
-# 2) Removes any previously generated code located on current project's local directory: ./docker/integration_test_run.
-#
# EXAMPLES
# Command line utility to perform graphql server's integration-test.
#
@@ -118,7 +139,7 @@
# To restart containers:
# $ npm run test-integration -- -r
#
-# To generate code and start containers:
+# To generate code:
# $ npm run test-integration -- -g
#
# To do the tests only and keep the containers running at end:
@@ -126,10 +147,15 @@
#
# To generate code and do the tests, removing all Docker images at end:
# $ npm run test-integration -- -T
-
+#
# To do a full clean up (removes containers, images and code):
# $ npm run test-integration -- -c
#
+# To set up a new testing environment on a given branch:
+# $ npm run test-integration -- -b BRANCH -s
+#
+# To do a soft clean up (removes containers, volumes and code, but preserves images):
+# $ npm run test-integration -- -C
#
# exit on first error
@@ -138,35 +164,21 @@ set -e
#
# Constants
#
-DOCKER_SERVICES=(gql_postgres \
- gql_science_db_graphql_server \
- gql_ncbi_sim_srv)
+DOCKER_SERVICES=(
+ gql_postgres
+ gql_science_db_graphql_server
+ gql_ncbi_sim_srv
+)
+TARGET_BRANCH=master
+TARGET_DIR="./docker/integration_test_run"
+INSTANCE_DIRS=(
+ "servers/instance1"
+ "servers/instance2"
+)
TEST_MODELS_INSTANCE1="./test/integration_test_models_instance1"
TEST_MODELS_INSTANCE2="./test/integration_test_models_instance2"
-TARGET_DIR_INSTANCE1="./docker/integration_test_run/instance1"
-TARGET_DIR_INSTANCE2="./docker/integration_test_run/instance2"
-CODEGEN_DIRS=($TARGET_DIR_INSTANCE1"/models/adapters" \
- $TARGET_DIR_INSTANCE1"/models/sql" \
- $TARGET_DIR_INSTANCE1"/models/distributed" \
- $TARGET_DIR_INSTANCE1"/models/generic" \
- $TARGET_DIR_INSTANCE1"/models/zendro-server"
- $TARGET_DIR_INSTANCE1"/migrations" \
- $TARGET_DIR_INSTANCE1"/schemas" \
- $TARGET_DIR_INSTANCE1"/resolvers" \
- $TARGET_DIR_INSTANCE1"/validations" \
- $TARGET_DIR_INSTANCE1"/patches" \
- $TARGET_DIR_INSTANCE2"/models/adapters" \
- $TARGET_DIR_INSTANCE2"/models/sql" \
- $TARGET_DIR_INSTANCE2"/models/distributed" \
- $TARGET_DIR_INSTANCE2"/models/generic" \
- $TARGET_DIR_INSTANCE2"/models/zendro-server"
- $TARGET_DIR_INSTANCE2"/migrations" \
- $TARGET_DIR_INSTANCE2"/schemas" \
- $TARGET_DIR_INSTANCE2"/resolvers" \
- $TARGET_DIR_INSTANCE2"/validations" \
- $TARGET_DIR_INSTANCE2"/patches")
MANPAGE="./man/sh_integration_test_run.man"
-T1=180
+SERVER_CHECK_WAIT_TIME=60
DO_DEFAULT=true
KEEP_RUNNING=false
NUM_ARGS=$#
@@ -181,195 +193,172 @@ NC='\033[0m'
#
#
-# Function: deleteGenCode()
+# Function: checkGqlServer()
#
-# Delete generated code.
+# Check if Zendro GraphQL servers respond to requests.
#
-deleteGenCode() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Removing generated code...${NC}"
+checkGqlServer() {
- # Remove generated code.
- for i in "${CODEGEN_DIRS[@]}"
+ host="localhost:${1}/graphql"
+
+ logTask msg "Testing GraphQL server connection @ $host"
+
+ elapsedTime=0
+ until curl "$host" > /dev/null 2>&1
do
- if [ -d $i ]; then
- rm -rf $i
- if [ $? -eq 0 ]; then
- echo -e "@ Removed: $i ... ${LGREEN}done${NC}"
- else
- echo -e "!!${RED}ERROR${NC}: trying to remove: ${RED}$i${NC} fails ... ${YEL}exit${NC}"
- exit 0
- fi
+
+      # Fail when the wait time limit is reached
+ if [ $elapsedTime == $SERVER_CHECK_WAIT_TIME ]; then
+
+      logTask error "Zendro GraphQL web server did not start: the wait time limit was reached (${SERVER_CHECK_WAIT_TIME}s)"
+ return 1
+
fi
+
+    # Wait 2s and retry
+ sleep 2
+ elapsedTime=$(expr $elapsedTime + 2)
done
+ return 0
- # Msg
- echo -e "@@ All code removed ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
}
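The polling pattern used by `checkGqlServer` (retry a probe command until it succeeds or a deadline passes) can be sketched in isolation. The function and variable names below are illustrative, not part of the script:

```shell
#!/bin/bash
# Poll a command until it succeeds or a timeout (in seconds) is reached.
# Returns 0 on success, 1 if the deadline passes first.
wait_until() {
  local timeout=$1; shift
  local elapsed=0
  until "$@" > /dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  return 0
}
```

`checkGqlServer` applies the same loop with `curl "$host"` as the probe and `SERVER_CHECK_WAIT_TIME` as the deadline.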
#
-# Function: checkCode()
+# Function: checkWorkspace()
#
-# Check if generated code exists.
+# Check if graphql-server instance folders exist.
#
-checkCode() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Check generated code...${NC}"
+checkWorkspace() {
- # Remove generated code.
- for i in "${CODEGEN_DIRS[@]}"
- do
- # Check if directory exists
- if [ -d $i ]; then
- echo -e "Code directory ${LGREEN}$i${NC} exists."
-
- # Check if directory is empty
- #if [ -n "$(ls -A ${i} 2>/dev/null)" ]; then
- # echo -e "@@ Code at: $i ... ${LGREEN}ok${NC}"
- #else
- # echo -e "!!${RED}ERROR${NC}: Code directory: ${RED}$i${NC} exists but is empty!, please try -T option ... ${YEL}exit${NC}"
- # echo -e "${LGRAY}---------------------------- @@${NC}\n"
- # exit 0
- #fi
+ logTask begin "Check graphql-server instances"
+
+ FAIL=false
+
+ for instance in ${INSTANCE_DIRS[@]}; do
+
+ instance_name=$(basename $instance)
+
+ if [[ ! -d "$TARGET_DIR/$instance" ]]; then
+
+ FAIL=true
+ logTask error "Server directory: ${RED}${instance_name}${NC} does not exist!"
else
- echo -e "!!${RED}ERROR${NC}: Code directory: ${RED}$i${NC} does not exists!, please try -T option ... ${YEL}exit${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
- exit 0
+ logTask msg "${instance_name} exists"
fi
+
done
- # Msg
- echo -e "@@ Code check ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
+ if [[ $FAIL == true ]]; then
+ logTask quit "One or more server instances were not installed. Please use ${YEL}-s${NC} or execute a full test run with ${YEL}-k${NC} before using this command."
+ exit 0
+ fi
+
+ logTask end "Instances check"
}
#
-# Function: restartContainers()
+# Function: consumeArgs()
#
-# Downs and ups containers
+# Shifts through the remaining arguments in the $@ list and sets the flag KEEP_RUNNING=true
+# if the argument -k or --keep-running is found.
#
-restartContainers() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Restarting containers...${NC}"
+consumeArgs() {
- # Soft down
- docker-compose -f ./docker/docker-compose-test.yml down
- # Msg
- echo -e "@@ Containers down ... ${LGREEN}done${NC}"
+ while [[ $NUM_ARGS -gt 0 ]]
+ do
+ a="$1"
- # Install
- npm install
- # Msg
- echo -e "@@ Installing ... ${LGREEN}done${NC}"
+ case $a in
+ -k|--keep-running)
- # Up
- docker-compose -f ./docker/docker-compose-test.yml up -d
- # Msg
- echo -e "@@ Containers up ... ${LGREEN}done${NC}"
+ # set flag
+ KEEP_RUNNING=true
- # List
- docker-compose -f ./docker/docker-compose-test.yml ps
+ logTask begin "Keep containers running at end: $KEEP_RUNNING"
- # Msg
- echo -e "@@ Containers restarted ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
+ # Remove last argument
+ shift
+ let "NUM_ARGS--"
+ ;;
+
+ *)
+ logTask msg "Discarding option: ${RED}$a${NC}"
+
+ # Remove last argument
+ shift
+ let "NUM_ARGS--"
+ ;;
+ esac
+ done
}
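The flag-scanning loop in `consumeArgs` reduces to a small, reusable shape: walk the positional parameters, set a flag on `-k`/`--keep-running`, and discard everything else. A minimal standalone sketch (`parse_remaining` is an illustrative name; the script itself tracks the count in `NUM_ARGS` instead of using `$#` directly):

```shell
#!/bin/bash
# Scan the remaining arguments; flip KEEP_RUNNING when -k/--keep-running
# appears, silently discard any other option.
KEEP_RUNNING=false
parse_remaining() {
  while [ $# -gt 0 ]; do
    case "$1" in
      -k|--keep-running) KEEP_RUNNING=true ;;
      *) : ;;  # discard unrecognized option
    esac
    shift
  done
}
```

For example, `parse_remaining --verbose -k` leaves `KEEP_RUNNING=true` while ignoring `--verbose`.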
#
-# Function: cleanup()
+# Function: deleteDockerSetup()
#
-# Default actions (without --keep-running):
-# Remove docker items (containers, images, etc.).
-# Remove generated code.
+# Removes docker containers, images, and volumes.
#
-cleanup() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Starting cleanup...${NC}"
+deleteDockerSetup() {
+
+ logTask begin "Removing docker setup"
- # Hard down
docker-compose -f ./docker/docker-compose-test.yml down -v --rmi all
- # Delete code
- deleteGenCode
+ logTask end "Removed docker setup"
+
+}
+
+#
+# Function: deleteGenCode()
+#
+# Delete generated code.
+#
+deleteGenCode() {
+
+ logTask begin "Removing generated code"
+
+ # Change to workspace root
+ cd $TARGET_DIR
+
+ # Remove generated files
+ bash scripts/clean-workspace.sh
+
+ # Change to project root
+ cd - 1>/dev/null
- # Msg
- echo -e "@@ Cleanup ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
+ logTask end "All code removed"
}
#
-# Function: softCleanup()
+# Function: deleteServerSetup()
#
-# restart & removeCodeGen
+# Delete testing environment.
#
-softCleanup() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Starting soft cleanup...${NC}"
+deleteServerSetup() {
- # Down
- docker-compose -f ./docker/docker-compose-test.yml down
- # Msg
- echo -e "@@ Containers down ... ${LGREEN}done${NC}"
+ logTask begin "Removing Zendro instances"
+
+ # Remove workspace modules and server instances
+ rm -rf $TARGET_DIR/{graphql-server,servers}
- # Delete code
- deleteGenCode
+ logTask end "Zendro instances deleted"
- # Msg
- echo -e "@@ Soft cleanup ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
}
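`deleteServerSetup` relies on bash brace expansion to remove both trees with a single `rm -rf` call. The pattern in isolation, using a throwaway temporary directory rather than the script's `TARGET_DIR`:

```shell
#!/bin/bash
# Brace expansion: "$tmp"/{graphql-server,servers} expands to two paths,
# so one rm -rf invocation removes both trees. Note this is a bash
# feature and is not portable to plain POSIX sh.
tmp=$(mktemp -d)
mkdir -p "$tmp"/{graphql-server,servers}
rm -rf "$tmp"/{graphql-server,servers}
rmdir "$tmp"
```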
#
-# Function: waitForGql()
+# Function: doTests()
#
-# Waits for GraphQL Server to start, for a maximum amount of T1 seconds.
+# Run integration tests using mocha.
#
-waitForGql() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Waiting for GraphQL server to start...${NC}"
+doTests() {
- # Wait until the Science-DB GraphQL web-server is up and running
- waited=0
- until curl 'localhost:3000/graphql' > /dev/null 2>&1
- do
- if [ $waited == $T1 ]; then
- # Msg: error
- echo -e "!!${RED}ERROR${NC}: science-db graphql web server does not start, the wait time limit was reached ($T1).\n"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
- exit 0
- fi
- sleep 2
- waited=$(expr $waited + 2)
- done
+ logTask begin "Starting mocha tests"
- # Msg
- echo -e "@@ First GraphQL server is up! ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
+ mocha ./test/mocha_integration_test.js
- until curl 'localhost:3030/graphql' > /dev/null 2>&1
- do
- if [ $waited == $T1 ]; then
- # Msg: error
- echo -e "!!${RED}ERROR${NC}: science-db graphql web server does not start, the wait time limit was reached ($T1).\n"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
- exit 0
- fi
- sleep 2
- waited=$(expr $waited + 2)
- done
+ logTask end "Mocha tests"
- # Msg
- echo -e "@@ Second GraphQL server is up! ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
}
#
@@ -378,21 +367,18 @@ waitForGql() {
# Generate code.
#
genCode() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Generating code...${NC}"
- # Install
- npm install
- # Msg
- echo -e "@@ Installing ... ${LGREEN}done${NC}"
+ logTask begin "Generating code"
- # Generate
+ TARGET_DIR_INSTANCE1="${TARGET_DIR}/${INSTANCE_DIRS[0]}"
+ TARGET_DIR_INSTANCE2="${TARGET_DIR}/${INSTANCE_DIRS[1]}"
+
+ # Generate code
node ./index.js -f ${TEST_MODELS_INSTANCE1} -o ${TARGET_DIR_INSTANCE1}
node ./index.js -f ${TEST_MODELS_INSTANCE2} -o ${TARGET_DIR_INSTANCE2}
# Patch the resolver for web-server
- #patch -V never ${TARGET_DIR_INSTANCE1}/resolvers/aminoacidsequence.js ./docker/ncbi_sim_srv/amino_acid_sequence_resolver.patch
+ # patch -V never ${TARGET_DIR_INSTANCE1}/resolvers/aminoacidsequence.js ./docker/ncbi_sim_srv/amino_acid_sequence_resolver.patch
patch -V never ${TARGET_DIR_INSTANCE1}/models/generic/aminoacidsequence.js ./docker/ncbi_sim_srv/model_aminoacidsequence.patch
# Add monkey-patching validation with AJV
patch -V never ${TARGET_DIR_INSTANCE1}/validations/individual.js ./test/integration_test_misc/individual_validate.patch
@@ -402,119 +388,227 @@ genCode() {
# Add patch for sql model accession validation
patch -V never ${TARGET_DIR_INSTANCE1}/validations/accession.js ./test/integration_test_misc/accession_validate_instance1.patch
- # Msg
- echo -e "@@ Code generated on ${TARGET_DIR_INSTANCE1} and ${TARGET_DIR_INSTANCE2}: ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
+ logTask end "Code generated on ${TARGET_DIR_INSTANCE1} and ${TARGET_DIR_INSTANCE2}"
+
}
#
-# Function: upContainers()
+# Function: logTask()
#
-# Up docker containers.
+# Logs a task begin or end message to stdout.
#
-upContainers() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Starting up containers...${NC}"
+# USAGE:
+#
+# $ logTask <type> "<message>"
+#
+# <type> = begin | check | end | error | msg | quit
+#
+logTask() {
- # Install
- npm install
- # Msg
- echo -e "@@ Installing ... ${LGREEN}done${NC}"
+ case $1 in
+ begin)
+ echo -e "\n${LGRAY}@@ ----------------------------${NC}"
+ echo -e "${LGRAY}@@ $2...${NC}"
+ ;;
+ check)
+ echo -e "@@ $2 ... ${LGREEN}done${NC}"
+ ;;
+ end)
+ echo -e "@@ $2 ... ${LGREEN}done${NC}"
+ echo -e "${LGRAY}---------------------------- @@${NC}\n"
+ ;;
+ error)
+ echo -e "!!${RED}ERROR${NC}: $2"
+ ;;
+ msg)
+ echo -e "@@ $2"
+ ;;
+ quit)
+ echo -e "@@ $2 ... ${YEL}exit${NC}"
+ ;;
+ esac
+
+}
+
+#
+# Function: man()
+#
+# Show man page of this script.
+#
+man() {
+ # Show
+ more ${MANPAGE}
+}
+
+#
+# Function: restartContainers()
+#
+# Downs and ups containers
+#
+restartContainers() {
+
+ logTask begin "Restarting containers"
+
+ # Soft down
+ docker-compose -f ./docker/docker-compose-test.yml down
+ logTask check "Containers down"
# Up
- docker-compose -f ./docker/docker-compose-test.yml up -d --no-recreate
- # Msg
- echo -e "@@ Containers up ... ${LGREEN}done${NC}"
+ docker-compose -f ./docker/docker-compose-test.yml up -d
+ logTask check "Containers up"
# List
docker-compose -f ./docker/docker-compose-test.yml ps
- # Msg
- echo -e "@@ Containers up ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
+ logTask end "Containers restarted"
+
}
#
-# Function: doTests()
+# Function: resetDockerSetup()
#
-# Do the mocha integration tests.
+# Stops docker-compose testing environment, removes containers and volumes.
#
-doTests() {
- # Msg
- echo -e "\n${LGRAY}@@ ----------------------------${NC}"
- echo -e "${LGRAY}@@ Starting mocha tests...${NC}"
+resetDockerSetup() {
- # Wait for graphql server
- waitForGql
+ logTask begin "Removing docker containers and volumes"
- # Do tests
- mocha ./test/mocha_integration_test.js
+ docker-compose -f ./docker/docker-compose-test.yml down -v
+
+ logTask end "Containers down"
- # Msg
- echo -e "@@ Mocha tests ... ${LGREEN}done${NC}"
- echo -e "${LGRAY}---------------------------- @@${NC}\n"
}
#
-# Function: consumeArgs()
+# Function: setupTestingEnvironment
#
-# Shift the remaining arguments on $# list, and sets the flag KEEP_RUNNING=true if
-# argument -k or --keep-running is found.
+# Clones and initializes a two-server environment workspace.
#
-consumeArgs() {
+setupTestingEnvironment() {
- while [[ $NUM_ARGS -gt 0 ]]
- do
- a="$1"
+ # Remove any existing setup
+ deleteServerSetup
- case $a in
- -k|--keep-running)
+ logTask begin "Cloning main Zendro server"
- # set flag
- KEEP_RUNNING=true
- # Msg
- echo -e "@@ Keep containers running at end: $KEEP_RUNNING"
- # Past argument
- shift
- let "NUM_ARGS--"
- ;;
+ # Declare main server path
+ MAIN_SERVER="${TARGET_DIR}/graphql-server"
+
+  # Clone graphql-server and check out the target branch
+ git clone \
+ --branch $TARGET_BRANCH \
+ git@github.com:Zendro-dev/graphql-server.git \
+ $MAIN_SERVER
+
+  # Force "node-jq" to skip its binary download and use the "jq" provided by the docker image
+ export NODE_JQ_SKIP_INSTALL_BINARY=true
+
+ # Install module dependencies
+ npm install --prefix $MAIN_SERVER
+
+ logTask end "Installed Zendro server"
+
+ # Copy graphql-server instances
+ logTask begin "Creating Zendro instances"
+
+ for instance in ${INSTANCE_DIRS[@]}; do
+
+ mkdir -p $TARGET_DIR/servers
+ cp -r $MAIN_SERVER ${TARGET_DIR}/${instance}
- *)
- # Msg
- echo -e "@@ Discarting option: ${RED}$a${NC}"
- # Past argument
- shift
- let "NUM_ARGS--"
- ;;
- esac
done
+
+ logTask end "Zendro instances created"
+
}
+
#
-# Function: man()
+# Function: upContainers()
#
-# Show man page of this script.
+# Up docker containers.
#
-man() {
- # Show
- more ${MANPAGE}
+upContainers() {
+
+ logTask begin "Starting up containers"
+
+ # Up
+ docker-compose -f ./docker/docker-compose-test.yml up -d --no-recreate
+
+ # List
+ docker-compose -f ./docker/docker-compose-test.yml ps
+
+ logTask end "Containers up"
+
+}
+
+#
+# Function: waitForGql()
+#
+# Waits for GraphQL Server to start, for a maximum amount of SERVER_CHECK_WAIT_TIME seconds.
+#
+waitForGql() {
+
+ logTask begin "Waiting for GraphQL servers to start"
+
+ hosts=(3000 3030)
+ pids=( )
+
+ for h in ${hosts[@]}; do
+
+ checkGqlServer $h &
+    pids+=("$!")
+
+ done
+
+ # Wait until Zendro GraphQL servers are up and running
+ for id in ${pids[@]}; do
+
+ wait $id || exit 0
+
+ done
+
+ logTask end "GraphQL servers are up!"
+
}
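`waitForGql` probes both servers concurrently: it launches one background check per port, collects the PIDs, then waits on each and aborts if any check failed. The fan-out/fan-in skeleton with the probes abstracted into arbitrary commands (`check_all` is an illustrative name, not part of the script):

```shell
#!/bin/bash
# Run each given command in the background, collect the PIDs, then wait
# on every job; fail as soon as any job exited non-zero.
check_all() {
  local pids=()
  for cmd in "$@"; do
    $cmd &                    # fan out: one background job per command
    pids+=("$!")
  done
  for pid in "${pids[@]}"; do
    wait "$pid" || return 1   # fan in: propagate the first failure
  done
  return 0
}
```

In bash, `wait PID` on an already-finished job still returns that job's exit status, so the second loop safely collects results regardless of completion order.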
#
# Main
#
if [ $# -gt 0 ]; then
- #Processes comand line arguments.
+  # Process command line arguments.
while [[ $NUM_ARGS -gt 0 ]]
do
key="$1"
case $key in
+ -b|--branch)
+
+ shift
+ let "NUM_ARGS--"
+
+ TARGET_BRANCH=$1
+
+ if [[ -z $TARGET_BRANCH || $TARGET_BRANCH =~ ^-|^-- ]]; then
+ logTask quit "-b requires a value: ... ${key} ${RED}${NC} $@"
+ exit 0
+ fi
+
+ logTask msg "setting test environment branch to: $TARGET_BRANCH"
+
+ # Forcefully checkout instances to the specified branch
+ cd $TARGET_DIR
+ bash scripts/checkout-branch.sh $TARGET_BRANCH
+ cd - 1>/dev/null
+
+ shift
+ let "NUM_ARGS--"
+ ;;
+
-k|--keep-running)
# Set flag
KEEP_RUNNING=true
# Msg
- echo -e "@@ keep containers running at end: $KEEP_RUNNING"
+ logTask msg "keep containers running at end: $KEEP_RUNNING"
# Past argument
shift
@@ -529,6 +623,15 @@ if [ $# -gt 0 ]; then
exit 0
;;
+ -s|--setup)
+
+ # Setup testing environment
+ setupTestingEnvironment
+
+ # Done
+ exit 0
+ ;;
+
-r|--restart-containers)
# Restart containers
restartContainers
@@ -538,22 +641,24 @@ if [ $# -gt 0 ]; then
;;
-g|--generate-code)
- # Light cleanup
- softCleanup
- # Generate code
+ # Check server instances
+ checkWorkspace
+ # Remove previously generated code
+ deleteGenCode
+ # Run code generator
genCode
- # Ups containers
- upContainers
# Done
exit 0
;;
-t|--run-tests-only)
- # Check code
- checkCode
+ # Check workspace folders
+ checkWorkspace
# Restart containers
upContainers
+ # Wait for graphql servers
+ waitForGql
# Do the tests
doTests
@@ -569,12 +674,15 @@ if [ $# -gt 0 ]; then
;;
-T|--generate-code-and-run-tests)
- # Light cleanup
- softCleanup
- # Generate code
+ # Reset containers and volumes
+ resetDockerSetup
+ # Re-generate code
+ deleteGenCode
genCode
# Up containers
upContainers
+ # Wait for graphql servers
+ waitForGql
# Do the tests
doTests
@@ -590,16 +698,26 @@ if [ $# -gt 0 ]; then
;;
-c|--cleanup)
- # Cleanup
- cleanup
+ # Docker cleanup
+ deleteDockerSetup
+ # Testing environment cleanup
+ deleteServerSetup
+ # Done
+ exit 0
+ ;;
+ -C|--soft-cleanup)
+ # Reset containers and volumes
+ resetDockerSetup
+ # Remove generated code
+ deleteGenCode
# Done
exit 0
;;
*)
# Msg
- echo -e "@@ Bad option: ... ${RED}$key${NC} ... ${YEL}exit${NC}"
+ logTask quit "Bad option: ... ${RED}$key${NC} ... ${YEL}exit${NC}"
exit 0
;;
esac
@@ -610,15 +728,27 @@ fi
# Default
#
if [ $DO_DEFAULT = true ]; then
- # Default: no arguments
- # Cleanup
- cleanup
- # Generate code
- genCode
- # Ups containers
- upContainers
- # Do the tests
- doTests
+
+  # Default run: no arguments
+
+ # Docker cleanup
+ deleteDockerSetup
+
+ # Reset testing environment
+ setupTestingEnvironment
+
+ # Generate code
+ genCode
+
+ # Ups containers
+ upContainers
+
+ # Wait for graphql servers
+ waitForGql
+
+ # Do the tests
+ doTests
+
fi
#
@@ -626,13 +756,19 @@ fi
#
if [ $KEEP_RUNNING = false ]; then
- # Msg
- echo -e "@@ Doing final cleanup..."
- # Cleanup
- cleanup
+ logTask msg "Doing final cleanup"
+
+ # Docker cleanup
+ deleteDockerSetup
+
+ # Testing environment cleanup
+ deleteServerSetup
+
else
- # Msg
- echo -e "@@ Keeping containers running ... ${LGREEN}done${NC}"
- # List
+
+ # List containers
docker-compose -f ./docker/docker-compose-test.yml ps
+
+ logTask end "Keep containers running"
+
fi
diff --git a/views/create-migrations.ejs b/views/create-migrations.ejs
index 5d8868e2..7134bf09 100644
--- a/views/create-migrations.ejs
+++ b/views/create-migrations.ejs
@@ -1,5 +1,5 @@
'use strict';
-const dict = require('../utils/graphql-sequelize-types');
+const dict = require('../../utils/graphql-sequelize-types');
/**
* @module - Migrations to create and to undo a table correpondant to a sequelize model.
diff --git a/views/create-models.ejs b/views/create-models.ejs
index f3ef90b8..093f8fbb 100644
--- a/views/create-models.ejs
+++ b/views/create-models.ejs
@@ -325,7 +325,7 @@ module.exports = class <%- name -%> extends Sequelize.Model{
//validate input
await validatorUtil.validateData('validateForCreate', this, input);
try{
- const result = await sequelize.transaction( async(t) =>{
+ const result = await this.sequelize.transaction( async(t) =>{
let item = await super.create(input, {transaction:t});
return item;
});
@@ -351,7 +351,7 @@ module.exports = class <%- name -%> extends Sequelize.Model{
//validate input
await validatorUtil.validateData('validateForUpdate', this, input);
try{
- let result = await sequelize.transaction( async (t) =>{
+ let result = await this.sequelize.transaction( async (t) =>{
let updated = await super.update( input, { where:{ [this.idAttribute()] : input[this.idAttribute()] }, returning: true, transaction: t } );
return updated;
});
diff --git a/views/create-sql-adapter.ejs b/views/create-sql-adapter.ejs
index 186f52a8..8033d565 100644
--- a/views/create-sql-adapter.ejs
+++ b/views/create-sql-adapter.ejs
@@ -261,7 +261,7 @@ module.exports = class <%- adapterName -%> extends Sequelize.Model{
static async addOne(input){
try{
- const result = await sequelize.transaction( async(t) =>{
+ const result = await this.sequelize.transaction( async(t) =>{
let item = await super.create(input, {transaction:t});
return item;
});
@@ -283,7 +283,7 @@ module.exports = class <%- adapterName -%> extends Sequelize.Model{
static async updateOne(input){
try{
- let result = await sequelize.transaction( async (t) =>{
+ let result = await this.sequelize.transaction( async (t) =>{
let updated = await super.update( input, { where:{ [this.idAttribute()] : input[this.idAttribute()] }, returning: true, transaction: t } );
return updated;
});
diff --git a/views/includes/create-models-fieldMutations.ejs b/views/includes/create-models-fieldMutations.ejs
index 3f925231..1ab0168e 100644
--- a/views/includes/create-models-fieldMutations.ejs
+++ b/views/includes/create-models-fieldMutations.ejs
@@ -1,10 +1,10 @@
<%for(let i=0; i < associationsArguments["to_one"].length; i++){-%>
<% if (associationsArguments["to_one"][i].holdsForeignKey) { -%>
/**
- * <%- op %>_<%-associationsArguments["to_one"][i].targetKey-%> - field Mutation (model-layer) for to_one associationsArguments to <%- op %>
+ * <%- op %>_<%-associationsArguments["to_one"][i].targetKey-%> - field Mutation (model-layer) for to_one associationsArguments to <%- op %>
*
* @param {Id} <%- idAttribute-%> IdAttribute of the root model to be updated
- * @param {Id} <%-associationsArguments["to_one"][i].targetKey-%> Foreign Key (stored in "Me") of the Association to be updated.
+ * @param {Id} <%-associationsArguments["to_one"][i].targetKey-%> Foreign Key (stored in "Me") of the Association to be updated.
*/
static async <%- op -%>_<%-associationsArguments["to_one"][i].targetKey-%>(<%- idAttribute-%>, <%-associationsArguments["to_one"][i].targetKey-%>) {
let updated = await <%- name -%> .update({ <%-associationsArguments["to_one"][i].targetKey-%>: <% if (op == 'remove') { -%>null<% } else { %><%-associationsArguments["to_one"][i].targetKey-%><%}-%>},{where: {<%- idAttribute -%>: <%- idAttribute -%><% if (op == 'remove') { -%>,<%-associationsArguments["to_one"][i].targetKey-%>: <%-associationsArguments["to_one"][i].targetKey-%> <%}-%>}});
@@ -14,13 +14,13 @@
<%}-%>
<%for(let i=0; i < associationsArguments["to_many_through_sql_cross_table"].length; i++){-%>
/**
- * <%- op %>_<%-associationsArguments["to_many_through_sql_cross_table"][i].targetKey-%> - field Mutation (model-layer) for to_one associationsArguments to <%- op %>
+ * <%- op %>_<%-associationsArguments["to_many_through_sql_cross_table"][i].targetKey-%> - field Mutation (model-layer) for to_one associationsArguments to <%- op %>
*
* @param {Id} <%- idAttribute-%> IdAttribute of the root model to be updated
- * @param {Id} <%-associationsArguments["to_many_through_sql_cross_table"][i].targetKey-%> Foreign Key (stored in "Me") of the Association to be updated.
+ * @param {Id} <%-associationsArguments["to_many_through_sql_cross_table"][i].targetKey-%> Foreign Key (stored in "Me") of the Association to be updated.
*/
static async <%- op -%>_<%-associationsArguments["to_many_through_sql_cross_table"][i].targetKey-%>(record, <%- op -%><%-associationsArguments["to_many_through_sql_cross_table"][i].name_cp-%>){
- const updated = await sequelize.transaction(async (transaction) => {
+ const updated = await this.sequelize.transaction(async (transaction) => {
return await record.<%-op-%><%-associationsArguments["to_many_through_sql_cross_table"][i].name_cp-%>(<%- op -%><%-associationsArguments["to_many_through_sql_cross_table"][i].name_cp-%>, {transaction: transaction});
});
return updated;