At a URL path on an existing WWW server (ex: foo.com/gitlab). This guide illustrates the installation of GitLab at foo.com/gitlab, but the instructions can be adapted to install GitLab at a dedicated domain name."},{"location":"admin/gitlab/index.html#download-release-package","title":"Download Release Package","text":"
The software is available as a zip package. Please download and unzip it. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using a Windows/Linux/MacOS terminal in the DTaaS-v0.6.0 folder.
"},{"location":"admin/gitlab/index.html#configure-and-install","title":"Configure and Install","text":"cd DTaaS/deploy/services/gitlab\n
This directory contains the files needed to set up the Docker container running the local GitLab instance. The ./data, ./config and ./logs directories will hold the data, configuration and logs of the GitLab instance. compose.gitlab.yml and .env are the Docker Compose and environment files used to manage the containerized GitLab instance.
If the DTaaS application and GitLab are to be hosted at https://foo.com, then the client config file (deploy/config/client/env.js) needs to use https://foo.com/gitlab as REACT_APP_AUTH_AUTHORITY. In addition, hosting at https://foo.com also requires changes to the config file (.env.server).
If the DTaaS application and GitLab are to be hosted at https://localhost, then the client config file (deploy/config/client/env.local.js) needs to use https://localhost/gitlab as REACT_APP_AUTH_AUTHORITY. If the application and the integrated GitLab are to be hosted at https://localhost/gitlab, then .env.server need not be modified.
Edit the .env file available in this directory to contain the following variables:
| Variable | Example Value | Explanation |
|---|---|---|
| GITLAB_HOME | '/home/Desktop/DTaaS/deploy/services/gitlab' | Full path to the DTaaS gitlab directory. This is an absolute path with no trailing slash. |
| SERVER_DNS | either foo.com or localhost | The server DNS, if you are deploying with a dedicated server. Remember not to use http(s) at the beginning of the DNS string. |

NOTE: The DTaaS client uses the react-oidc-context node package, which incorrectly causes redirects to use the HTTPS URL scheme. This is a known issue with the package, and forces us to use HTTPS for the DTaaS server. If you are hosting the DTaaS locally, your GitLab instance should be available at https://localhost/gitlab. If you are hosting the DTaaS at https://foo.com, then your GitLab instance should be available at https://foo.com/gitlab.
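Using the example values from the table above, the resulting .env file would contain entries like the following (adjust both values to your own setup; quoting style may differ in your file):
GITLAB_HOME='/home/Desktop/DTaaS/deploy/services/gitlab'\nSERVER_DNS=foo.com\n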
"},{"location":"admin/gitlab/index.html#run","title":"Run","text":"NOTE: The GitLab instance operates with the dtaas-frontend
network, which requires the DTaaS server to be running before you start it. You may refer to secure installation scenarios (../overview.md) for the same.
The commands to start and stop the instance are:
# (cd deploy/services/gitlab)\ndocker compose -f compose.gitlab.yml up -d\ndocker compose -f compose.gitlab.yml down\n
Each time you start the container, it may take a few minutes to become ready. You can monitor the progress with watch docker ps and check whether the gitlab container is healthy.
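Besides watch docker ps, one way to query the health status directly is with docker inspect (the container name gitlab matches the one used in the docker exec command later in this guide):
# print the health status of the gitlab container\ndocker inspect --format '{{.State.Health.Status}}' gitlab\n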
"},{"location":"admin/gitlab/index.html#post-install-configuration","title":"Post-install Configuration","text":"Gitlab also requires post-installation configuration.
Information
This configuration needs to be done from within the running container.
docker exec -it gitlab bash\n
The configuration file to change is /etc/gitlab/gitlab.rb. The variables to change are:
external_url 'http(s)://foo.com/gitlab'\nnginx['listen_port'] = 80\nnginx['enable'] = true\n\nnginx['listen_https'] = false\nnginx['redirect_http_to_https'] = false\nletsencrypt['enable'] = false\n
The external_url mentioned above indicates hosting of GitLab at https://foo.com/gitlab. If GitLab needs to be available at https://localhost/gitlab, then the external_url should be https://localhost/gitlab.
Save the changes and reconfigure gitlab by running:
# inside the gitlab docker container\ngitlab-ctl reconfigure\nexit\n
The administrator username for GitLab is root. The password for this user account will be available in /etc/gitlab/initial_root_password. Be sure to save this password somewhere, as this file will be deleted 24 hours after the first time you start the local instance.
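One way to read this password from the host, without opening a shell inside the container, is:
# print the initial root password from the running gitlab container\ndocker exec gitlab cat /etc/gitlab/initial_root_password\n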
After running the container, your local GitLab instance will be available at the external_url specified in gitlab.rb, i.e., either at https://foo.com/gitlab or at https://localhost/gitlab.
"},{"location":"admin/gitlab/index.html#create-users","title":"Create Users","text":"The newly installed gitlab only contains root
user. More users need to be created for use with DTaaS. Please see the Gitlab docs for further help.
"},{"location":"admin/gitlab/index.html#pending-tasks","title":"Pending Tasks","text":"This README helps with installation of Gitlab along side DTaaS application. But the OAuth2 integration between Gitlab and DTaaS will still be still pending. Follow the integration guide and runner setup to setup the Gitlab integration.
"},{"location":"admin/gitlab/integration.html","title":"GitLab Integration Guide","text":"This guide helps with integration of a local GitLab instance with a DTaaS server installation, and integrating the OAuth Authorization feature with the DTaaS installation.
After following this guide, the GitLab instance will be integrated as an OAuth provider for both the DTaaS client application and the Traefik Forward Auth backend authorization.
Note
The DTaaS client uses the react-oidc-context
node package, which incorrectly causes authorization redirects to use the HTTPS
URL scheme. This is a known issue with the package, and forces us to use HTTPS
for the DTaaS server. This means your server should be set up to use either https://localhost or https://foo.com. This guide will henceforth use foo.com
to represent either localhost or a custom domain.
"},{"location":"admin/gitlab/integration.html#integration-steps","title":"Integration Steps","text":""},{"location":"admin/gitlab/integration.html#1-set-up-the-dtaas-server-over-https","title":"1. Set up the DTaaS server over HTTPS","text":"Follow the existing guides to set up the DTaaS web application over HTTPS connection on either localhost (https://localhost) or a custom domain (https://foo.com).
Note
You may ignore steps related to configuring OAuth application tokens at https://gitlab.com. We will be using the initial installation to host the local GitLab instance, on which we will later create the OAuth application tokens.
"},{"location":"admin/gitlab/integration.html#2-set-up-the-gitlab-instance","title":"2. Set up the GitLab Instance","text":"Follow the guide to set up a GitLab instance.
After this step, and once you run gitlab-ctl reconfigure, you will have a functioning GitLab instance (at either https://localhost/gitlab or https://foo.com/gitlab). Note down the login credentials of the root user.
"},{"location":"admin/gitlab/integration.html#3-create-oauth-tokens-in-gitlab","title":"3. Create OAuth Tokens in GitLab","text":"Follow these guides to create OAuth Application Tokens for - backend and client. Please note that backend is not required for https://localhost installation.
After this step you will have credentials for the application tokens titled \"DTaaS Server Authorization\" and \"DTaaS Client Authorization\", which we will use in the next step.
"},{"location":"admin/gitlab/integration.html#4-use-valid-oauth-application-tokens","title":"4. Use Valid Oauth Application Tokens","text":"We can now use the OAuth tokens generated on the GitLab instance to enable authorization.
If the DTaaS application is hosted at https://localhost, then configure the following files:
- DTaaS Client Authorization token in deploy/config/client/env.local.js.
- deploy/docker/.env.local - Add the local path and username.
If the DTaaS application is hosted at https://foo.com, then configure the following files:
- DTaaS Client Authorization token in deploy/config/client/env.js.
- deploy/docker/.env.server - Add the local path and username, plus the OAuth client ID and client secret from the DTaaS Server Authorization token (see the illustrative sketch below).
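For orientation only, a hypothetical sketch of what these additions could look like is shown below. The variable names are assumptions borrowed from the environment-variable table in the Portainer guide (DTAAS_DIR, username1) and the OAuth checklist table (OAUTH_CLIENT_ID, OAUTH_CLIENT_SECRET); verify them against your actual .env.server file before use.
# hypothetical excerpt of deploy/docker/.env.server (names are assumptions, check your file)\nDTAAS_DIR='/home/Desktop/DTaaS'\nusername1='user1'\nOAUTH_CLIENT_ID='xx'\nOAUTH_CLIENT_SECRET='xx'\n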
"},{"location":"admin/gitlab/integration.html#restart-services","title":"Restart Services","text":""},{"location":"admin/gitlab/integration.html#localhost-installation","title":"Localhost Installation","text":"The updated OAuth application configuration needs to be loaded into the client website service.
cd deploy/docker\ndocker compose -f compose.local.yml --env-file .env.local up -d --force-recreate client\n
"},{"location":"admin/gitlab/integration.html#production-server-installation","title":"Production Server Installation","text":"The updated OAuth application configuration needs to be loaded into the client website and the forward-auth services.
The production server can be installed with either http or https option. If it is installed with http option, run the following commands.
cd deploy/docker\ndocker compose -f compose.server.yml --env-file .env.server up -d --force-recreate client\ndocker compose -f compose.server.yml --env-file .env.server up -d --force-recreate traefik-forward-auth\n
If the production server is installed with https option, run the following commands.
cd deploy/docker\ndocker compose -f compose.server.secure.yml --env-file .env.server up -d --force-recreate client\ndocker compose -f compose.server.secure.yml --env-file .env.server up -d --force-recreate traefik-forward-auth\n
"},{"location":"admin/gitlab/integration.html#post-setup-usage","title":"Post Setup Usage","text":"If you have set up everything correctly:
- You will have a functioning path-prefixed GitLab instance available at
https://foo.com/gitlab
that you may use in a similar manner to https://gitlab.com. - Data, configuration settings and logs pertaining to the GitLab installation will be available on the DTaaS server within the directory: deploy/services/gitlab.
- Traefik Forward Auth will use the path-prefixed GitLab instance for authorization on the multi-user installation scenario i.e.
foo.com
(but not on localhost
).
"},{"location":"admin/gitlab/runner.html","title":"Gitlab Runner Integration","text":"To properly use the Digital Twins Preview Page, you need to configure at least one project runner in your GitLab profile. The first step is to configure the CI/CD pipeline in gitlab project. The second step is to install the runner and integrate it with the selected gitlab project.
"},{"location":"admin/gitlab/runner.html#configure-gitlab-project","title":"Configure Gitlab Project","text":"Follow the steps below:
- Navigate to the DTaaS group and select the project named after your GitLab username.
- In the project menu, go to Settings and select CI/CD.
- Expand the Runners section and click on New project runner. Follow the configuration instructions carefully:
- Add linux as a tag during configuration.
- Click on Create runner. A runner authentication token is generated. This token will be used later for registering a runner.
"},{"location":"admin/gitlab/runner.html#runner","title":"Runner","text":""},{"location":"admin/gitlab/runner.html#install-runner","title":"Install Runner","text":"A detailed guide on installation of gitlab runners on Linux OS is available on gitlab website. Remember to use linux
as tag for the runner.
"},{"location":"admin/gitlab/runner.html#register-runner","title":"Register Runner","text":"Please see this gitlab guide on registering a runner.
Remember to choose docker as executor and ruby:2.7 as the default docker image.
sudo gitlab-runner register --url https://gitlab.foo.com \\\n --token xxxxx\n
Or, you can also register the runner in non-interactive mode by running
sudo gitlab-runner register \\\n --non-interactive \\\n --url \"https://gitlab.foo.com/\" \\\n --token \"xxxx\" \\\n --executor \"docker\" \\\n --docker-image ruby:2.7 \\\n --description \"docker-runner\"\n
"},{"location":"admin/gitlab/runner.html#start-runner","title":"Start Runner","text":"You can manually verify that the runner is available to pick up jobs by running the following command:
sudo gitlab-runner run\n
It can also be used to reactivate offline runners during subsequent sessions.
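You can also list the runners registered on the machine with the standard gitlab-runner subcommand:
sudo gitlab-runner list\n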
"},{"location":"admin/gitlab/runner.html#pipeline-trigger-token","title":"Pipeline Trigger Token","text":"You also need to create a pipeline trigger token. This token is required to trigger pipelines by using the API. You can create this token in your GitLab project's CI/CD settings under the Pipeline trigger tokens section.
"},{"location":"admin/guides/add_service.html","title":"Add other services","text":"Pre-requisite
You should read the documentation about the already available services
This guide will show you how to add more services. In the following example we will be adding MongoDB as a service, but these steps could be modified to install other services as well.
Adding other services requires more RAM and CPU power. Please make sure the host machine meets the hardware requirements for running all the services.
1. Add the configuration:
Select configuration parameters for the MongoDB service.
| Configuration Variable Name | Description |
|---|---|
| username | the username of the root user in the MongoDB |
| password | the password of the root user in the MongoDB |
| port | the mapped port on the host machine (default is 27017) |
| datapath | path on the host machine to mount the data from the MongoDB container |

Open the file /deploy/services/services.yml and add the configuration for MongoDB:
services:\n rabbitmq:\n username: \"dtaas\"\n password: \"dtaas\"\n vhost: \"/\"\n ports:\n main: 5672\n management: 15672\n ...\n mongodb:\n username: <username>\n password: <password>\n port: <port>\n datapath: <datapath>\n ...\n
2. Add the script:
The next step is to add the script that sets up the MongoDB container with the configuration.
Create new file named /deploy/services/mongodb.js
and add the following code:
#!/usr/bin/node\n/* Install the optional platform services for DTaaS */\nimport { $ } from \"execa\";\nimport chalk from \"chalk\";\nimport fs from \"fs\";\nimport yaml from \"js-yaml\";\n\nconst $$ = $({ stdio: \"inherit\" });\nconst log = console.log;\nlet config;\n\ntry {\n log(chalk.blue(\"Load services configuration\"));\n config = await yaml.load(fs.readFileSync(\"services.yml\", \"utf8\"));\n log(\n chalk.green(\n \"configuration loading is successful and config is a valid yaml file\"\n )\n );\n} catch (e) {\n log(chalk.red(\"configuration is invalid. Please rectify services.yml file\"));\n process.exit(1);\n}\n\nlog(chalk.blue(\"Start MongoDB server\"));\nconst mongodbConfig = config.services.mongodb;\n\ntry {\n log(\n chalk.green(\n \"Attempt to delete any existing MongoDB server docker container\"\n )\n );\n await $$`docker stop mongodb`;\n await $$`docker rm mongodb`;\n} catch (e) {}\n\nlog(chalk.green(\"Start new Mongodb server docker container\"));\nawait $$`docker run -d -p ${mongodbConfig.port}:27017 \\\n --name mongodb \\\n -v ${mongodbConfig.datapath}:/data/db \\\n -e MONGO_INITDB_ROOT_USERNAME=${mongodbConfig.username} \\\n -e MONGO_INITDB_ROOT_PASSWORD=${mongodbConfig.password} \\\n --restart always \\\n mongo:7.0.3`;\nlog(chalk.green(\"MongoDB server docker container started successfully\"));\n
3. Run the script:
Go to the directory /deploy/services/
and run services script with the following commands:
yarn install\nnode mongodb.js\n
The MongoDB should now be available on services.foo.com:<port>.
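To confirm the container is up, you can filter the running containers by the name used in the script above:
docker ps --filter \"name=mongodb\"\n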
"},{"location":"admin/guides/add_user.html","title":"Add User","text":"This page provides steps to adding a user from a DTaaS installation. The username alice is used here to illustrate the steps involved in removing a user account.
Please do the following:
1. Add user to Gitlab instance:
Add a new account for the new user on the Gitlab instance. Note the username and email of the new account.
2. Create User Workspace:
Use the DTaaS CLI to bring up the workspaces for new users. This brings up the containers, without the backend authorization.
3. Add backend authorization for the user:
- Go to the docker directory
cd <DTaaS>/docker\n
- Add three lines to the
conf.server
file
rule.onlyu3.action=auth\nrule.onlyu3.rule=PathPrefix(`/alice`)\nrule.onlyu3.whitelist = alice@foo.com\n
4. Restart the docker container responsible for backend authorization.
docker compose -f compose.server.yml --env-file .env up -d --force-recreate traefik-forward-auth\n
5. The new users are now added to the DTaaS instance, with authorization enabled.
"},{"location":"admin/guides/common_workspace_readonly.html","title":"Make Common Assets Read Only","text":""},{"location":"admin/guides/common_workspace_readonly.html#why","title":"Why","text":"In some cases you might want to restrict the access rights of some users to the common assets. In order to make the common area read only, you have to change the install script section performing the creation of user workspaces.
Note
This step needs to be performed before installation of the application.
"},{"location":"admin/guides/common_workspace_readonly.html#how","title":"How","text":"To make the common assets read-only for a user, the following changes need to be made to the compose.server.yml
file.
...\n user1:\n ....\n volumes:\n - ${DTAAS_DIR}/files/common:/workspace/common:ro\n ....\n\n user2:\n ....\n volumes:\n - ${DTAAS_DIR}/files/common:/workspace/common:ro\n ....\n
Please note the :ro at the end of the line. This suffix makes the common assets read only. If you want to have the same kind of read-only restriction for new users as well, please make a similar change in cli/users.server.yml.
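To verify the restriction, try writing to the common folder from a terminal inside an affected user's workspace; the write should be rejected (the /workspace/common mount point comes from the compose snippet above):
touch /workspace/common/probe.txt\n# expected: touch: cannot touch '/workspace/common/probe.txt': Read-only file system\n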
"},{"location":"admin/guides/link_service.html","title":"Link services to local ports","text":"Requirements
- User needs to have an account on server2.
- SSH server must be running on server2
To link a port from the services machine (server2) to a local port on the user workspace, you can use the SSH local port forwarding technique.
1. Step:
Go to the user workspace in which you want to map a localhost port to the services machine.
2. Step:
Open a terminal in your user workspace.
3. Step:
Run the following command to map a port:
ssh -fNT -L <local_port>:<destination>:<destination_port> <user>@<services.server.com>\n
Here's an example mapping the RabbitMQ broker service available at 5672 of services.foo.com to localhost port 5672.
ssh -fNT -L 5672:localhost:5672 vagrant@services.foo.com\n
Now the programs in user workspace can treat the RabbitMQ broker service as a local service running within user workspace.
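To confirm that the tunnel is active, you can check that the local port is now listening (shown here for the RabbitMQ example above; ss is available on most Linux workspaces):
ss -tln | grep 5672\n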
"},{"location":"admin/guides/localhost_portainer.html","title":"Install DTaaS on localhost (GUI)","text":"The installation instructions provided in this README are ideal for running the DTaaS on localhost via a Graphical User Interface (GUI). This installation is ideal for single users intending to use DTaaS on their own computers.
"},{"location":"admin/guides/localhost_portainer.html#design","title":"Design","text":"An illustration of the docker containers used and the authorization setup is shown here.
"},{"location":"admin/guides/localhost_portainer.html#requirements","title":"Requirements","text":"The installation requirements to run this docker version of the DTaaS are:
- docker desktop / docker CLI with compose plugin
- User account on gitlab.com
Tip
The frontend website requires authorization. The default authorization configuration works for gitlab.com. If you desire to use locally hosted gitlab instance, please see the client docs.
"},{"location":"admin/guides/localhost_portainer.html#download-release-package","title":"Download Release Package","text":"The software is available for as zip package. Please download and unzip the same. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using Windows/Linux/MacOS terminal in DTaaS-v0.6.0 folder.
In this guide we will assume the contents of the zip file have been extracted to the directory: /home/DTaaS
.
Tip
The path given here is for Linux OS. It can be Windows compatible as well, for example: C:\\\\DTaaS
. Make sure to use this path and format in place of /home/DTaaS
in this guide.
"},{"location":"admin/guides/localhost_portainer.html#starting-portainer","title":"Starting Portainer","text":"The GUI used to run the application and docker containers will be provided by Portainer Community Edition. It is itself a Docker container that will create a website at https://localhost:9443
, which will present a graphical interface for starting and stopping the application.
You may follow the official documentation for setting up a Portainer CE Server . Alternatively, open a terminal on your system (Terminal on Linux / MacOS, Powershell on Windows, etc) and copy the following commands into it:
docker volume create portainer_data\ndocker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:2.21.4\n
This will start the Portainer server on your system, which will host its dashboard at https://localhost:9443
. Follow the Initial Setup Guide to set up an administrator account for Portainer on your system.
Portainer should now be set up on your system, and you can access the dashboard:
Tip
The next time you wish to start the Portainer server, run docker start portainer
.
"},{"location":"admin/guides/localhost_portainer.html#configuration","title":"Configuration","text":""},{"location":"admin/guides/localhost_portainer.html#create-user-workspace","title":"Create User Workspace","text":"The existing filesystem for installation is setup for user1
. A new filesystem directory needs to be created for the selected user.
You may use your file explorer or an equivalent application to duplicate the files/user1
directory and rename it as files/username
where username is the selected username registered on https://gitlab.com.
Alternatively, you may execute the following commands from the top-level directory of the DTaaS project.
cp -R files/user1 files/username\n
"},{"location":"admin/guides/localhost_portainer.html#creating-the-portainer-stack","title":"Creating the Portainer Stack","text":"Portainer Stacks are equivalent to using docker compose
commands to manage containers.
- Navigate to the Stacks tab on the side panel, and click on the Add Stack button.
- Name the Stack anything descriptive, for example: dtaas-localhost.
- Select the Upload build method.
- Upload the compose file located at deploy/docker/compose.local.yml.
- Select the option to load variables from a .env file, and upload the file deploy/docker/.env.local.
Tip
Sometimes the .env.local file does not show up in the file explorer. You may fix this by selecting the option to show All Files rather than those with the extension .env.
The .env.local file contains environment variables that are used by the compose file. Portainer allows you to modify them as shown in the screenshot above; here is a summary:

| Variable | Example Value | Explanation |
|---|---|---|
| DTAAS_DIR | '/home/Desktop/DTaaS' | Full path to the DTaaS directory. This is an absolute path with no trailing slash. |
| username1 | 'user1' | Your gitlab username |

Tip
Important points to note:
- The path examples given here are for Linux OS. These paths can be Windows OS compatible paths as well.
- The client configuration file is located at deploy/config/client/env.local.js. If you are following the guide to use HTTPS on localhost, edit the URLs in this file by replacing http with https.
Once you have configured the environment variables, click on the button Deploy the stack.
"},{"location":"admin/guides/localhost_portainer.html#use","title":"Use","text":"The application will be accessible at: http://localhost from web browser. Sign in using your https://gitlab.com account.
All the functionality of DTaaS should be available to you through the single page client now.
"},{"location":"admin/guides/localhost_portainer.html#limitations","title":"Limitations","text":"The library microservice is not included in the localhost installation scenario.
"},{"location":"admin/guides/localhost_portainer.html#references","title":"References","text":"Image sources: Traefik logo, ml-workspace, reactjs, gitlab
"},{"location":"admin/guides/remove_user.html","title":"Remove User","text":"This page provides steps to remove a user from a DTaaS installation. The username alice is used here to illustrate the steps involved in removing a user account.
Please do the following:
1. Remove an existing user with the DTaaS CLI.
2. Remove backend authorization for the user:
- Go to the docker directory
cd <DTaaS>/docker\n
- Remove these three lines from the
conf.server
file
rule.onlyu3.action=auth\nrule.onlyu3.rule=PathPrefix(`/alice`)\nrule.onlyu3.whitelist = alice@foo.com\n
- Run the command for these changes to take effect:
docker compose -f compose.server.yml --env-file .env up -d --force-recreate traefik-forward-auth\n
The extra users now have no backend authorization.
3. Remove users from the Gitlab instance (optional):
Please see gitlab docs for more help.
4. The user account is now deleted.
"},{"location":"admin/guides/remove_user.html#caveat","title":"Caveat","text":"You cannot delete the two base users that the DTaaS software is installed with. You can only delete the extra users that have been added to the software.
"},{"location":"admin/servers/auth.html","title":"OAuth for Traefik Gateway","text":"The traefik gateway is used to serve the DTaaS. All the services provided as part of the application are secured at the traefik gateway. The security is based on Traefik forward-auth.
An illustration of the docker containers used and the authorization setup is shown here.
The traefik forward-auth can use any OAuth2 provider, but within DTaaS, gitlab is used as the authorization provider. You will use the OAuth web / server application authorization flow.
Here are the steps to get started:
1. Choose GitLab Server:
- You need to set up OAuth authorization on a GitLab server. The commercial gitlab.com is not suitable for multi-user authorization (DTaaS requires this), so you'll need an on-premise GitLab instance.
- You can use GitLab Omnibus Docker for this purpose.
- Configure the OAuth application as an instance-wide authorization type. Select the option to generate a client secret and also select the option for a trusted application.
2. Determine Website Hostname:
Before setting up OAuth on GitLab, decide on the hostname for your website. It's recommended to use a self-hosted GitLab instance, which you will use in other parts of the DTaaS application.
3. Determine Callback and Logout URLs:
For the web / server authorization flow to function correctly, you need two URLs: a callback URL and a logout URL.
- The callback URL informs the OAuth provider of the page where signed-in users should be redirected. It is the landing homepage of the DTaaS application. (either http://foo.com/_oauth/ or http://localhost/_oauth/)
- The logout URL is used to sign out of gitlab and clear the authorization within traefik-forward-auth (either http://foo.com/_oauth/logout or http://localhost/_oauth/logout). The logout URL helps users log out of traefik forward-auth. The logout URL should not be entered into the Gitlab OAuth application setup.
4. Create OAuth Application:
The OAuth application setup on GitLab can be found under Edit Profile -> Applications (https://gitlab.foo.com/-/profile/applications).
During the creation of the OAuth application on GitLab, you need to specify the scope. Choose read_user scope.
5. Copy Application Credentials:
After successfully creating the OAuth application, GitLab generates an application ID and client secret.
Both these values are long strings of HEX characters that you will need for your configuration files.
6. Checklist: Required Information from OAuth Application:
You will need the following information from the OAuth application registered on GitLab:
| GitLab Variable Name | Variable Name in .env of docker compose file | Default Value |
|---|---|---|
| OAuth Provider | OAUTH_URL | https://gitlab.foo.com/ |
| Application ID | OAUTH_CLIENT_ID | xx |
| Application Secret | OAUTH_CLIENT_SECRET | xx |
| Callback URL | (to be directly entered in Gitlab OAuth registration) | |
| Forward-auth secret | OAUTH_SECRET | random-secret-string (password for forward-auth, can be changed to your preferred string) |
| Scopes | | read_user |

"},{"location":"admin/servers/auth.html#development-environment","title":"Development Environment","text":"The development environment and server installation scenarios require traefik forward-auth.
"},{"location":"admin/servers/auth.html#configure-authorization-rules-for-traefik-forward-auth","title":"Configure Authorization Rules for Traefik Forward-Auth","text":"The Traefik forward-auth microservices requires configuration rules to manage authorization for different URL paths. The conf.server file can be used to configure the specific rules. There are broadly three kinds of URLs:
"},{"location":"admin/servers/auth.html#public-path-without-authorization","title":"Public Path Without Authorization","text":"To setup a public page, an example is shown below.
rule.noauth.action=allow\nrule.noauth.rule=Path(`/public`)\n
Here, 'noauth' is the rule name, and should be changed to suit rule use. Rule names should be unique for each rule. The 'action' property is set to \"allow\" to make the resource public. The 'rule' property defines the path/route to reach the resource.
"},{"location":"admin/servers/auth.html#common-to-all-users","title":"Common to All Users","text":"To setup a common page that requires Gitlab OAuth, but is available to all users of the Gitlab instance:
rule.all.action=auth\nrule.all.rule=Path(`/common`)\n
The 'action' property is set to \"auth\", to enable Gitlab OAuth before the resource can be accessed.
"},{"location":"admin/servers/auth.html#selective-access","title":"Selective Access","text":"Selective Access refers to the scenario of allowing access to a URL path for a few users. To setup selective access to a page:
rule.onlyu1.action=auth\nrule.onlyu1.rule=Path(`/user1`)\nrule.onlyu1.whitelist = user1@localhost\n
The 'whitelist' property of a rule defines a comma separated list of email IDs that are allowed to access the resource. While signing in users can sign in with either their username or email ID as usual, but the email ID corresponding to the account should be included in the whitelist.
This restricts access of the resource, allowing only users mentioned in the whitelist.
"},{"location":"admin/servers/auth.html#user-management","title":"User management","text":"DTaaS provides an easy way to add and remove additional users from your DTaaS instance.
All such user management can be done via the DTaaS CLI
"},{"location":"admin/servers/auth.html#limitation","title":"Limitation","text":"The rules in _conf._ file are not dynamically loaded into the traefik-forward-auth microservice. Any change in the conf file requires restart of traefik-forward-auth for the changes to take effect. All the existing user sessions get invalidated when the traefik-forward-auth* restarts.
Use a simple command on the terminal.
docker compose -f compose.server.yml --env-file .env up -d --force-recreate traefik-forward-auth\n
docker compose -f compose.server.yml --env-file .env.server up -d --force-recreate traefik-forward-auth\n
"},{"location":"admin/servers/lib/config.html","title":"Configure Library Microservice","text":"The microservices requires config specified in INI format. The template configuration file is:
PORT='4001'\nMODE='local'\nLOCAL_PATH='/Users/<Username>/DTaaS/files'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib' or ''\nGRAPHQL_PLAYGROUND='false' or 'true'\n
The LOCAL_PATH
variable is the absolute filepath to the location of the local directory which will be served to users by the Library microservice.
Replace the default values the appropriate values for your setup.
The config is saved .env
file by convention. The libms looks for .env
file in the working directory from which it is run. If you want to run libms without explicitly specifying the configuration file, run
"},{"location":"admin/servers/lib/docker.html","title":"Host Library Microservice","text":"The lib microservice is a simplified file manager serving files over graphQL and HTTP API.
It has two features:
- provide a listing of directory contents.
- transfer a file to user.
This document provides instructions for running docker container to provide a stand alone library microservice.
"},{"location":"admin/servers/lib/docker.html#setup-the-file-system","title":"Setup the File System","text":""},{"location":"admin/servers/lib/docker.html#outside-dtaas","title":"Outside DTaaS","text":"The package can be used independently of DTaaS. If this is your use case, you do not need any specific file structure. A valid file directory named files
is sufficient and it should be placed in the directory from which the compose.lib.yml
will be run.
"},{"location":"admin/servers/lib/docker.html#inside-dtaas","title":"Inside DTaaS","text":"The users of DTaaS expect the following file system structure for their reusable assets.
There is a skeleton file structure in DTaaS codebase. You can copy and create file system for your users. Remember to name the directory containing the file structure as files
and place it in the directory from which the compose.lib.yml
will be run.
"},{"location":"admin/servers/lib/docker.html#use","title":"Use","text":"Use the docker compose file to start the service.
# To bring up the container\ndocker compose -f compose.lib.yml up -d\n# To bring down the container\ndocker compose -f compose.lib.yml down\n
"},{"location":"admin/servers/lib/docker.html#service-endpoints","title":"Service Endpoints","text":"The GraphQL URL: localhost:4001/lib
The HTTP URL: localhost:4001/lib/files
The service API documentation is available on user page.
"},{"location":"admin/servers/lib/npm.html","title":"Host Library Microservice","text":"The lib microservice is a simplified file manager serving files over graphQL and HTTP API.
It has two features:
- provide a listing of directory contents.
- upload and download files
This document provides instructions for installing npm package of library microservice and running the same as a standalone service.
"},{"location":"admin/servers/lib/npm.html#setup-the-file-system","title":"Setup the File System","text":""},{"location":"admin/servers/lib/npm.html#outside-dtaas","title":"Outside DTaaS","text":"The package can be used independently of DTaaS. If this is your use case, you do not need any specific file structure. Any valid file directory is sufficient.
"},{"location":"admin/servers/lib/npm.html#inside-dtaas","title":"Inside DTaaS","text":"The users of DTaaS expect the following file system structure for their reusable assets.
There is a skeleton file structure in DTaaS codebase. You can copy and create file system for your users.
"},{"location":"admin/servers/lib/npm.html#install","title":"Install","text":"The npm package is available in Github packages registry and on npmjs. Prefer the package on npmjs over Github.
Set the registry and install the package with the one of the two following commands
"},{"location":"admin/servers/lib/npm.html#npmjs","title":"npmjs","text":"sudo npm install -g @into-cps-association/libms # requires no login\n
"},{"location":"admin/servers/lib/npm.html#github","title":"Github","text":"# requires login\nsudo npm config set @into-cps-association:registry https://npm.pkg.github.com\n
The github package registry asks for username and password. The username is your Github username and the password is your Github personal access token. In order for the npm to download the package, your personal access token needs to have read:packages scope.
"},{"location":"admin/servers/lib/npm.html#use","title":"Use","text":"Display help.
libms -h\n
Please see configuration for explanation of configuration conventions. To use .env
as configuration file, run
libms\n
To run libms with a custom config file,
libms -c FILE-PATH\nlibms --config FILE-PATH\n
If the environment file is named something other than .env
, for example as .env.libms
, you can run
libms -c \".env.libms\"\n
You can press Ctl+C
to halt the application. If you wish to run the microservice in the background, use
nohup libms [-c FILE-PATH] & disown\n
The lib microservice is now running and ready to serve files.
This microservice can also serve files in a browser with files transferred over HTTP protocol.
This option needs to be enabled with -H http.json
flag. A sample http config provided here can be used.
nohup libms [-H http.json] & disown\n
The regular file upload and download options become available.
"},{"location":"admin/servers/lib/npm.html#service-endpoints","title":"Service Endpoints","text":"The GraphQL URL: localhost:PORT/lib
The HTTP URL: localhost:PORT/lib/files
The service API documentation is available on user page.
"},{"location":"admin/vagrant/base-box.html","title":"DTaaS Vagrant Box","text":"This README provides instructions on creating a custom Operating System virtual disk for running the DTaaS software. The virtual disk is managed by vagrant. The purpose is two fold:
- Provide cross-platform installation of the DTaaS application. Any operating system supporting use of vagrant software utility can support installation of the DTaaS software.
- Create a ready to use development environment for code contributors.
There are two scripts in this directory:
Script name Purpose Default user.sh
user installation developer.sh
developer installation If you are installing the DTaaS for developers, the default installation caters to your needs. You can skip the next step and continue with the creation of vagrant box.
If you are a developer and would like additional software installed, you need to modify Vagrantfile
. The existing Vagrantfile
has two lines:
config.vm.provision \"shell\", path: \"user.sh\"\n #config.vm.provision \"shell\", path: \"developer.sh\"\n
Uncomment the second line to have more software components installed. If you are not a developer, no changes are required to the Vagrantfile
.
This vagrant box installed for users will have the following items:
- docker v24.0
- nodejs v20.10
- yarn v1.22
- npm v10.2
- containers - ml-workspace-minimal v0.13, traefik v2.10, gitlab-ce v16.4, influxdb v2.7, grafana v10.1, rabbitmq v3-management, eclipse-mosquitto (mqtt) v2, mongodb v7.0
This vagrant box installed for developers will have the following items additional items:
- docker-compose v2.20
- microk8s v1.27
- jupyterlab
- mkdocs
- container - telegraf v1.28
At the end of installation, the software stack created in vagrant box can be visualised as shown in the following figure.
The upcoming instructions will help with the creation of base vagrant box.
#create a key pair\nssh-keygen -b 4096 -t rsa -f vagrant -q -N \"\"\n\nvagrant up\n\n# let the provisioning be complete\n# replace the vagrant ssh key-pair with personal one\nvagrant ssh\n\n# install the oh-my-zsh\nsh -c \"$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\"\n# install plugins: history, autosuggestions,\ngit clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions\n\n# inside ~/.zshrc, modify the following line\nplugins=(git zsh-autosuggestions history cp tmux)\n\n# to replace the default vagrant ssh key-pair with\n# the generated private key into authorized keys\ncp /vagrant/vagrant.pub /home/vagrant/.ssh/authorized_keys\n\n# exit vagrant guest machine and then\n# copy own private key to vagrant private key location\ncp vagrant .vagrant/machines/default/virtualbox/private_key\n\n# check\nvagrant ssh #should work\n\n# exit vagrant guest machine and then\nvagrant halt\n\nvagrant package --base dtaas \\\n--info \"info.json\" --output dtaas.vagrant\n\n# Add box to the vagrant cache in ~/.vagrant.d/boxes directory\nvagrant box add --name dtaas ./dtaas.vagrant\n\n# You can use this box in other vagrant boxes using\n#config.vm.box = \"dtaas\"\n
"},{"location":"admin/vagrant/base-box.html#references","title":"References","text":"Image sources: Ubuntu logo
"},{"location":"admin/vagrant/single-machine.html","title":"DTaaS on Single Vagrant Machine","text":"These are installation instructions for running DTaaS software inside one vagrant Virtual Machine. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space to the vagrant box.
"},{"location":"admin/vagrant/single-machine.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"Create dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant is the private SSH key and is needed for the next steps. Copy vagrant SSH private key into the current directory (deploy/vagrant/single-machine
). This shall be useful for logging into the vagrant machines created for two-machine deployment.
"},{"location":"admin/vagrant/single-machine.html#target-installation-setup","title":"Target Installation Setup","text":"The goal is to use the dtaas Vagrant box to install the DTaaS software on one single vagrant machine. A graphical illustration of a successful installation can be seen here.
There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighed in blue color.
Tip
The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com The gitlab setup is outside the scope this installation guide. Please refer to gitlab docker install for gitlab installation.
"},{"location":"admin/vagrant/single-machine.html#configure-server-settings","title":"Configure Server Settings","text":"A dummy foo.com URL has been used for illustration. Please change this to your unique website URL.
Please follow the next steps to make this installation work in your local environment.
Update the Vagrantfile. Fields to update are:
- Hostname (
node.vm.hostname = \"foo.com\"
) - MAC address (
:mac => \"xxxxxxxx\"
). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged. - Other adjustments are optional.
"},{"location":"admin/vagrant/single-machine.html#installation-steps","title":"Installation Steps","text":"Execute the following commands from terminal
vagrant up\nvagrant ssh\n
Set a cronjob inside the vagrant virtual machine to remote the conflicting default route. Download the route script and run the following command.
sudo bash route.sh\n
Please follow the instructions of regular server installation setup to complete the installation.
"},{"location":"admin/vagrant/single-machine.html#references","title":"References","text":"Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs
"},{"location":"admin/vagrant/two-machines.html","title":"DTaaS on Two Vagrant Machines","text":"These are installation instructions for running DTaaS application in two vagrant virtual machines (VMs). In this setup, all the user workspaces shall be run on server1 while all the platform services will be run on server2.
The setup requires two server VMs with the following hardware configuration:
server1: 16GB RAM, 8 x64 vCPUs and 50GB Hard Disk space
server2: 6GB RAM, 3 x64 vCPUs and 50GB Hard Disk space
Under the default configuration, two user workspaces are provisioned on server1. The default installation setup also installs InfluxDB, Grafana, RabbitMQ and MQTT services on server2. If you would like to install more services, you can create shell scripts to install the same on server2.
"},{"location":"admin/vagrant/two-machines.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"Create dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant is the private SSH key and is needed for the next steps. Copy vagrant SSH private key into the current directory (deploy/vagrant/two-machine
). This shall be useful for logging into the vagrant machines created for two-machine deployment.
"},{"location":"admin/vagrant/two-machines.html#target-installation-setup","title":"Target Installation Setup","text":"The goal is to use this dtaas vagrant box to install the DTaaS software on server1 and the default platform services on server2. Both the servers are vagrant machines.
There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighed in blue and red color.
A graphical illustration of a successful installation can be seen here.
In this case, both the vagrant boxes are spawed on one server using two vagrant configuration files, namely boxes.json and Vagrantfile.
Tip
The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com The gitlab setup is outside the scope this installation guide. Please refer to gitlab docker install for gitlab installation.
"},{"location":"admin/vagrant/two-machines.html#configure-server-settings","title":"Configure Server Settings","text":" A dummy foo.com and services.foo.com URLs has been used for illustration. Please change these to your unique website URLs.
The first step is to define the network identity of the two VMs. For that, you need server name, hostname and MAC address. The hostname is the network URL at which the server can be accessed on the web. Please follow these steps to make this work in your local environment.
Update the boxes.json. There are entries one for each server. The fields to update are:
name
- name of server1 (\"name\" = \"dtaas-two\"
) hostname
- hostname of server1 (\"name\" = \"foo.com\"
) - MAC address (
:mac => \"xxxxxxxx\"
). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged. name
- name of server2 (\"name\" = \"services\"
) hostname
- hostname of server2 (\"name\" = \"services.foo.com\"
) - MAC address (
:mac => \"xxxxxxxx\"
). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged. - Other adjustments are optional.
"},{"location":"admin/vagrant/two-machines.html#installation-steps","title":"Installation Steps","text":"The installation instructions are given separately for each vagrant machine.
"},{"location":"admin/vagrant/two-machines.html#launch-dtaas-platform-default-services","title":"Launch DTaaS Platform Default Services","text":"Follow the installation guide for services to install the DTaaS platform services.
After the services are up and running, you can see the following services active within server2 (services.foo.com).
service external url InfluxDB database services.foo.com Grafana visualization service services.foo.com:3000 MQTT Broker services.foo.com:1883 RabbitMQ Broker services.foo.com:5672 RabbitMQ Broker management website services.foo.com:15672 MongoDB database services.foo.com:27017"},{"location":"admin/vagrant/two-machines.html#install-dtaas-application","title":"Install DTaaS Application","text":"Execute the following commands from terminal
vagrant up\nvagrant ssh\n
Set a cronjob inside the vagrant virtual machine to remote the conflicting default route. Download the route script and run the following command.
sudo bash route.sh\n
Please follow the instructions of regular server installation setup to complete the installation.
"},{"location":"admin/vagrant/two-machines.html#references","title":"References","text":"Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs
"},{"location":"developer/index.html","title":"Developer Guide","text":"This guide is for DTaaS platform developers. It helps development team members get familiar with the DTaaS project software design, and development processes. Please see developer-specific Slides, Video, and Research paper.
"},{"location":"developer/index.html#development-environment","title":"Development Environment","text":"Ideally, developers should work on Ubuntu/Linux Operating System.There is an ongoing effort to bring support for Windows Operating System. But, the development and software installation scripts are still heavily suited to Ubuntu/Linux
Please use the steps given here to install the required software packages.
"},{"location":"developer/index.html#ubuntulinux","title":"Ubuntu/Linux","text":"bash script/env.sh\nbash script/docker.sh\n
The docker images are large and are likely to consume about 5GB of bandwidth and 15GB of space. You will have to download the docker images on a really good network.
"},{"location":"developer/index.html#windows","title":"Windows","text":"Two powershell installation scripts, namely base.ps1
and env.ps1
are available to install the required software packages. But errors might crop up due to missing environment variables. The potential errors are:
npm is not recognized.........
in base.ps1
. gem is not recognized.........
in env.ps1
If you encounter these errors, remember to include node and ruby installation locations in PATH environment variable (Settings --> search for \"system environment variables\"
--> Advanced --> Environment Variables --> PATH
).
The base.ps1
and env.ps1
scripts can be run again after setting the correct PATH environment variable.
"},{"location":"developer/index.html#pre-install-nodejs-and-ruby-software","title":"Pre-install Nodejs and Ruby Software","text":"Another way to solve the PATH environment problem is to install Nodejs and Ruby software packages before running the powershell scripts.
- Install the latest stable version of NodeJS from the official NodeJS website.
- Install Ruby from official Ruby website and follow all the defaults during the installation.
"},{"location":"developer/index.html#run-scripts","title":"Run Scripts","text":"Then, open powershell with administrative priviledges and run the following commands in the given order:
powershell -F script/base.ps1\npowershell -F script/env.ps1\npowershell -F script/docker.ps1\n
The docker images are large and are likely to consume about 5GB of bandwidth and 15GB of space. You will have to download the docker images on a really good network.
"},{"location":"developer/index.html#development-workflow","title":"Development Workflow","text":"To manage collaboration by multiple developers on the software, a development workflow is in place. Each developer should follow these steps:
- Fork of the main repository into your github account.
- Setup Code Climate and Codecov for your fork. The codecov does not require secret token for public repositories.
- Install git-hooks for the project.
- Use Fork, Branch, PR workflow.
- Work in your fork and open a PR from your working branch to your
feature/distributed-demo
branch. The PR will run all the github actions, code climate and codecov checks. - Resolve all the issues identified in the previous step.
- If you have access to the integration server, try your working branch on the integration server.
- Once changes are verified, a PR should be made to the
feature/distributed-demo
branch of the upstream DTaaS repository. - The PR will be merged after checks by either the project administrators or the maintainers.
Remember that every PR should be meaningful and satisfies a well-defined user story or improve the code quality.
"},{"location":"developer/index.html#code-quality","title":"Code Quality","text":"The project code qualities are measured based on:
- Linting issues identified by Code Climate
- Test coverage report collected by Codecov
- Successful github actions
"},{"location":"developer/index.html#code-climate","title":"Code Climate","text":"Code Climate performs static analysis, linting and style checks. Quality checks are performed by codeclimate are to ensure the best possible quality of code to add to our project.
While any new issues introduced in your code would be shown in the PR page itself, to address any specific issue, you can visit the issues or code section of the codeclimate page.
It is highly recommended that any code you add does not introduce new quality issues. If they are introduced, they should be fixed immediately using the appropriate suggestions from Code Climate, or in worst case, adding a ignore flag (To be used with caution).
"},{"location":"developer/index.html#codecov","title":"Codecov","text":"Codecov keeps track of the test coverage for the entire project. For information about testing and workflow related to that, please see the testing page.
"},{"location":"developer/index.html#github-actions","title":"Github Actions","text":"The project has multiple github actions defined. All PRs and direct code commits must have successful status on github actions.
"},{"location":"developer/github-actions.html","title":"Secrets for Github Action","text":"The Github actions require the following secrets to be obtained from docker hub:
Secret Name Explanation DOCKERHUB_SCOPE
Username or organization name on docker hub DOCKERHUB_USERNAME
Username on docker hub DOCKERHUB_TOKEN
API token to publish images to docker hub, with Read
, Write
and Delete
permissions NPM_TOKEN
Token to publish npm packages to the default npm registry. Remember to add these secrets to Github Secrets Setting of your fork.
"},{"location":"developer/npm-packages.html","title":"Publish NPM packages","text":"The DTaaS software is developed as a monorepo with multiple npm packages.
"},{"location":"developer/npm-packages.html#default-npm-registry","title":"Default npm registry","text":"The default registry for npm packages is npmjs. The freely-accessible public packages are published to the npmjs registry. The publication step is manual for the runner.
npm login --registry=\"https://registry.npmjs.org\"\ncat ~/.npmrc #shows the auth token for the registry\n//registry.npmjs.org/:_authToken=xxxxxxxxxx\nyarn publish --registry=\"https://registry.npmjs.org\" \\\n --no-git-tag-version --access public\n
At least one version of runner package is published to this registry for each release of DTaaS.
The publication step for library microservice is automated via github actions.
"},{"location":"developer/npm-packages.html#github-npm-registry","title":"Github npm registry","text":"The Github actions of the project publish packages. The only limitation is that the users need an access token from Github.
"},{"location":"developer/npm-packages.html#private-registry","title":"Private Registry","text":""},{"location":"developer/npm-packages.html#setup-private-npm-registry","title":"Setup private npm registry","text":"Since publishing to npmjs is irrevocable and public, developers are encouraged to setup their own private npm registry for local development. A private npm registry will help with local publish and unpublish steps.
We recommend using verdaccio for this task. The following commands help you create a working private npm registry for development.
docker run -d --name verdaccio -p 4873:4873 verdaccio/verdaccio\nnpm adduser --registry http://localhost:4873 #create a user on the verdaccio registry\nnpm set registry http://localhost:4873/\nyarn config set registry \"http://localhost:4873\"\nyarn login --registry \"http://localhost:4873\" #login with the credentials for yarn utility\nnpm login #login with the credentials for npm utility\n
You can open http://localhost:4873
in your browser, login with the user credentials to see the packages published.
"},{"location":"developer/npm-packages.html#publish-to-private-npm-registry","title":"Publish to private npm registry","text":"To publish a package to your local registry, do:
yarn install\nyarn build #the dist/ directory is needed for publishing step\nyarn publish --no-git-tag-version #increments version in package.json, publishes to registry\nyarn publish #increments version in package.json, publishes to registry and adds a git tag\n
The package version in package.json gets updated as well. You can open http://localhost:4873
in your browser, login with the user credentials to see the packages published. Please see verdaccio docs for more information.
If there is a need to unpublish a package, ex: @dtaas/runner@0.0.2
, do:
npm unpublish --registry http://localhost:4873/ @dtaas/runner@0.0.2\n
To install / uninstall this utility for all users, do:
sudo npm install --registry http://localhost:4873 -g @dtaas/runner\nsudo npm list -g # should list @dtaas/runner in the packages\nsudo npm remove --global @dtaas/runner\n
"},{"location":"developer/npm-packages.html#use-the-packages","title":"Use the packages","text":"The packages available in private npm registry can be used like the regular npm packages installed from npmjs.
For example, to use @dtaas/runner@0.0.2
package, do:
sudo npm install --registry http://localhost:4873 -g @dtaas/runner\nrunner # launch the digital twin runner\n
"},{"location":"developer/client/client.html","title":"React Website","text":"The Website is how the end-users interact with the software platform. The website is being developed as a React single page web application.
A dependency graph for the entire codebase of the react application is:
"},{"location":"developer/client/client.html#dependency-graphs","title":"Dependency Graphs","text":"The figures are the dependency graphs generated from the code.
"},{"location":"developer/client/client.html#src-directory","title":"src directory","text":""},{"location":"developer/client/client.html#test-directory","title":"test directory","text":""},{"location":"developer/docker/docker.html","title":"Docker workflow for DTaaS","text":"This readme will explain the building and use of different docker files for use in development and installation of the DTaaS software.
NOTE: A local docker installation with compose plugin is a pre-requisite for using docker workflows.
"},{"location":"developer/docker/docker.html#folder-structure","title":"Folder Structure","text":"There are two dockerfiles for building the containers:
- client.dockerfile: Dockerfile for building the client application container.
- libms.dockerfile: Dockerfile for building the library microservice container from source code.
- libms.npm.dockerfile: Dockerfile for building the library microservice container from published npm package at npmjs.com. This Dockerfile is only used during publishing. It is used neither in the development builds nor in Github actions.
There is a specific compose file for development:
The compose.dev.yml file is the docker compose file for the development environment.
"},{"location":"developer/docker/docker.html#build-and-publish-docker-images","title":"Build and Publish Docker Images","text":"The github workflows publish docker images of client website and libms to github and docker hub.
"},{"location":"developer/docker/docker.html#developers","title":"Developers","text":"Use of docker images is handy for developers as well. It is suggested that developers build the required images locally on their computer and use them for development purposes. The images can be built using
docker compose -f compose.dev.yml build\n
"},{"location":"developer/docker/docker.html#running-docker-containers","title":"Running Docker Containers","text":"Follow these steps to use the application with docker.
The DTaaS application requires multiple configuration files. The list of configuration files to be modified are given for each scenario.
"},{"location":"developer/docker/docker.html#development-environment","title":"Development Environment","text":"This scenario is for software developers.
The configuration files to be updated are:
- docker/.env : please see docker installation docs for help with updating this config file
- docker/conf.dev : please see docker installation docs for help with updating this config file
- client/config/local.js please see client config for help with updating this config file
- servers/lib/config/.env.default please see lib config for help with updating this config file
The docker commands need to be executed from this directory (docker
). The relevant docker commands are:
docker compose -f compose.dev.yml up -d #start the application\ndocker compose -f compose.dev.yml down #terminate the application\n
"},{"location":"developer/docker/docker.html#access-the-application","title":"Access the Application","text":"You should access the application through the PORT mapped to the Traefik container. e.g. localhost
"},{"location":"developer/oauth/AUTHMS.html","title":"Auth Microservice","text":"This document details the workflow and implementation of the DTaaS Auth Microservice. Please go through the System Design and the summary of the OAuth2.0 technology to be able to understand the content here better.
"},{"location":"developer/oauth/AUTHMS.html#workflow","title":"Workflow","text":""},{"location":"developer/oauth/AUTHMS.html#user-identity-using-oauth20","title":"User Identity using OAuth2.0","text":"We define some constants that will help with the following discussion:
- CLIENT ID: The OAuth2 Client ID of the Auth MS
- CLIENT SECRET: The OAuth2 Client Secret of Auth MS
- REDIRECT URI: The URI where the user is redirected to after the user has approved sharing of information with the client.
- STATE: A random string used as an identifier for the specific \"GET authcode\" request (Figure 3.3)
- AUTHCODE: The one-use-only Authorization code returned by the OAuth2 provider (GitLab instance) in response to \"GET authcode\" after user approval.
Additionally, let's say DTaaS uses a dedicated gitlab instance hosted at the URL https://gitlab.foo.com (instead of https://foo.com)
A successful OAuth2 workflow (Figure 3.3) has the following steps:
- The user requests a resource, say GET/BackendMS
- The Auth MS intercepts this request, and starts the OAuth2 process.
- The Auth MS sends an authorization request to the Gitlab instance.
This is written in shorthand as GET/authcode. The actual request (a user redirect) looks like:
https://gitlab.foo.com/oauth/authorize?\n  response_type=code&\n  client_id=OAUTH_CLIENT_ID&\n  redirect_uri=REDIRECT_URI&\n  scope=read_user&\n  state=STATE\n
Here the gitlab.foo.com/oauth/authorize is the specific endpoint of the Gitlab instance that handles authorisation code requests.
The query parameters in the request include the expected response type, which is fixed as \u201dcode\u201d, meaning that we expect an Authorization code. Other query parameters are the client id, the redirect uri, the scope which is set to read_user for our purpose, and the state (the random string to identify the specific request).
-
The OAuth2 provider redirects the user to the login page. Here the user logs into their protected account with their username/email ID and password.
-
The OAuth2 provider then asks the user to approve/deny sharing the requested information with the Auth MS. The user should approve this for successful authentication.
-
After approval, the user is redirected by the GitLab instance to the REDIRECT URI. This URI has the following form:
REDIRECT_URI?code=AUTHCODE&state=STATE\n
The REDIRECT URI is as defined previously, during the OAuth2 Client initialisation, i.e. the same as the one provided in the \u201dGET authcode\u201d request by the Auth MS. The query parameters are provided by the Gitlab instance. These include the AUTHCODE which is the authorization code that the Auth MS had requested, and the STATE which is the same random string in the \u201dGET authcode\u201d request.
-
The Auth MS retrieves these query parameters. It verifies that the STATE is the same as the random string it provided during the \"GET authcode\" request. This confirms that the AUTHCODE it has received is in response to the specific request it had made.
-
The Auth MS uses this one-use-only AUTHCODE to exchange it for a general access token. This access token wouldn\u2019t be one-use-only, although it would expire after a specified duration of time. To perform this exchange, the Auth MS makes another request to the GitLab instance. This request is written in shorthand as GET/access_token in the sequence diagram. The true form of the request is:
POST https://gitlab.foo.com/oauth/token,\nparameters = 'client_id=OAUTH_CLIENT_ID&\nclient_secret=OAUTH_CLIENT_SECRET&\ncode=AUTHCODE&\ngrant_type=authorization_code&\nredirect_uri=REDIRECT_URI'\n
The request to get a token by exchanging an authorization code is actually a POST request (for most OAuth2 providers). The https://gitlab.foo.com/oauth/token API endpoint handles the token exchange requests. The parameters sent with the POST request are the client ID, the client secret, the AUTHCODE and the redirect uri. The grant type parameter is always set to the string \u201dauthorization_code\u201d, which conveys that we will be exchanging an authorization code for an access token.
- The Gitlab instance exchanges a valid AUTHCODE for an Access Token. This is sent as a response to the Auth MS. An example response is of the following form:
{\n \"access_token\": \"d8aed28aa506f9dd350e54\",\n \"token_type\": \"bearer\",\n \"expires_in\": 7200,\n \"refresh_token\":\"825f3bffb2544b976633a1\",\n \"created_at\": 1607635748\n}\n
The access token field provides the string that can be used as an access token in the headers of requests trying to access user information. The token type field is usually \u201dbearer\u201d, the expires in field specifies the time in seconds for which the access token will be valid, and the
created at field is the Epoch timestamp at which the token was created. The refresh token field has a string that can be used to refresh the access token, increasing its lifetime. However, we do not make use of the refresh token field. If an access token expires, the Auth MS simply asks for a new one. TOKEN is the access token string returned in the response.
-
The Auth MS has finally obtained an access token that it can use to retrieve the user\u2019s information. Note that if the Auth MS already had an existing valid access token for information about this user, the steps above wouldn\u2019t be necessary, and thus wouldn\u2019t be performed by the Auth MS. The steps till now in the sequence diagram are simply to get a valid access token for the user information.
-
The Auth MS makes a final request to the Gitlab instance, shorthanded as GET user_details in the sequence diagram. The actual request is of the form:
GET https://gitlab.foo.com/api/v4/user\n\"Authorization\": Bearer <TOKEN>\n
Here, https://gitlab.foo.com/api/v4/user is the API endpoint that responds with user information. An authorization header is required on the request, with a valid access token. The required header is added here, and TOKEN is the access token that the Auth MS holds.
- The Gitlab instance verifies the access token, and if it is valid, responds with the required user information. This includes username, email ID, etc. An example response looks like:
{\n \"id\": 8,\n \"username\": \"UserX\",\n \"name\": \"XX\",\n \"state\": \"active\",\n \"web_url\": \"http://gitlab.foo.com/UserX\",\n \"created_at\":\"2023-12-03 T10:47:21.970 Z\",\n \"bio\": \"\",\n \"location\": \"\",\n \"public_email\": null,\n \"skype\": \"\",\n \"linkedin\": \"\",\n \"twitter\": \"\",\n \"organization\": \"\",\n \"job_title\": \"\",\n \"work_information\": null,\n \"followers\": 0,\n \"following\": 0,\n \"is_followed \": false,\n \"local_time\": null,\n \"last_sign_in_at\": \"2023-12-13 T12:46:21.223 Z\",\n \"confirmed_at\": \"2023-12-03 T10:47:21.542 Z \",\n \"last_activity_on\": \"2023-12-13\",\n \"email\": \"UserX@localhost\",\n \"projects_limit\": 100000,\n}\n
The important fields from this response are the \u201demail\u201d, \u201dusername\u201d keys. These keys are unique to a user, and thus provide an identity to the user.
- The Auth MS retrieves the values of candidate key fields like \u201demail\u201d, \u201dusername\u201d from the response. Thus, the Auth MS now knows the identity of the user.
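For a manual check, the token exchange and user-information requests described above can be reproduced with curl; all values shown (client credentials, AUTHCODE, redirect uri and token) are placeholders.
curl -s -X POST \"https://gitlab.foo.com/oauth/token\" \\\n  --data \"client_id=OAUTH_CLIENT_ID\" \\\n  --data \"client_secret=OAUTH_CLIENT_SECRET\" \\\n  --data \"code=AUTHCODE\" \\\n  --data \"grant_type=authorization_code\" \\\n  --data \"redirect_uri=REDIRECT_URI\"\n# use the access_token from the JSON response in the user-information request\ncurl -s \"https://gitlab.foo.com/api/v4/user\" --header \"Authorization: Bearer <TOKEN>\"\n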
"},{"location":"developer/oauth/AUTHMS.html#checking-user-permissions-authorization","title":"Checking User permissions - Authorization","text":"An important feature of the Auth MS is to implement access policies for DTaaS resources. We may have requirements that certain resources and/or microservices in DTaaS should only be accessible to certain users. For example, we may want that /BackendMS/user1 should only be accessible to the user who has username user1. Another example may be that we may want /BackendMS/group3 to only be available to users who have an email ID in the domain @gmail.com. The Auth MS should be able to impose these restrictions and make certain services selectively available to certain users. There are two steps to doing this:
-
Firstly, the user\u2019s identity should be known and trusted. The Auth MS should know the identity of a user and believe that the user is who they claim to be. This has been achieved in the previous section
-
Secondly, this identity should be analysed against certain rules or against a database of allowed users, to determine whether this user should be allowed to access the requested resource.
The second step requires, for every service, either a set of rules that define which users should be allowed access to the service, or a database of user identities that are allowed to access the service. This database and/or set of rules should use the user identities, in our case the email ID or username, to decide whether the user should be allowed or not. This means that the rules should be built based on the kind of username/email ID the user has, for example using a regular expression (RegEx). In the case of a database, the database should have the user identity as a key. For any service, we can simply look up if the key exists in the database or not and allow/deny the user access based on that.
In the sequence diagram, the Auth MS has a self-request marked as \u201dChecks user permissions\u201d after receiving the user identity from the Gitlab instance. This is when the Auth MS compares the identity of the user to the rules and/or database it has for the requested service. Based on this, if the given identity has access to the requested resource, the Auth MS responds with a 200 OK. This finally marks a successful authentication, and the user can now access the requested resource. Note: Again, the Auth MS and user do not communicate directly. All requests/responses of the Auth MS are with the Traefik gateway, not the User directly. In fact, the Auth MS is the external server used by the ForwardAuth middleware of the specific route, and communicates with this middleware. If the authentication is successful, the gateway forwards the request to the specific resource when the 200 OK is received, else it drops the request and returns the error code to the user.
"},{"location":"developer/oauth/AUTHMS.html#implementation","title":"Implementation","text":""},{"location":"developer/oauth/AUTHMS.html#traefik-forward-auth","title":"Traefik-forward-auth","text":"The implementation approach is setting up and configuring the open source thomseddon/traefik-forward-auth for our specific use case. This would work as our Auth microservice.
The traefik-forward-auth software is available as a docker.io image. This works as a docker container. Thus there are no dependency management issues. Additionally, it can be added as a middleware server to traefik routers. Thus, it needs at least Traefik to work properly alongside it. It also needs active services that it will be controlling access to. Traefik, the traefik-forward-auth service and any backend services are thus treated as a stack of docker containers. The main setup needed for this system is configuring the compose.yml file.
There are three main steps of configuring the Auth MS properly.
- The traefik-forward-auth service needs to be configured carefully. Firstly, we set the environment variables for our specific case. Since, we are using Gitlab, we use the generic-oauth provider configuration. Some important variables that are required are the OAuth2 Client ID, Client Secret, Scope. The API endpoints for getting an AUTHCODE, exchanging the code for an access token and getting user information are also necessary
Additionally, it is necessary to create a router that handles the REDIRECT URI path. This router should have a middleware which is set to traefik-forward-auth itself. This is so that after approval, when the user is taken to REDIRECT URI, this can be handled by the gateway and passed to the Auth service for token exchange. We add the ForwardAuth middleware here, which is a necessary part of our design as discussed before. We also add a load balancer for the service. We also need to add a conf file as a volume, for selective authorization rules (discussed later). This is according to the suggested configuration. Thus, we add the following to our docker services:
traefik-forward-auth:\n  image: thomseddon/traefik-forward-auth:latest\n  volumes:\n    - <filepath>/conf:/conf\n  environment:\n    - DEFAULT_PROVIDER=generic-oauth\n    - PROVIDERS_GENERIC_OAUTH_AUTH_URL=https://gitlab.foo.com/oauth/authorize\n    - PROVIDERS_GENERIC_OAUTH_TOKEN_URL=https://gitlab.foo.com/oauth/token\n    - PROVIDERS_GENERIC_OAUTH_USER_URL=https://gitlab.foo.com/api/v4/user\n    - PROVIDERS_GENERIC_OAUTH_CLIENT_ID=OAUTH_CLIENT_ID\n    - PROVIDERS_GENERIC_OAUTH_CLIENT_SECRET=OAUTH_CLIENT_SECRET\n    - PROVIDERS_GENERIC_OAUTH_SCOPE=read_user\n    - SECRET=a-random-string\n    # INSECURE_COOKIE is required if\n    # not using a https entrypoint\n    - INSECURE_COOKIE=true\n  labels:\n    - \"traefik.enable=true\"\n    - \"traefik.http.routers.redirect.entryPoints=web\"\n    - \"traefik.http.routers.redirect.rule=PathPrefix(`/_oauth`)\"\n    - \"traefik.http.routers.redirect.middlewares=traefik-forward-auth\"\n    - \"traefik.http.middlewares.traefik-forward-auth.forwardauth.address=http://traefik-forward-auth:4181\"\n    - \"traefik.http.middlewares.traefik-forward-auth.forwardauth.authResponseHeaders=X-Forwarded-User\"\n    - \"traefik.http.services.traefik-forward-auth.loadbalancer.server.port=4181\"\n
- The traefik-forward-auth service should be added to the backend services as a middleware.
To do this, the docker-compose configurations of the services need to be updated by adding the following lines:
- \"traefik.http.routers.<service-router>.rule=Path(/<path>)\"\n - \"traefik.http.routers.<service-router>.middlewares=traefik-forward-auth\"\n
This creates a router that maps to the required route, and adds the auth middleware to the required route.
-
Finally, we need to set user permissions on user identities by creating rules in the conf file. Each rule has a name (an identifier for the rule), and an associated route for which the rule will be invoked. The rule also has an action property, which can be either \u201dauth\u201d or \u201dallow\u201d. If action is set to \u201dallow\u201d, any requests on this route are allowed to bypass even the OAuth2 identification. If the action is set to \u201dauth\u201d, requests on this route will require User identity OAuth2 and the system will follow the sequence diagram. For rules with action=\u201dauth\u201d, the user information is retrieved. The identity we use for a user is the user\u2019s email ID. For \u201dauth\u201d rules, we can configure two types of User restrictions/permissions on this identity:
-
Whitelist - This would be a list of user identities (email IDs in our case) that are allowed to access the corresponding route.
- Domain - This would be a domain (example: gmail.com), and only email IDs (user identities) of that domain (example: johndoe@gmail.com) would be allowed access to the corresponding route.
Configuring either of these two properties of an \u201dauth\u201d rule allows us to selectively permit access to certain users for certain resources. Not configuring either of these properties for an \u201dauth\u201d rule means that the OAuth2 process is carried out and the user identity is retrieved, but all known user identities (i.e. all users that successfully complete the OAuth) are allowed to access the resource.
DTaaS currently uses only the whitelist type of rules.
These rules can be used in 3 different ways described below. The exact format of lines to be added to the conf file are also shown.
- No Auth - Serves the Path(`/public`) route. A rule with action=\u201dallow\u201d should be imposed on this.
rule.noauth.action=allow\nrule.noauth.rule=Path(`/public`)\n
- User specific: Serves the Path(`/user1`) route. A rule that only allows the \u201duser1@localhost\u201d identity should be imposed on this.
rule.onlyu1.action=auth\nrule.onlyu1.rule=Path(`/user1`)\nrule.onlyu1.whitelist=user1@localhost\n
- Common Auth - Serves the Path(`/common`) route. A rule that requires OAuth, i.e. with action=\u201dauth\u201d, but allows all valid and known user identities should be imposed on this.
rule.all.action=auth\nrule.all.rule=Path(`/common`)\n
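A quick sanity check of these rules, assuming the stack above is running and reachable at localhost, is to issue unauthenticated requests against the routes; the protected route should be redirected towards the Gitlab authorization endpoint, while the public route is served directly.
curl -i http://localhost/public   # served without OAuth2\ncurl -i http://localhost/common   # expect a redirect towards gitlab.foo.com/oauth/authorize\n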
"},{"location":"developer/oauth/DESIGN.html","title":"System Design of DTaaS Authorization Microservice","text":"DTaaS requires backend authorization to protect its backend services and user workspaces. This document details the system design of the DTaaS Auth Microservice which is responsible for the same.
"},{"location":"developer/oauth/DESIGN.html#requirements","title":"Requirements","text":"For our purpose, we require the Auth MS to be able to handle only requests of the general form \u201dIs User X allowed to access /BackendMS/example?\u201d.
If the user\u2019s identity is correctly verified through the GitLab OAuth2 provider AND this user is allowed to access the requested microservice/action, then the Auth MS should respond with a 200 (OK) code and let the request pass through the gateway to the required microservice/server.
If the user\u2019s identity verification through GitLab OAuth2 fails OR this user is not permitted to access the requested resource, then the Auth MS should respond with a 40X (NOT OK) code, and restrict the request from going forward.
"},{"location":"developer/oauth/DESIGN.html#forward-auth-middleware-in-traefik","title":"Forward Auth Middleware in Traefik","text":"Traefik allows middlewares to be set for the routes configured into it. These middlewares intercept the route path requests, and perform analysis/modifications before sending the requests ahead to the services. Traefik has a ForwardAuth middleware that delegates authentication to an external service. If the external authentication server responds to the middleware with a 2XX response codes, the middleware acts as a proxy, letting the request pass through to the desired service. However, if the external server responds with any other response code, the request is dropped, and the response code returned by the external auth server is returned to the user
(source: Traefik documentation)
Thus, an Auth Microservice can be integrated into the existing gateway and DTaaS system structure easily by adding it as the external authentication server for ForwardAuth middlewares. These middlewares can be added on whichever routes/requests require authentication. For our specific purpose, this will be added to all routes since we impose at least identity verification of users for any request through the gateway.
"},{"location":"developer/oauth/DESIGN.html#auth-ms-design","title":"Auth MS Design","text":"The integrated Auth MS should thus work as described in the sequence diagram.
-
Any request made by the user is made on the React website, i.e. the frontend of the DTaaS software.
-
This request then goes through the Traefik gateway. Here it should be interrupted by the respective ForwardAuth middleware.
-
The middleware asks the Auth MS if this request for the given user should be allowed.
-
The Auth MS, i.e. the Auth server verifies the identity of the user using OAuth2 with GitLab, and checks if this user should be allowed to make this request.
-
If the user is verified and allowed to make the request, the Auth server responds with a 200 OK to Traefik Gateway (more specifically to the middleware in Traefik)
-
Traefik then forwards this request to the respective service. A response by the service, if any, will be passed through the chain back to the user.
-
However, If the user is not verified or not allowed to make this request, the Auth server responds with a 40x to Traefik gateway.
-
Traefik will then drop the request and respond to the Client informing that the request was forbidden. It will also pass on the Auth server\u2019s response code.
"},{"location":"developer/oauth/OAUTH2.0.html","title":"OAuth 2.0 Summary","text":"The Auth MS works on the OAuth 2.0 RFC. This document provides a brief summary of the working of the OAtuh 2.0 technology.
"},{"location":"developer/oauth/OAUTH2.0.html#entities","title":"Entities","text":"OAuth2, as used for user identity verification, has 3 main entities:
- The User: This is the entity whose identity we are trying to verify/know. In our case, this is the same as the user of the DTaaS software.
- The Client: This is the entity that wishes to know/verify the identity of a user. In our case, this is the Auth MS (initialised with a Gitlab application). This shouldn\u2019t be confused with the frontend website of DTaaS (referred to as Client in the previous section).
- The OAuth2 Identity Provider: This is the entity that allows the client to know the identity of the user. In our case, this is GitLab. Most commonly, users have an existing, protected account with this entity. The account is registered using a unique key, like an email ID or username and is usually password protected so that only that specific user can login using that account. After the user has logged in, they will be asked to approve sharing their profile information with the client. If they approve, the client will have access to the user\u2019s email id, username, and other profile information. This information can be used to know/verify the identity of the user.
Note: In general, it is possible for the Authorization server (which asks user for approval) and the Resource (User Identity) provider to be 2 different servers. However, in our case the Gitlab instance itself handles both the functions, through different API endpoints. The concepts remain the same. Thus, we only discuss the 3 main entities, the User, the OAuth2 Client and the Gitlab instance in our discussion.
"},{"location":"developer/oauth/OAUTH2.0.html#the-oauth2-client","title":"The OAuth2 Client","text":"Many sites allow you to initialise an OAuth2 client. For our purposes, we will use Gitlab itself, by making an \u201dapplication\u201d in Gitlab. However, it is not necessary to initialise a client using the same website as the identity provider. These are separate things. Our OAuth2 client is initialized by creating and configuring a Gitlab instance-wide application. There are two main things in this configuration:
- Redirect URI: It is the URI where the users are redirected to after they approve sharing information with the client.
- Scopes: These are the types and levels of access that the client can have over the user\u2019s profile. For our purposes, we only require the read_user scope, which allows us to access the user\u2019s profile information for knowing the identity.
After the GitLab application is successfully created, we are provided a Client ID and Client Secret. This means our initialization is complete. This Client ID and Client Secret can be used in any application, essentially making that application the OAuth2 Client. This is why the Client secret should never be shared. We will use this Client ID and Client secret in our Auth MS, making it an OAuth2 Client application. It will now be able to follow the OAuth2 workflow to verify the identity of users.
"},{"location":"developer/oauth/OAUTH2.0.html#oauth-20-workflows","title":"OAuth 2.0 Workflows","text":"Two major different OAuth2.0 flows are used in DTaaS.
"},{"location":"developer/oauth/OAUTH2.0.html#oauth2-authorization-code-flow","title":"OAuth2 Authorization Code Flow","text":"This flow involves several steps and the exchange of an authorization code for an access tokens to ensure secure authorization. This flow is used for the DTaaS AuthMS, which is responsible for securing all backend DTaaS services
The OAuth2 workflow is initiated by the Client (Auth MS) whenever it requires knowing the identity of the user. Briefly, the flow starts when the Auth MS sends an authorization request to Gitlab. The Auth MS tries to obtain an access token, using which it can gather user information. Once it has user information, it can know the identity of the user and check whether the user has permission to access the requested resource.
The requests made by the Auth MS to the OAuth2 provider are abbreviated. A detailed explanation of the workflow for DTaaS specifically can be found in the AuthMS implementation docs
"},{"location":"developer/oauth/OAUTH2.0.html#oauth2-pkce-proof-key-for-code-exchange-flow","title":"OAuth2 PKCE (Proof Key for Code Exchange) Flow","text":"This is an extension to the OAuth2 Authorization Code Flow designed to provide an additional layer of security, particularly for public clients that cannot securely store client secrets. PKCE mitigates certain attack vectors, like authorization code interception.
The DTaaS client website login works based on the PKCE OAuth2.0 flow. More information about the details of this flow can be found here
"},{"location":"developer/servers/lib/lib-ms.html","title":"Library Microservice","text":"The Library Microservices provides users with access to files in user workspaces via API. This microservice will interface with local file system and Gitlab to provide uniform Gitlab-compliant API access to files.
Warning
This microservice is still under heavy development. It is still not a good replacement for the file server we are using now.
"},{"location":"developer/servers/lib/lib-ms.html#architecture-and-design","title":"Architecture and Design","text":"The C4 level 2 diagram of this microservice is:
The GraphQL API provided by the library microservice shall be compliant with the Gitlab GraphQL service.
"},{"location":"developer/servers/lib/lib-ms.html#uml-diagrams","title":"UML Diagrams","text":""},{"location":"developer/servers/lib/lib-ms.html#class-diagram","title":"Class Diagram","text":"classDiagram\n class FilesResolver {\n -filesService: IFilesService\n +listDirectory(path: string): Promise<Project>\n +readFile(path: string): Promise<Project>\n }\n\n class FilesServiceFactory {\n -configService: ConfigService\n -localFilesService: LocalFilesService\n +create(): IFilesService\n }\n\n class LocalFilesService {\n -configService: ConfigService\n -getFileStats(fullPath: string, file: string): Promise<Project>\n +listDirectory(path: string): Promise<Project>\n +readFile(path: string): Promise<Project>\n }\n\n class ConfigService {\n +get(propertyPath: string): any\n }\n\n class IFilesService{\n listDirectory(path: string): Promise<Project>\n readFile(path: string): Promise<Project>\n }\n\n IFilesService <|-- FilesResolver: uses\n IFilesService <|.. LocalFilesService: implements\n IFilesService <|-- FilesServiceFactory: creates\n ConfigService <|-- FilesServiceFactory: uses\n ConfigService <|-- LocalFilesService: uses
"},{"location":"developer/servers/lib/lib-ms.html#sequence-diagram","title":"Sequence Diagram","text":"sequenceDiagram\n actor Client\n actor Traefik\n\n box LightGreen Library Microservice\n participant FR as FilesResolver\n participant FSF as FilesServiceFactory\n participant CS as ConfigService\n participant IFS as IFilesService\n participant LFS as LocalFilesService\n end\n\n participant FS as Local File System DB\n\n Client ->> Traefik : HTTP request\n Traefik ->> FR : GraphQL query\n activate FR\n\n FR ->> FSF : create()\n activate FSF\n\n FSF ->> CS : getConfiguration(\"MODE\")\n activate CS\n\n CS -->> FSF : return configuration value\n deactivate CS\n\n alt MODE = Local\n FSF ->> FR : return filesService (LFS)\n deactivate FSF\n\n FR ->> IFS : listDirectory(path) or readFile(path)\n activate IFS\n\n IFS ->> LFS : listDirectory(path) or readFile(path)\n activate LFS\n\n LFS ->> CS : getConfiguration(\"LOCAL_PATH\")\n activate CS\n\n CS -->> LFS : return local path\n deactivate CS\n\n LFS ->> FS : Access filesystem\n alt Filesystem error\n FS -->> LFS : Filesystem error\n LFS ->> LFS : Throw new InternalServerErrorException\n LFS -->> IFS : Error\n else Successful file operation\n FS -->> LFS : Return filesystem data\n LFS ->> IFS : return Promise<Project>\n end\n deactivate LFS\n activate IFS\n end\n\n alt Error thrown\n IFS ->> FR : return Error\n deactivate IFS\n FR ->> Traefik : return Error\n Traefik ->> Client : HTTP error response\n else Successful operation\n IFS ->> FR : return Promise<Project>\n deactivate IFS\n FR ->> Traefik : return Promise<Project>\n Traefik ->> Client : HTTP response\n end\n\n deactivate FR\n
"},{"location":"developer/servers/lib/lib-ms.html#dependency-graphs","title":"Dependency Graphs","text":"The figures are the dependency graphs generated from the code.
"},{"location":"developer/servers/lib/lib-ms.html#src-directory","title":"src directory","text":""},{"location":"developer/servers/lib/lib-ms.html#test-directory","title":"test directory","text":""},{"location":"developer/system/architecture.html","title":"System Overview","text":""},{"location":"developer/system/architecture.html#user-requirements","title":"User Requirements","text":"The DTaaS software platform users expect a single platform to support the complete DT lifecycle. To be more precise, the platform users expect the following features:
- Author \u2013 create different assets of the DT on the platform itself. This step requires use of some software frameworks and tools whose sole purpose is to author DT assets.
- Consolidate \u2013 consolidate the list of available DT assets and authoring tools so that user can navigate the library of reusable assets. This functionality requires support for discovery of available assets.
- Configure \u2013 support selection and configuration of DTs. This functionality also requires support for validation of a given configuration.
- Execute \u2013 provision computing infrastructure on demand to support execution of a DT.
- Explore \u2013 interact with a DT and explore the results stored both inside and outside the platform. Exploration may lead to analytical insights.
- Save \u2013 save the state of a DT that\u2019s already in the execution phase. This functionality is required for on demand saving and re-spawning of DTs.
- Services \u2013 integrate DTs with on-platform or external services with which users can interact.
- Share \u2013 share a DT with other users of their organisation.
"},{"location":"developer/system/architecture.html#system-architecture","title":"System Architecture","text":"The figure shows the system architecture of the the DTaaS software platform.
"},{"location":"developer/system/architecture.html#system-components","title":"System Components","text":"The users interact with the software platform using a webapp. The service router is a single point of entry for direct access to the platform services. The service router is responsible for controlling user access to the microservice components. The service mesh enables discovery of microservices, load balancing and authorization functionalities.
In addition, there are microservices for managing DT reusable assets, platform services, DT lifecycle, DT execution, accounting and security. The microservices are complementary and composable; they fulfil the core requirements of the system.
The microservices responsible for satisfying the user requirements are:
- The security microservice implements role-based access control (RBAC) in the platform.
- The accounting microservice is responsible for keeping track of the live status of platform, DT asset and infrastructure usage. Any licensing, usage restrictions need to be enforced by the accounting microservice. Accounting is a pre-requisite to commercialisation of the platform. Due to significant use of external infrastructure and resources via the platform, the accounting microservice needs to interface with accounting systems of the external services.
- User Workspaces are virtual environments in which users can perform lifecycle operations on DTs. These virtual environments are either docker containers or virtual machines which provide desktop interface to users.
- Reusable Assets are assets / parts from which DTs are created. Further explanation is available on the assets page.
- Services are dedicated services available to all the DTs and users of the DTaaS platform. Services build upon DTs and provide user interfaces to users.
- DT Execution Manager provides virtual and isolated execution environments for DTs. The execution manager is also responsible for dynamic resource provisioning of cloud resources.
- DT Lifecycle Manager manages the lifecycle operations on all DTs. It also directs DT Execution Manager to perform execute, save and terminate operations on DTs.
If you are interested, please take a look at the C4 architectural diagram.
A mapping of the architectural components to related pages in the documentation is available in the table.
System Component Doc Page(s) Service Router Traefik Gateway Web Application React Webapplication Reusable Assets Library Microservice Digital Twins and DevOps Integrated Gitlab Services Third-party Services (MQTT, InfluxDB, RabbitMQ, Grafana and MongoDB) DT Lifecycle Not available yet Security Gitlab client OAuth and server OAuth Digital Twins as Services DT Runner Accounting Not available yet Execution Manager Not available yet"},{"location":"developer/system/architecture.html#references","title":"References","text":"Font sources: fileformat
"},{"location":"developer/system/current-status.html","title":"Current Status","text":"The DTaaS software platform is currently under development. Crucial system components are in place with ongoing development work focusing on increased automation and feature enhancement. The figure below shows the current status of the development work.
A C4 representation of the same diagram is also available.
"},{"location":"developer/system/current-status.html#user-security","title":"User Security","text":"There is a two-level authorization mechanisms in place for the react website and the Traefik gateway.
The react website component uses Gitlab for user authorization using OAuth protocol.
"},{"location":"developer/system/current-status.html#gateway-authorization","title":"Gateway Authorization","text":"The Traefik gateway has OAuth2 web server authorization provided by Traefik-forward-auth microservice. This authorization protects all the microservices and workspaces running in the backend.
"},{"location":"developer/system/current-status.html#user-workspaces","title":"User Workspaces","text":"All users have dedicated dockerized-workspaces. These docker-images are based on container images published by mltooling group.
Thus DT experts can develop DTs from existing DT components and share them with other users. A file server has been setup to act as a DT asset repository. Each user gets space to store private DT assets and also gets access to shared DT assets. Users can synchronize their private DT assets with external git repositories. In addition, the asset repository transparently gets mapped to user workspaces within which users can perform DT lifecycle operations. There is also a library microservice which in the long-run will replace the file server.
Users can run DTs in their workspaces and also permit remote access to other users. There is already shared access to internal and external services. With these two provisions, users can treat live DTs as service components in their own software systems.
"},{"location":"developer/system/current-status.html#platform-services","title":"Platform Services","text":"There are four external services integrated with the DTaaS software platform. They are: InfluxDB, Grafana, RabbitMQ, MQTT, and MongoDB.
These services can be used by DTs and PTs for communication, storing and visualization of data. There can also be monitoring services setup based on these services.
"},{"location":"developer/system/current-status.html#development-priorities","title":"Development Priorities","text":"The development priorities for the DTaaS software development team are:
- DevOps from React frontend website
- Upgrade software stack of user workspaces
- Increased automation of installation procedures
- DT Runner (API Interface to DT)
- DT Configuration DSL in the form of a YAML schema
Your contributions are highly welcome.
"},{"location":"developer/system/current-status.html#references","title":"References","text":"Font sources: fileformat
"},{"location":"developer/testing/intro.html","title":"Testing","text":""},{"location":"developer/testing/intro.html#common-questions-on-testing","title":"Common Questions on Testing","text":""},{"location":"developer/testing/intro.html#what-is-software-testing","title":"What is Software Testing","text":"Software testing is a procedure to investigate the quality of a software product in different scenarios. It can also be stated as the process of verifying and validating that a software program or application works as expected and meets the business and technical requirements that guided design and development.
"},{"location":"developer/testing/intro.html#why-software-testing","title":"Why Software Testing","text":"Software testing is required to point out the defects and errors that were made during different development phases. Software testing also ensures that the product under test works as expected in all different cases \u2013 stronger the test suite, stronger is our confidence in the product that we have built. One important benefit of software testing is that it facilitates the developers to make incremental changes to source code and make sure that the current changes are not breaking the functionality of the previously existing code.
"},{"location":"developer/testing/intro.html#what-is-tdd","title":"What is TDD","text":"TDD stands for Test Driven Development. It is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards. The goal of TDD can be viewed as specification and not validation. In other words, it\u2019s one way to think through your requirements or design before your write your functional code.
"},{"location":"developer/testing/intro.html#what-is-bdd","title":"What is BDD","text":"BDD stands for \u201cBehaviour Driven Development\u201d. It is a software development process that emerged from TDD. It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. This provides software development and management teams with shared tools and a shared process to collaborate on software development. BDD is largely facilitated through the use of a simple domain-specific language (DSL) using natural language constructs (e.g., English-like sentences) that can express the behavior and the expected outcomes. Mocha and Cucumber testing libraries are built around the concepts of BDD.
"},{"location":"developer/testing/intro.html#testing-workflow","title":"Testing workflow","text":"(Ref: Ham Vocke, The Practical Test Pyramid)
We follow a testing workflow in accordance with the test pyramid diagram given above, starting with isolated tests and moving towards complete integration for any new feature changes. The different types of tests (in the order that they should be performed) are explained below:
"},{"location":"developer/testing/intro.html#unit-tests","title":"Unit Tests","text":"Unit testing is a level of software testing where individual units/ components of a software are tested. The objective of Unit Testing is to isolate a section of code and verify its correctness.
Ideally, each test case is independent from the others. Substitutes such as method stubs, mock objects, and spies can be used to assist testing a module in isolation.
"},{"location":"developer/testing/intro.html#benefits-of-unit-testing","title":"Benefits of Unit Testing","text":" - Unit testing increases confidence in changing/ maintaining code. If good unit tests are written and if they are run every time any code is changed, we will be able to promptly catch any defects introduced due to the change.
- If codes are already made less interdependent to make unit testing possible, the unintended impact of changes to any code is less.
- The cost, in terms of time, effort and money, of fixing a defect detected during unit testing is lesser in comparison to that of defects detected at higher levels.
"},{"location":"developer/testing/intro.html#unit-tests-in-dtaas","title":"Unit Tests in DTaaS","text":"Each component DTaaS project uses unique technology stack. Thus the packages used for unit tests are different. Please check the test/
directory of a component to figure out the unit test packages used.
"},{"location":"developer/testing/intro.html#integration-tests","title":"Integration tests","text":"Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. In DTaaS, we use an integration server for software development as well as such tests.
The existing integration tests are done at the component level. There are no integration tests between the components. This task has been postponed to future.
"},{"location":"developer/testing/intro.html#end-to-end-tests","title":"End-to-End tests","text":"Testing any code changes through the end user interface of your software is essential to verify if your code has the desired effect for the user. End-to-End tests in DTaaS a functional setup.
There are end-to-end tests in the DTaaS. This task has been postponed to future.
"},{"location":"developer/testing/intro.html#feature-tests","title":"Feature Tests","text":"A Software feature can be defined as the changes made in the system to add new functionality or modify the existing functionality. Each feature is said to have a characteristics that is designed to be useful, intuitive and effective. It is important to test a new feature when it has been added. We also need to make sure that it does not break the functionality of already existing features. Hence feature tests prove to be useful.
The DTaaS project does not have any feature tests yet. Cucumber shall be used in future to implement feature tests.
"},{"location":"developer/testing/intro.html#references","title":"References","text":" - Arthur Hicken, Shift left approach to software testing
- Justin Searls and Kevin Buchanan, Contributing Tests wiki.
- This wiki has good explanation of TDD and test doubles.
"},{"location":"user/features.html","title":"Overview","text":""},{"location":"user/features.html#advantages","title":"Advantages","text":"The DTaaS software platform provides certain advantages to users:
- Support for different kinds of Digital Twins
- CFD, Simulink, co-simulation, FEM, ROM, ML etc.
- Integrates with other Digital Twin frameworks
- Facilitate availability of Digital Twin as a Service
- Collaboration and reuse
- Private workspaces for verification of reusable assets, trial run DTs
- Cost effectiveness
"},{"location":"user/features.html#software-features","title":"Software Features","text":"Each installation of DTaaS platform comes with the features highlighted in the following picture.
All the users have dedicated workspaces. These workspaces are dockerized versions of Linux Desktops. The user desktops are isolated so the installations and customizations done in one user workspace do not effect the other user workspaces.
Each user workspace comes with some development tools pre-installed. These tools are directly accessible from web browser. The following tools are available at present:
Tool Advantage Jupyter Lab Provides flexible creation and use of digital twins and their components from web browser. All the native Jupyterlab usecases are supported here. Jupyter Notebook Useful for web-based management of their files (library assets) VS Code in the browser A popular IDE for software development. Users can develop their digital twin-related assets here. ungit An interactive git client. Users can work with git repositories from web browser In addition, users have access to xfce-based remote desktop via VNC client. The VNC client is available right in the web browser. The xfce supported desktop software can also be run in their workspace.
The DTaaS software platform has some pre-installed services available. The currently available services are:
Service Advantage InfluxDB Time-series database primarly for storing time-series data from physical twins. The digital twins can use an already existing data. Users can also create visualization dashboards for their digital twins. RabbitMQ Communication broker for communication between physical and digital twins Grafana Visualization dashboards for their digital twins. MQTT Lightweight data transfer broker for IoT devices / physical twins feeding data into digital twins. MongoDB NoSQL document database for storing metadata of data from physical twins In addition, the workspaces are connected to the Internet so all the Digital Twins running in the workspace can interact with both the internal and external services.
The users can publish and reuse the digital twin assets available on the platform. In addition, users can run their digital twins and make these live digital twins available as services to their clients. The clients need not be users of the DTaaS software installation.
"},{"location":"user/motivation.html","title":"DTaaS for Users","text":""},{"location":"user/motivation.html#user-guide","title":"User Guide","text":"This guide is for users of the DTaaS platform. Please make sure that you have access to a live installation of the DTaaS platform. The easiest is localhost installation scenario.
Please see user-specific Slides and Video to get the conceptual framework behind composable digital twins in the DTaaS platform.
"},{"location":"user/motivation.html#motivation","title":"Motivation","text":"How can DT software platforms enable users collaborate to:
- Build digital twins (DTs)
- Use DTs themselves
- Share DTs with other users
- Provide the existing DTs as Service to other users
In addition, how can the DT software platforms:
- Support DT lifecycle
- Scale up rather than scale down (flexible convention over configuration)
"},{"location":"user/motivation.html#existing-approaches","title":"Existing Approaches","text":"There are quite a few solutions proposed in the recent past to solve this problem. Some of them are:
- Focus on data from Physical Twins (PTs) to perform analysis, diagnosis, planning etc\u2026
- Share DT assets across the upstream, downstream etc\u2026.
- Evaluate different models of PT
- DevOps for Cyber Physical Systems (CPS)
- Scale DT / execution of DT / ensemble of related DTs
- Support for PT product lifecycle
"},{"location":"user/motivation.html#our-approach","title":"Our Approach","text":" - Support for transition from existing workflows to DT frameworks
- Create DTs from reusable assets
- Enable users to share DT assets
- Offer DTs as a Service
- Integrate the DTs with external software systems
- Separate configurations of independent DT components
"},{"location":"user/digital-twins/create.html","title":"Create a Digital Twin","text":"The first step in digital twin creation is to use the available assets in your workspace. If you have assets / files in your computer that need to be available in the DTaaS workspace, then please follow the instructions provided in library assets.
There are dependencies among the library assets. These dependencies are shown below.
A digital twin can only be created by linking the assets in a meaningful way. This relationship can be expressed using a mathematical equation:
where D denotes data, M denotes models, F denotes functions, T denotes tools, denotes DT configuration and is a symbolic notation for a digital twin itself. The expression denotes composition of DT from D,M,T and F assets. The indicates zero or one more instances of an asset and indicates one or more instances of an asset.
The DT configuration specifies the relevant assets to use, the potential parameters to be set for these assets. If a DT needs to use RabbitMQ, InfluxDB like services supported by the platform, the DT configuration needs to have access credentials for these services.
This kind of generic DT definition is based on the DT examples seen in the wild. You are at liberty to deviate from this definition of DT. The only requirement is the ability to run the DT from either commandline or desktop.
Tip
If you are stepping into the world of Digital Twins, you might not have distinct digital twin assets. You are likely to have one directory of everything in which you run your digital twin. In such a case we recommend that you upload this monolithic digital twin into digital_twin/your_digital_twin_name directory.
"},{"location":"user/digital-twins/create.html#example","title":"Example","text":"The Examples repository contains a co-simulation setup for mass spring damper. This example illustrates the potential of using co-simulation for digital twins.
The file system contents for this example are:
workspace/\n data/\n mass-spring-damper\n input/\n output/\n\n digital_twins/\n mass-spring-damper/\n cosim.json\n time.json\n lifecycle/\n analyze\n clean\n evolve\n execute\n save\n terminate\n README.md\n\n functions/\n models/\n MassSpringDamper1.fmu\n MassSpringDamper2.fmu\n\n tools/\n common/\n data/\n functions/\n models/\n tools/\n maestro-2.3.0-jar-with-dependencies.jar\n
The workspace/data/mass-spring-damper/
contains input
and output
data for the mass-spring-damper digital twin.
The two FMU models needed for this digital twin are in models/
directory.
The co-simulation digital twin needs Maestro co-simulation orchestrator. Since this is a reusable asset for all the co-simulation based DTs, the tool has been placed in common/tools/
directory.
The actual digital twin configuration is specified in digital twins/mass-spring-damper
directory. The co-simulation configuration is specified in two json files, namely cosim.json
and time.json
. A small explanation of digital twin for its users can be placed in digital twins/mass-spring-damper/README.md
.
The launch program for this digital twin is in digital twins/mass-spring-damper/lifecycle/execute
. This launch program runs the co-simulation digital twin. The co-simulation runs till completion and then ends. The programs in digital twins/mass-spring-damper/lifecycle
are responsible for lifecycle management of this digital twin. The lifecycle page provides more explanation on these programs.
Execution of a Digital Twin
A frequent question arises on the run time characteristics of a digital twin. The natural intuition is to say that a digital twin must operate as long as its physical twin is in operation. If a digital twin runs for a finite time and then ends, can it be called a digital twin? The answer is a resounding YES. The Industry 4.0 usecases seen among SMEs have digital twins that run for a finite time. These digital twins are often run at the discretion of the user.
You can run this digital twin by,
- Go to Workbench tools page of the DTaaS website and open VNC Desktop. This opens a new tab in your browser
- A page with VNC Desktop and a connect button comes up. Click on Connect. You are now connected to the Linux Desktop of your workspace.
- Open a Terminal (black rectangular icon in the top left region of your tab) and type the following commands.
-
Download the example files by following the instructions given on examples overview.
-
Go to the digital twin directory and run
cd /workspace/examples/digital_twins/mass-spring-damper\nlifecycle/execute\n
The last command executes the mass-spring-damper digital twin and stores the co-simulation output in data/mass-spring-damper/output
.
"},{"location":"user/digital-twins/devops.html","title":"Digital Twins and DevOps","text":"DevOps has been a well established software development practice. We are bringing out an experimental feature of integration DevOps in the DTaaS.
Warning
This feature is under documented. Please expect some instability in this release. However, we are working to mature the feature and improve the documentation.
This feature requires specific installation setup.
- Integrated gitlab installation
- A valid gitlab repository for the logged in user. Please see an example repository. You can clone this repository and customize to your needs.
- A linked Gitlab Runner to the user gitlab repository.
"},{"location":"user/digital-twins/devops.html#manage","title":"Manage","text":"The manage tab allows for different operations on a digital twin:
- Checking the details (Details button)
- Delete (Delete button)
- Modify / Reconfigure (Reconfigure button)
A digital twin placed in the DTaaS has a certain recommended structure. Please see the assets pag for an explanation and this example.
The information page shown using the Details button, shows the README.md information stored inside the digital twin directory.
A reconfigure button opens an editor and shows all the files corresponding to a digital twin. All of these files can be updated. These files correspond to three categories.
- Description
- Configuration
- Lifecycle
"},{"location":"user/digital-twins/devops.html#execute","title":"Execute","text":"The execute tabs shows the possibility of executing multiple digital twins. Once an execution of digital twin is complete, you can see the execution log as well.
"},{"location":"user/digital-twins/lifecycle.html","title":"Digital Twin Lifecycle","text":"The physical products in the real world have product lifecycle. A simplified four-stage product life is illustrated here.
A digital twin tracking the physical products (twins) need to track and evolve in conjunction with the corresponding physical twin.
The possible activities undertaken in each lifecycle phases are illustrated in the figure.
(Ref: Minerva, R, Lee, GM and Crespi, N (2020) Digital Twin in the IoT context: a survey on technical features, scenarios and architectural models. Proceedings of the IEEE, 108 (10). pp. 1785-1824. ISSN 0018-9219.)
"},{"location":"user/digital-twins/lifecycle.html#lifecycle-phases","title":"Lifecycle Phases","text":"The four phase lifecycle has been extended to a lifecycle with eight phases. The new phase names and the typical activities undertaken in each phase are outlined in this section.
A DT lifecycle consists of explore, create, execute, save, analyse, evolve and terminate phases.
Phase Main Activities explore selection of suitable assets based on the user needs and checking their compatibility for the purposes of creating a DT. create specification of DT configuration. If DT already exists, there is no creation phase at the time of reuse. execute automated / manual execution of a DT based on its configuration. The DT configuration must be checked before starting the execution phase. analyse checking the outputs of a DT and making a decision. The outputs can be text files, or visual dashboards. evolve reconfigure DT primarily based on analysis. save involves saving the state of DT to enable future recovery. terminate stop the execution of DT. A digital twin faithfully tracking the physical twin lifecycle will have to support all the phases. It is also possible for digital twin engineers to add more phases to the digital twins they are developing. Thus it is important for the DTaaS software platform to accommodate the needs of different DTs.
A potential linear representation of the tasks undertaken in a digital twin lifecycle is shown here.
Again, this is only one possible pathway. Users are at liberty to alter the sequence of steps.
It is possible to map the lifecycle phases identified so far with the Build-Use-Share approach of the DTaaS software platform.
Even though not mandatory, having a matching coding structure makes it easy for users to create and manage their DTs within the DTaaS. It is recommended to have the following structure:
workspace/\n digital_twins/\n digital-twin-1/\n lifecycle/\n analyze\n clean\n evolve\n execute\n save\n terminate\n
A dedicated program exists for each phase of DT lifecycle. Each program can be as simple as a script that launches other programs or sends messages to a live digital twin.
The recommended way to implement lifecycle phases within DTaaS is to create scripts. These scripts can be as simple as shell scripts.
"},{"location":"user/digital-twins/lifecycle.html#example-lifecycle-scripts","title":"Example Lifecycle Scripts","text":"Here are the example programs / scripts to manage three phases in the lifecycle of mass-spring-damper DT.
lifecycle/execute#!/bin/bash\nmkdir -p /workspace/data/mass-spring-damper/output\n#cd ..\njava -jar /workspace/common/tools/maestro-2.3.0-jar-with-dependencies.jar \\\n import -output /workspace/data/mass-spring-damper/output \\\n --dump-intermediate sg1 cosim.json time.json -i -vi FMI2 \\\n output-dir>debug.log 2>&1\n
The execute phase uses the DT configuration, FMU models, and the Maestro tool to execute the digital twin. The script also stores the output of the co-simulation in /workspace/data/mass-spring-damper/output.
It is possible for a DT not to support a specific lifecycle phase. This intention can be specified with an empty script and a helpful message if deemed necessary.
lifecycle/analyze#!/bin/bash\nprintf \"operation is not supported on this digital twin\"\n
The lifecycle programs can call other programs in the code base. In the case of the lifecycle/terminate program, it calls another script to do the necessary job.
lifecycle/terminate#!/bin/bash\nlifecycle/clean\n
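The clean script called by terminate above is not shown on this page. A minimal sketch of what such a script could look like, assuming it only removes the artifacts produced by lifecycle/execute (the output directory and debug log named earlier); the shipped script may differ:

lifecycle/clean
#!/bin/bash
# Illustrative sketch (assumed content): remove the co-simulation output
# directory and the debug log created by lifecycle/execute
rm -rf /workspace/data/mass-spring-damper/output
rm -f debug.log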
"},{"location":"user/examples/index.html","title":"DTaaS Examples","text":"There are some example digital twins created for the DTaaS software. You can peruse these examples and follow the steps given in this Examples section to experience features of the DTaaS software platform and understand best practices for managing digital twins within the platform. Please see these slides and video to get an overview of these examples.
There are two demo videos available: CP-SENS project (slides and video) and Incubator (video). These two videos were recorded using DTaaS v0.5.0.
"},{"location":"user/examples/index.html#copy-examples","title":"Copy Examples","text":"The first step is to copy all the example code into your user workspace within the DTaaS. Use the given shell script to copy all the examples into /workspace/examples
directory.
wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS-examples/main/getExamples.sh\nbash getExamples.sh\n
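If the copy succeeds, the examples become visible inside your workspace. A quick check, assuming the default workspace layout used by these examples:

# List the copied example digital twins
ls /workspace/examples/digital_twins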
"},{"location":"user/examples/index.html#example-list","title":"Example List","text":"The digital twins provided in examples vary in their complexity. It is best to use the examples in the following order.
- Mass Spring Damper
- Water Tank Fault Injection
- Water Tank Model Swap
- Desktop Robotti and RabbitMQ
- Water Treatment Plant and OPC-UA
- Three Water Tanks with DT Manager Framework
- Flex Cell with Two Industrial Robots
- Incubator
- Firefighters in Emergency Environments
- Mass Spring Damper with NuRV Runtime Monitor FMU
- Water Tank Fault Injection with NuRV Runtime Monitor FMU
- Incubator Co-Simulation with NuRV Runtime Monitor FMU
- Incubator with NuRV Runtime Monitor as Service
- Incubator with NuRV Runtime Monitor FMU as Service
DTaaS examples
"},{"location":"user/examples/drobotti-rmqfmu/index.html","title":"Desktop Robotti with RabbitMQ","text":""},{"location":"user/examples/drobotti-rmqfmu/index.html#overview","title":"Overview","text":"This example demonstrates bidirectional communication between a mock physical twin and a digital twin of a mobile robot (Desktop Robotti). The communication is enabled by RabbitMQ Broker.
"},{"location":"user/examples/drobotti-rmqfmu/index.html#example-structure","title":"Example Structure","text":"The mock physical twin of mobile robot is created using two python scripts
- data/drobotti_rmqfmu/rmq-publisher.py
- data/drobotti_rmqfmu/consume.py
The mock physical twin sends its physical location in (x,y) coordinates and expects a Cartesian distance calculated by the digital twin.
The rmq-publisher.py script reads the recorded (x,y) physical coordinates of the mobile robot. The recorded values are stored in a data file. These (x,y) values are published to the RabbitMQ broker. The published (x,y) values are consumed by the digital twin.
The consume.py script subscribes to the RabbitMQ broker and waits for the calculated distance value from the digital twin.
The digital twin consists of an FMI-based co-simulation, where Maestro is used as the co-simulation orchestration engine. In this case, the co-simulation is created by using two FMUs - RMQ FMU (rabbitmq-vhost.fmu) and distance FMU (distance-from-zero.fmu). The RMQ FMU receives the (x,y) coordinates from rmq-publisher.py and sends the calculated distance value to consume.py. The RMQ FMU uses the RabbitMQ broker for communication with the mock mobile robot, i.e., rmq-publisher.py and consume.py. The distance FMU is responsible for calculating the distance between (0,0) and (x,y). The RMQ FMU and distance FMU exchange values during co-simulation.
"},{"location":"user/examples/drobotti-rmqfmu/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models, one tool, one data, and two scripts to create mock physical twin. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models distance-from-zero.fmu Private No rmq-vhost.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes Data drobotti_playback_data.csv Private No Mock PT rmq-publisher.py Private No consume.py Private No This DT has many configuration files. The coe.json and multimodel.json are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
The RabbitMQ access credentials need to be provided in multimodel.json. The rabbitMQ-credentials.json file provides RabbitMQ access credentials for the mock PT Python scripts. Please add your credentials in both of these files.
"},{"location":"user/examples/drobotti-rmqfmu/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool and pip packages for python scripts Execute Runs both DT and mock PT Clean Clears run logs and outputs"},{"location":"user/examples/drobotti-rmqfmu/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/drobotti_rmqfmu\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/drobotti-rmqfmu/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 in the workspace. Also install the required python pip packages for rmq-publisher.py and consume.py scripts.
lifecycle/create\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#execute","title":"Execute","text":"Run the python scripts to start mock physical twin. Also run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the two FMU models.
lifecycle/execute\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/digital_twins/drobotti_rmqfmu directory.
Executing the DT will generate and launch a co-simulation (RMQFMU and distance FMU) and two Python scripts: one to publish data that is read from a file, and one to consume what is sent by the distance FMU.
In this example the DT will run for 10 seconds, with a step size of 100 ms. Thereafter it is possible to examine the logs produced in /workspace/examples/digital_twins/drobotti_rmqfmu/target. The outputs for each FMU, xpos and ypos for the RMQFMU, and the distance for the distance FMU, are recorded in the outputs.csv file. Other logs can be examined for each FMU and the publisher scripts. Note that the RMQFMU only sends data if the current input is different from the previous one.
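For a quick look at the recorded outputs from the terminal, the following sketch can be used, assuming outputs.csv is written directly inside the target directory mentioned above:

# Preview the co-simulation outputs (xpos, ypos and the calculated distance)
head /workspace/examples/digital_twins/drobotti_rmqfmu/target/outputs.csv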
"},{"location":"user/examples/drobotti-rmqfmu/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#references","title":"References","text":"The RabbitMQ FMU github repository contains complete documentation and source code of the rmq-vhost.fmu.
More information about the case study is available in:
Frasheri, Mirgita, et al. \"Addressing time discrepancy between digital\nand physical twins.\" Robotics and Autonomous Systems 161 (2023): 104347.\n
"},{"location":"user/examples/flex-cell/index.html","title":"Flex Cell Digital Twin with Two Industrial Robots","text":""},{"location":"user/examples/flex-cell/index.html#overview","title":"Overview","text":"The flex-cell Digital Twin is a case study with two industrial robotic arms, a UR5e and a Kuka LBR iiwa 7, working in a cooperative setting on a manufacturing cell.
The case study focuses on the robot positioning in the discrete Cartesian space of the flex-cell working space. Therefore, it is possible to send (X,Y,Z) commands to both robots, which refer to the target hole and height they should move to.
The flex-cell case study is managed using the TwinManager (formerly DT Manager), which is packaged as a jar library in the tools and run from a Java main file. The TwinManager uses Maestro as a slave for co-simulation, so it generates the output of the co-simulation and can interact with the real robots at the same time (with the proper configuration and setup). The main file can be changed according to the application scope, i.e., /workspace/examples/tools/flex-cell/FlexCellDTaaS.java can be modified to get a different result.
The /workspace/examples/models/flex-cell/ folder contains the .fmu files for the kinematic models of the robotic arms, the .urdf files for visualization (including the grippers), and the .aasx files for the schema representation with Asset Administration Shell.
The case study also uses RabbitMQFMU to inject values into the co-simulation; therefore, the rabbitmqfmu is in the models folder as well. Right now, RabbitMQFMU is only used for injecting values into the co-simulation, but not the other way around. The TwinManager is in charge of reading the values from the co-simulation output and the current state of the physical twins.
"},{"location":"user/examples/flex-cell/index.html#example-structure","title":"Example Structure","text":"The example structure represents the components of the flex-cell DT implementation using the TwinManager
architecture.
The TwinManager orchestrates the flex-cell DT via the Flex-cell DT System, which is composed of 2 smaller Digital Twins, namely, the DT UR5e and the DT Kuka lbr iiwa 7. The TwinManager also provides the interface for the Physical Twins, namely, PT UR5e and PT Kuka lbr iiwa 7. Each Physical Twin and Digital Twin System has a particular endpoint (with a different specialization), which is initialized from configuration files and data model (twin schema).
The current endpoints used in this implementation are:
Digital or Physical Twin Endpoint Flex-cell DT System MaestroEndpoint DT UR5e FMIEndpoint DT Kuka lbr iiwa 7 FMIEndpoint PT UR5e MQTTEndpoint and RabbitMQEndpoint PT Kuka lbr iiwa 7 MQTTEndpoint and RabbitMQEndpoint The Flex-cell DT System uses another configuration to be integrated with the Maestro co-simulation engine.
In the lower part, the Flex-cell System represents the composed physical twin, including the two robotic arms and controller and the Flex-cell Simulation is the mock-up representation for the real system, which is implemented by FMU blocks and their connections.
"},{"location":"user/examples/flex-cell/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses seven models, five tools, six data files, two functions, and one script. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Model kukalbriiwa_model.fmu Private No kuka_irw_gripper_rg6.urdf Private No kuka.aasx Private No ur5e_model.fmu Private No ur5e_gripper_2fg7.urdf Private No ur5e.aasx Private No rmq-vhost.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes TwinManagerFramework-0.0.2.jar Private Yes urinterface (installed with pip) Private No kukalbrinterface Private No robots_flexcell Private No FlexCellDTaaS.java (main script) Private No Data publisher-flexcell-physical.py Private No ur5e_mqtt_publisher.py Private No connections.conf Private No outputs.csv Private No kukalbriiwa7_actual.csv Private No ur5e_actual.csv Private No Function plots.py Private No prepare.py Private No"},{"location":"user/examples/flex-cell/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
- Installation of dependencies in the create phase.
- Preparing the credentials for connections in the prepare phase.
- Execution of the experiment in the execution phase.
- Saving experiments in the save phase.
- Plotting the results of the co-simulation and the real data coming from the robots in the analyze phase.
- Terminating the background processes and cleaning up the outputs in the termination phase.
Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool, Compiles source code of TwinManager to create a usable jar package (used as tool) Prepare Takes the RabbitMQ and MQTT credentials in connections.conf file and configures different assets of DT. Execute The TwinManager executes the flex-cell DT and produces output in data/flex-cell/output
directory Save Save the experimental results Analyze Uses plotting functions to generate plots of co-simulation results Terminate Terminating the background processes Clean Cleans up the output data"},{"location":"user/examples/flex-cell/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/flex-cell\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
This example requires Java 11. The create script installs Java 11; however if you have already installed other Java versions, your default java might be pointing to another version. You can check and modify the default version using the following commands.
java -version\nupdate-alternatives --config java\n
Now, run the following scripts:
"},{"location":"user/examples/flex-cell/index.html#create","title":"Create","text":"Installs Open Java Development Kit 11 and a python virtual environment with pip dependencies. Also builds the TwinManager
tool (TwinManagerFramework-0.0.2.jar) from source code.
lifecycle/create\n
"},{"location":"user/examples/flex-cell/index.html#prepare","title":"Prepare","text":"Configure different assets of DT with these credentials. The functions/flex-cell/prepare.py
script is used for this purpose. The only thing needed to set up the connection is to update the file /workspace/examples/data/flex-cell/input/connections.conf
with the connection parameters for MQTT and RabbitMQ and then execute the prepare
script.
lifecycle/prepare\n
The following files are updated with the configuration information:
/workspace/examples/digital_twins/flex-cell/kuka_actual.conf
/workspace/examples/digital_twins/flex-cell/ur5e_actual.conf
/workspace/examples/data/flex-cell/input/publisher-flexcell-physical.py
modelDescription.xml
for the RabbitMQFMU require special credentials to connect to the RabbitMQ and the MQTT brokers.
"},{"location":"user/examples/flex-cell/index.html#execute","title":"Execute","text":"Execute the flex-cell digital twin using TwinManager. TwinManager in-turn runs the co-simulation using Maestro. Generates the co-simulation output.csv file at /workspace/examples/data/flex-cell/output
. The execution needs to be stopped with control + c
since the TwinManager
runs the application in a non-stopping loop.
lifecycle/execute\n
"},{"location":"user/examples/flex-cell/index.html#save","title":"Save","text":"Each execution of the DT is treated as a single run. The results of one execution are saved as time-stamped co-simulation output file in The TwinManager executes the flex-cell digital twin and produces output in data/flex-cell/output/saved_experiments
directory.
lifecycle/save\n
The execute
and save
scripts can be executed in that order any number of times. A new file data/flex-cell/output/saved_experiments
directory with each iteration.
"},{"location":"user/examples/flex-cell/index.html#analyze","title":"Analyze","text":"There are dedicated plotting functions in functions/flex-cell/plots.py
. This script plots the co-simulation results against the recorded values from the two robots.
lifecycle/analyze\n
"},{"location":"user/examples/flex-cell/index.html#terminate","title":"Terminate","text":"Stops the Maestro running in the background. Also stops any other jvm process started during execute phase.
lifecycle/terminate\n
"},{"location":"user/examples/flex-cell/index.html#clean","title":"Clean","text":"Removes the output generated during execute phase.
lifecycle/clean\n
"},{"location":"user/examples/flex-cell/index.html#examining-the-results","title":"Examining the results","text":"Executing this Digital Twin will generate a co-simulation output, but the results can also be monitored from updating the /workspace/examples/tools/flex-cell/FlexCellDTaaS.java
with a specific set of getAttributeValue
commands, such as shown in the code. That main file enables the online execution and comparison on Digital Twin and Physical Twin at the same time and at the same abstraction level.
The output is generated to the /workspace/examples/data/flex-cell/output
folder. In case a specific experiments is to be saved, the save lifecycle script stores the co-simulation results into the /workspace/examples/data/flex-cell/output/saved_experiments
folder.
In the default example, the co-simulation is run for 11 seconds in steps of 0.2 seconds. This can be modified for a longer period and different step size. The output stored in outputs.csv
contains the joint position of both robotic arms and the current discrete (X,Y,Z) position of the TCP of the robot. Additional variables can be added, such as the discrete (X,Y,Z) position of the other joints.
When connected to the real robots, the tools urinterface
and kukalbrinterface
log their data at a higher sampling rate.
"},{"location":"user/examples/flex-cell/index.html#references","title":"References","text":"The RabbitMQ FMU github repository contains complete documentation and source code of the rmq-vhost.fmu.
More information about the TwinManager (formerly DT Manager) and the case study is available in:
- D. Lehner, S. Gil, P. H. Mikkelsen, P. G. Larsen and M. Wimmer, \"An Architectural Extension for Digital Twin Platforms to Leverage Behavioral Models,\" 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, 2023, pp. 1-8, doi: 10.1109/CASE56687.2023.10260417.
- S. Gil, P. H. Mikkelsen, D. Tola, C. Schou and P. G. Larsen, \"A Modeling Approach for Composed Digital Twins in Cooperative Systems,\" 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), Sinaia, Romania, 2023, pp. 1-8, doi: 10.1109/ETFA54631.2023.10275601.
- S. Gil, C. Schou, P. H. Mikkelsen, and P. G. Larsen, \"Integrating Skills into Digital Twins in Cooperative Systems,\" in 2024 IEEE/SICE International Symposium on System Integration (SII), 2024, pp. 1124-1131, doi: 10.1109/SII58957.2024.10417610.
"},{"location":"user/examples/incubator/index.html","title":"Incubator Digital Twin","text":""},{"location":"user/examples/incubator/index.html#overview","title":"Overview","text":"This is a case study of an Incubator with the purpose of understanding the steps and processes involved in developing a digital twin system. This incubator is an insulated container with the ability to keep a temperature and heat, but not cool. A picture of the incubator is given below.
The overall purpose of the system is to reach a certain temperature within a box and keep the temperature regardless of content. An overview of the system can be seen below:
The system consists of:
- 1x styrofoam box in order to have an insulated container
- 1x heat source to heat up the content within the Styrofoam box.
- 1x fan to distribute the heating within the box
- 2x temperature sensors to monitor the temperature within the box
- 1x temperature sensor to monitor the temperature outside the box
- 1x controller to actuate the heat source and the fan, read sensory information from the temperature sensors, and communicate with the digital twin.
The original repository for the example can be found at: Original repository. This trimmed version of the codebase does not include the following:
- docker support
- tests
- datasets
The original repository contains the complete documentation of the example, including the full system architecture, instructions for running with a physical twin, and instructions for running a 3D visualization of the incubator.
"},{"location":"user/examples/incubator/index.html#digital-twin-structure","title":"Digital Twin Structure","text":"This diagrams shows the main components and the interfaces they use to communicate. All components communicate via the RabbitMQ message exchange, and the data is stored in a time series database. The RabbitMQ and InfluxDB are platform services hosted by the DTaaS.
The Incubator digital twin is a pre-packaged digital twin. It can be used as is or integrated with other digital twins.
The mock physical twin is executed from incubator/mock_plant/real_time_model_solver.py
script.
"},{"location":"user/examples/incubator/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses a plethora of Python scripts to run the digital twin. By default it is configured to run with a mock physical twin. Furthermore, it depends on a RabbitMQ and an InfluxDB instances.
There is one configuration file: simulation.conf. The RabbitMQ and InfluxDB configuration parameters need to be updated.
"},{"location":"user/examples/incubator/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Create Potentially updates the system and installs Python dependencies Execute Executes the Incubator digital twin and produces output in the terminal and in incubator/log.log. Clean Removes the log file."},{"location":"user/examples/incubator/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/incubator\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/incubator/index.html#create","title":"Create","text":"Potentially updates the system and installs Python dependencies.
lifecycle/create\n
"},{"location":"user/examples/incubator/index.html#execute","title":"Execute","text":"Executes the Incubator digital twin with a mock physical twin. Pushes the results in the terminal, incubator/log.log, and in InfluxDB.
lifecycle/execute\n
"},{"location":"user/examples/incubator/index.html#clean","title":"Clean","text":"Removes the output log file.
lifecycle/clean\n
"},{"location":"user/examples/incubator/index.html#examining-the-results","title":"Examining the results","text":"After starting all services successfully, the controller service will start producing output that looks like the following:
time execution_interval elapsed heater_on fan_on room box_air_temperature state\n19/11 16:17:59 3.00 0.01 True False 10.70 19.68 Heating\n19/11 16:18:02 3.00 0.03 True True 10.70 19.57 Heating\n19/11 16:18:05 3.00 0.01 True True 10.70 19.57 Heating\n19/11 16:18:08 3.00 0.01 True True 10.69 19.47 Heating\n19/11 16:18:11 3.00 0.01 True True 10.69 19.41 Heating\n
An InfluxDB dashboard can be setup based on incubator/digital_twin/data_access/influxdbserver/dashboards/incubator_data.json
. If the dashboard on the InfluxDB is setup properly, the following visualization can be seen:
"},{"location":"user/examples/incubator/index.html#references","title":"References","text":" Forked from: Incubator repository with commit ID: 989ccf5909a684ad26a9c3ec16be2390667643aa
To understand what a digital twin is, we recommend you read/watch one or more of the following resources:
- Feng, Hao, Cláudio Gomes, Casper Thule, Kenneth Lausdahl, Alexandros Iosifidis, and Peter Gorm Larsen. \"Introduction to Digital Twin Engineering.\" In 2021 Annual Modeling and Simulation Conference (ANNSIM), 1-12. Fairfax, VA, USA: IEEE, 2021. https://doi.org/10.23919/ANNSIM52504.2021.9552135.
- Video recording of presentation by Claudio Gomes
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html","title":"Incubator Digital Twin with NuRV FMU Monitoring Service","text":""},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#overview","title":"Overview","text":"This example demonstrates how an FMU can be used as a runtime monitoring service (in this example NuRV[1]) and connected with the Incubator digital twin to verify runtime behavior of the Incubator.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#simulated-scenario","title":"Simulated scenario","text":"This example simulates a scenario where the lid of the Incubator is removed and later put back on. The Incubator is equipped with anomaly detection capabilities, which can detect anomalous behavior (i.e. the removal of the lid). When an anomaly is detected, the Incubator triggers an energy saving mode where the heater is turned off.
From a monitoring perspective, we wish to verify that within 3 messages of an anomaly detection, the energy saving mode is turned on. To verify this behavior, we construct the property:
.
The monitor will output the unknown state as long as the property is satisfied and will transition to the false state once a violation is detected.
The simulated scenario progresses as follows:
- Initialization: The services are initialized and the Kalman filter in the Incubator is given 2 minutes to stabilize. Sometimes, the anomaly detection algorithm will detect an anomaly at startup even though the lid is still on. It will disappear after approx 15 seconds.
- After 2 minutes: The lid is lifted and an anomaly is detected. The energy saver is turned on shortly after.
- After another 30 seconds: The energy saver is manually disabled producing a false verdict.
- After another 30 seconds: The lid is put back on and the anomaly detection is given time to detect that the lid is back on. The monitor is then reset producing an Unknown verdict again. The simulation then ends.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#example-structure","title":"Example structure","text":"A diagram depicting the logical software structure of the example can be seen below.
The execute script is responsible for starting the NuRV service and running the Python script that controls the scenario (execute.py).
The execute.py script starts the Incubator services and runs the example scenario. Once the Incubator DT is started, a RabbitMQ client is created that subscribes to changes in the anomaly and energy_saving states of the DT, as well as the verdicts produced by the NuRV service. Each time an update is received, the full state and verdict is printed to the console.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#digital-twin-configuration","title":"Digital Twin configuration","text":"Before running the example, please configure the simulation.conf file with your RabbitMQ credentials.
The example uses the following assets:
Asset Type Names of Assets Visibility Reuse in other Examples Tools common/tool/NuRV/NuRV Common Yes Other common/fmi2_headers Common Yes DT common/digital_twins/incubator Common Yes Specification safe-operation.smv Private No Script execute.py Private No The safe-operation.smv file contains the default monitored specification as described in the Simulated scenario section. These can be configured as desired.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#lifecycle-phases","title":"Lifecycle phases","text":"The lifecycle phases for this example include:
Lifecycle phase Completed tasks create Downloads the necessary tools and creates a virtual python environment with the necessary dependencies execute Runs a python script that starts up the necessary services as well as the Incubator simulation. Various status messages are printed to the console, including the monitored system states and monitor verdict. If required, change the execute permissions of lifecycle scripts you need to execute. This can be done using the following command.
chmod +x lifecycle/{script}\n
where {script} is the name of the script, e.g. create, execute etc.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#running-the-example","title":"Running the example","text":"To run the example, first run the following command in a terminal:
cd /workspace/examples/digital_twins/incubator-NuRV-fmu-monitor-service/\n
Then, first execute the create script followed by the execute script using the following command:
lifecycle/{script}\n
The execute script will then start outputting system states and the monitor verdict approx every 3 seconds. The output is printed as follows.
\"State: {anomaly state} & {energy_saving state}\"
where \"anomaly\" indicates that an anomaly is detected and \"!anomaly\" indicates that an anomaly is not currently detected. The same format is used for the energy_saving state. NuRV verdicts are printed as follows
\"Verdict from NuRV: {verdict}\".
The monitor verdict can be false or unknown, where the latter indicates that the monitor does not yet have sufficient information to determine the satisfaction of the property. The monitor will never produce a true verdict as the entire trace must be verified to ensure satisfaction due to the G operator. Thus the unknown state can be viewed as a tentative true verdict.
An example output trace is provided below:
....\nUsing LIFECYCLE_PATH: /workspace/examples/digital_twins/incubator-NuRV-fmu-monitor-service/lifecycle\nUsing INCUBATOR_PATH: /workspace/examples/digital_twins/incubator-NuRV-fmu-monitor-service/lifecycle/../../../common/digital_twins/incubator\nStarting NuRV FMU Monitor Service, see output at /tmp/nurv-fmu-service.log\nNuRVService.py PID: 13496\nStarting incubator\nConnected to rabbitmq server.\nRunning scenario with initial state: lid closed and energy saver on\nSetting energy saver mode: enable\nSetting G_box to: 0.5763498\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\n
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#references","title":"References","text":" - Information on the NuRV monitor can be found on FBK website.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html","title":"Incubator Digital Twin with NuRV monitoring service","text":""},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#overview","title":"Overview","text":"This example demonstrates how a runtime monitoring service (in this example NuRV[1]) can be connected with the Incubator digital twin to verify runtime behavior of the Incubator.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#simulated-scenario","title":"Simulated scenario","text":"This example simulates a scenario where the lid of the Incubator is removed and later put back on. The Incubator is equipped with anomaly detection capabilities, which can detect anomalous behavior (i.e. the removal of the lid). When an anomaly is detected, the Incubator triggers an energy saving mode where the heater is turned off.
From a monitoring perspective, we wish to verify that within 3 simulation steps of an anomaly detection, the energy saving mode is turned on. To verify this behavior, we construct the property: . Whenever a True or False verdict is produced by the monitor, it is reset, allowing for the detection of repeated satisfaction/violation detections of the property.
The simulated scenario progresses as follows:
- Initialization: The services are initialized and the Kalman filter in the Incubator is given 2 minutes to stabilize. Sometimes, the anomaly detection algorithm will detect an anomaly at startup even though the lid is still on. It will disappear after approx 15 seconds.
- After 2 minutes: The lid is lifted and an anomaly is detected. The energy saver is turned on shortly after
- After another 30 seconds: The energy saver is manually disabled producing a False verdict.
- After another 30 seconds: The lid is put back on and the anomaly detection is given time to detect that the lid is back on. The simulation then ends.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#example-structure","title":"Example structure","text":"A diagram depicting the logical software structure of the example can be seen below.
The execute.py script is responsible for orchestrating and starting all the relevant services in this example. This includes the Incubator DT, CORBA naming service (omniNames) and the NuRV monitor server as well as implementing the Monitor connector component that connects the DT output to the NuRV monitor server.
The NuRV monitor server utilizes a CORBA naming service where it registers under a specific name. A user can then query the naming service for the specific name, to obtain a reference to the monitor server. For more information on how the NuRV monitor server works, please refer to [1].
After establishing connection with the NuRV monitor server, the Incubator DT is started and a RabbitMQ client is created that subscribes to changes in the anomaly and energy_saving states of the DT. Each time an update is received of either state, the full state (the new updated state and the previous other state) is pushed to the NuRV monitor server whereafter the verdict is printed to the console.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#digital-twin-configuration","title":"Digital Twin configuration","text":"Before running the example, please configure the simulation.conf file with your RabbitMQ credentials.
The example uses the following assets:
Asset Type Names of Assets Visibility Reuse in other Examples Service common/services/NuRV_orbit Common Yes DT common/digital_twins/incubator Common Yes Specification safe-operation.smv Private No Script execute.py Private No The safe-operation.smv file contains the default monitored specification as described in the Simulated scenario section. These can be configured as desired.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#lifecycle-phases","title":"Lifecycle phases","text":"The lifecycle phases for this example include:
Lifecycle phase Completed tasks create Downloads the necessary tools and creates a virtual python environment with the necessary dependencies execute Runs a python script that starts up the necessary services as well as the Incubator simulation. Various status messages are printed to the console, including the monitored system states and monitor verdict. clean Removes created data directory and incubator log files. If required, change the execute permissions of lifecycle scripts you need to execute. This can be done using the following command
chmod +x lifecycle/{script}\n
where {script} is the name of the script, e.g. create, execute etc.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#running-the-example","title":"Running the example","text":"To run the example, first run the following command in a terminal:
cd /workspace/examples/digital_twins/incubator-monitor-server/\n
Then, first execute the create script (this can take a few mins depending on your network connection) followed by the execute script using the following command:
lifecycle/{script}\n
The execute script will then start outputting system states and the monitor verdict approx every 3 seconds. The output is printed as follows \"State: {anomaly state} & {energy_saving state}, verdict: {Verdict}\" where \"anomaly\" indicates that an anomaly is detected and \"!anomaly\" indicates that an anomaly is not currently detected. The same format is used for the energy_saving state.
The monitor verdict can be True, False or Unknown, where the latter indicates that the monitor does not yet have sufficient information to determine the satisfaction of the property.
An example output trace is provided below:
....\nRunning scenario with initial state: lid closed and energy saver on\nSetting energy saver mode: enable\nSetting G_box to: 0.5763498\nState: !anomaly & !energy_saving, verdict: True\nState: !anomaly & !energy_saving, verdict: True\n....\nState: anomaly & !energy_saving, verdict: Unknown\nState: anomaly & energy_saving, verdict: True\nState: anomaly & energy_saving, verdict: True\n
There is currently some startup issues with connecting to the NuRV server, and it will likely take a few tries before the connection is established. This is however handled automatically.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#references","title":"References","text":" - Information on the NuRV monitor can be found on FBK website.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html","title":"Incubator Co-Simulation Digital Twin validation with NuRV Monitor","text":""},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#overview","title":"Overview","text":"This example demonstrates how to validate some digital twin components using FMU monitors (in this example, the monitors are generated with NuRV[1]).
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#simulated-scenario","title":"Simulated scenario","text":"This example validates some components of the Incubator digital twin, by performing a simulation in which the commponents are wrapped inside FMUs, and are then inspected at runtime by some a FMU monitor generated by NuRV. Please note that the link to Incubator digital twin is only provided to know the details of the incubator physical twin. The digital twin (DT) presented here is a co-simulation DT of the Incubator.
The input data for the simulation is generated by a purpose-built FMU component named source, which supplies testing data to the anomaly detector, simulating an anomaly occurring at time t=60s. An additional component, watcher, is employed to verify whether the energy saver activates in response to an anomaly reported by the anomaly detector.
The output of the watcher is the passed to the monitor, which ensures that when an anomaly is detected, the energy saver activates within a maximum of three simulation cycles.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#example-structure","title":"Example structure","text":"A diagram depicting the logical software structure of the example can be seen below.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#digital-twin-configuration","title":"Digital Twin configuration","text":"The example uses the following assets:
Asset Type Names of Assets Visibility Reuse in other Examples Models anomaly_detection.fmu Private No energy_saver.fmu Private No Source.fmu Private No Watcher.fmu Private No Specification safe-operation.smv Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The safe-operation.smv file contains the default monitored specification as described in the Simulated scenario section.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#lifecycle-phases","title":"Lifecycle phases","text":"The lifecycle phases for this example include:
Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro toolGenerates and compiles the monitor FMU Execute Produces and stores output in data/incubator-NuRV-monitor-validation/output directory Clean Clears run logs and outputs If required, change the execute permissions of lifecycle scripts you need to execute. This can be done using the following command
chmod +x lifecycle/{script}\n
where {script} is the name of the script, e.g. create, execute etc.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/incubator-NuRV-monitor-validation\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#create","title":"Create","text":" - Installs Open Java Development Kit 17 in the workspace.
- Generates and compiles the monitor FMU from the NuRV specification
lifecycle/create\n
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#execute","title":"Execute","text":"Run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the FMU models.
lifecycle/execute\n
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/incubator-NuRV-monitor-validation/output/plots
.
lifecycle/analyze\n
A sample plot is given here.
In the plot, four color-coded indicators provide a visual representation of distinct values. The blue line depicts the simulated temperature, while orange one represents the temperature estimate. A red dashed line indicates the target temperature set by the energy saver component. The green line shows the monitor's output verdict. As observed, when there is a disparity between the estimated and actual temperatures, the energy saver adjusts the target temperature downward, indicating that the component is working properly. Thus, the output of the monitor is fixed at unknown
(0
), signifying that the monitoring property is not violated.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/incubator-NuRV-monitor-validation/output directory where the logs are also included.
Figures of the output results can be found in the /workspace/examples/data/incubator-NuRV-monitor-validation/output directory.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#references","title":"References","text":" - More information about NuRV is available here.
"},{"location":"user/examples/mass-spring-damper/index.html","title":"Mass Spring Damper","text":""},{"location":"user/examples/mass-spring-damper/index.html#overview","title":"Overview","text":"The mass spring damper digital twin (DT) comprises two mass spring dampers and demonstrates how a co-simulation based DT can be used within DTaaS.
"},{"location":"user/examples/mass-spring-damper/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/mass-spring-damper/index.html#example-structure","title":"Example Structure","text":"There are two simulators included in the study, each representing a mass spring damper system. The first simulator calculates the mass displacement and speed of for a given force acting on mass . The second simulator calculates force given a displacement and speed of mass . By coupling these simulators, the evolution of the position of the two masses is computed.
"},{"location":"user/examples/mass-spring-damper/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models MassSpringDamper1.fmu Private Yes MassSpringDamper2.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The co-sim.json
and time.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
"},{"location":"user/examples/mass-spring-damper/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/mass-spring-damper/output directory Clean Clears run logs and outputs"},{"location":"user/examples/mass-spring-damper/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/mass-spring-damper\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/mass-spring-damper/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 in the workspace.
lifecycle/create\n
"},{"location":"user/examples/mass-spring-damper/index.html#execute","title":"Execute","text":"Run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the two FMU models.
lifecycle/execute\n
"},{"location":"user/examples/mass-spring-damper/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/mass-spring-damper/output directory.
You can also view run logs in the /workspace/examples/digital_twins/mass-spring-damper.
"},{"location":"user/examples/mass-spring-damper/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/mass-spring-damper/index.html#references","title":"References","text":"More information about co-simulation techniques and mass spring damper case study are available in:
Gomes, Cl\u00e1udio, et al. \"Co-simulation: State of the art.\"\narXiv preprint arXiv:1702.00686 (2017).\n
The source code for the models used in this DT are available in mass spring damper github repository.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html","title":"Mass Spring Damper with NuRV Runtime Monitor","text":""},{"location":"user/examples/mass-spring-damper-monitor/index.html#overview","title":"Overview","text":"This digital twin is derives from te Mass String Damper digital twin.
The mass spring damper digital twin (DT) comprises two mass spring dampers and demonstrates how a co-simulation based DT can be used within DTaaS. This version of the example is expanded with a monitor generated by NuRV. More information about NuRV is available here.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/mass-spring-damper-monitor/index.html#example-structure","title":"Example Structure","text":"There are two simulators included in the study, each representing a mass spring damper system. The first simulator calculates the mass displacement and speed of for a given force acting on mass . The second simulator calculates force given a displacement and speed of mass . By coupling these simulators, the evolution of the position of the two masses is computed.
Additionally, a monitor is inserted in the simulation to check at runtime whether the displacement of the two masses stays below a maximum threshold.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models MassSpringDamper1.fmu Private Yes MassSpringDamper2.fmu Private Yes m2.fmu Private No RtI.fmu Private Yes Specification m2.smv Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The co-sim.json
and time.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro toolGenerates and compiles the monitor FMU Execute Produces and stores output in data/mass-spring-damper-monitor/output directory Clean Clears run logs and outputs"},{"location":"user/examples/mass-spring-damper-monitor/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/mass-spring-damper-monitor\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#create","title":"Create","text":" - Installs Open Java Development Kit 17 in the workspace.
- Generates and compiles the monitor FMU from the NuRV specification
lifecycle/create\n
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#execute","title":"Execute","text":"Run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the two FMU models.
lifecycle/execute\n
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/mass-spring-damper-monitor/output/plots
.
lifecycle/analyze\n
A sample plot is given here.
In the plot, three color-coded indicators are used to represent different values. The blue line shows the distance between the two masses, while the green indicates the monitor's verdict. A red dashed line serves as a reference point, marking the distance checked by the monitor. Since the distance of the masses is always below the threshold, the output of the monitor is fixed to unknown
(0
).
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/mass-spring-damper-monitor/output directory.
You can also view run logs in the /workspace/examples/digital_twins/mass-spring-damper-monitor.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#references","title":"References","text":"More information about co-simulation techniques and mass spring damper case study are available in:
Gomes, Cl\u00e1udio, et al. \"Co-simulation: State of the art.\"\narXiv preprint arXiv:1702.00686 (2017).\n
The source code for the models used in this DT are available in mass spring damper github repository.
"},{"location":"user/examples/o5g/index.html","title":"Firefighter Mission in a Burning Building","text":"In an firefighter mission it is important to monitor the oxygen levels of each firefighters Self Contained Breating Aparatus (SCBA) in context of their mission.
"},{"location":"user/examples/o5g/index.html#physical-twin-overview","title":"Physical Twin Overview","text":"Image: Schematic overview of a firefighter mission. Note the mission commander on the lower left documenting the air supply pressure levels provided by radio communication from the firefighters inside and around the burning building. This image was created with the assistance of DALL\u00b7E.
We assume the following scenario:
- a set of firefighters work to extinguish a burning building
- they each use an SCBA with pressurised oxygen to breath
- a mission commander on the outside coordinates the efforts and surveills the oxygen levels
"},{"location":"user/examples/o5g/index.html#digital-twin-overview","title":"Digital Twin Overview","text":"In this example a monitor is implemented, that calculates how much time the firefighers have left, until they need to leave the building. To that end, the inputs used are:
- 3D-model of the building in which the mission takes place,
- pressure data of a firefighters SCBA and
- firefighters location inside of the building
are used to estimate:
- the shortest way out,
- how much time this will need and
- how much time is left until all oxygen from the SCBA is used up.
The remaining mission time is monitored and the firefighter receive a warning if it drops under a certain threshold.
This example is an implementation of the the paper Digital Twin for Rescue Missions--a Case Study by Leucker et al.
"},{"location":"user/examples/o5g/index.html#quick-check","title":"Quick Check","text":"Before runnnig this example please make sure the following files are at the correct locations:
/workspace/examples/\n data/o5g/input/\n runTessla.sh\n sensorSimulation.py\n telegraf.conf\n\n models/\n lab.ifc\n makefmu.mos\n PathOxygenEstimate.mo\n\n tools/\n graphToPath.py\n ifc_to_graph\n pathToTime.py\n tessla-telegraf-connector/\n tessla-telegraf-connector/\n tessla.jar\n specification.tessla (run-time specification)\n\n digital_twins/o5g/\n main.py\n config\n lifecycle/ (scripts)\n
"},{"location":"user/examples/o5g/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"All configuration for this example is contained in digital_twins/o5g/config
.
To use the MQTT server, account information needs to be provided. The topics are set to their default values, which allow the DT to access the mock physical twin's sensor metrics and to send back alerts.
export O5G_MQTT_SERVER=\nexport O5G_MQTT_PORT=\nexport O5G_MQTT_USER=\nexport O5G_MQTT_PASS=\n\nexport O5G_MQTT_TOPIC_SENSOR='vgiot/ue/metric'\nexport O5G_MQTT_TOPIC_AIR_PREDICTION='vgiot/dt/prediction'\nexport O5g_MQTT_TOPIC_ALERT='vgiot/dt/alerts'\n
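As a hedged illustration of how these variables are typically consumed (this is not the DT's main.py; it is a minimal paho-mqtt sketch that only subscribes to the sensor topic and publishes a dummy message to the alert topic, using the variable names exactly as exported above):

# Minimal paho-mqtt sketch (1.x-style constructor): subscribe to the sensor
# topic and publish a dummy alert. Connection details come from the
# O5G_MQTT_* environment variables listed above.
import os
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"sensor reading on {msg.topic}: {msg.payload.decode()}")
    # The real DT would compute the remaining air time here before alerting.
    client.publish(os.environ["O5g_MQTT_TOPIC_ALERT"], "low-air-warning")

client = mqtt.Client()
client.username_pw_set(os.environ["O5G_MQTT_USER"], os.environ["O5G_MQTT_PASS"])
client.on_message = on_message
client.connect(os.environ["O5G_MQTT_SERVER"], int(os.environ["O5G_MQTT_PORT"]))
client.subscribe(os.environ["O5G_MQTT_TOPIC_SENSOR"])
client.loop_forever()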
This example uses InfluxDB as data storage, which needs to be configured with your access credentials. The following configuration steps are needed:
- Log into the InfluxDB Web UI
- Obtain the org name (shown below your username in the sidebar)
- Create a data bucket if you don't have one already in
Load Data -> Buckets
- Create an API access token in
Load Data -> API Tokens
. Copy and save this token immediately; you cannot access it later!
export O5G_INFLUX_SERVER=\nexport O5G_INFLUX_PORT=\nexport O5G_INFLUX_TOKEN=\nexport O5G_INFLUX_ORG=\nexport O5G_INFLUX_BUCKET=\n
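For reference, a minimal Python sketch (using the official influxdb-client package) that writes one value into the configured bucket could look like the following; the measurement and field names are illustrative, not the exact schema used by the DT:

# Minimal influxdb-client sketch: write a single predicted air-time value.
# Connection details come from the O5G_INFLUX_* environment variables above.
import os
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

url = f"http://{os.environ['O5G_INFLUX_SERVER']}:{os.environ['O5G_INFLUX_PORT']}"
with InfluxDBClient(url=url,
                    token=os.environ["O5G_INFLUX_TOKEN"],
                    org=os.environ["O5G_INFLUX_ORG"]) as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    point = (Point("prediction")                      # illustrative measurement name
             .tag("topic", "vgiot/dt/prediction")
             .field("air-remaining", 12.5))           # minutes of air left (example value)
    write_api.write(bucket=os.environ["O5G_INFLUX_BUCKET"], record=point)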
"},{"location":"user/examples/o5g/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Install Installs Open Modelica, Rust, Telegraf and the required pip dependencies Create Create FMU from Open Modelica file Execute Execute the example in the background tmux terminal session Terminate Terminate the tmux terminal session running in the background Clean Delete the temporary files"},{"location":"user/examples/o5g/index.html#run-the-example","title":"Run the example","text":""},{"location":"user/examples/o5g/index.html#install","title":"Install","text":"Run the install script by executing
lifecycle/install\n
This will install all the required apt and pip dependencies, as well as Open Modelica, Rust and Telegraf from their respective repositories.
"},{"location":"user/examples/o5g/index.html#create","title":"Create","text":"Run the create script by executing
lifecycle/create\n
This will compile the Modelica model to a Functional Mock-up Unit (FMU) for the correct platform.
"},{"location":"user/examples/o5g/index.html#exceute","title":"Exceute","text":"To run the Digital Twin execute
lifecycle/execute\n
This will start all the required components in a single tmux session called o5g
in the background. To view the running Digital Twin, attach to this tmux session by executing
tmux a -t o5g\n
To detach, press Ctrl-b
followed by d
.
The tmux session contains 4 components of the digital twin:
Panel location Purpose Top Left Sensor simulator generating random location and O2-level data Top Right Main Digital Twin receives the sensor data and calculates an estimate of how many minutes of air remain Bottom Left Telegraf to convert between different message formats, also displays all messages between components Bottom Right TeSSLa monitor raises an alarm if the remaining time is too low."},{"location":"user/examples/o5g/index.html#examine-the-results","title":"Examine the Results","text":"For additional mission awareness, we recommend utilising the Influx data visualisation. We provide a dashboard configuration in the file influx-dashoard.json. Log in to your Influx server (usually at port 8086) to import it. A screenshot of the dashboard is given here.
The data gets stored in o5g->prediction->air-remaining->37ae3e4fb3ea->true->vgiot/dt/prediction
variable of InfluxDB. In addition to importing the dashboard configuration given above, it is possible to create your own custom dashboards using the stored data.
"},{"location":"user/examples/o5g/index.html#terminate","title":"Terminate","text":"To stop the all components and close the tmux session execute
lifecycle/terminate\n
"},{"location":"user/examples/o5g/index.html#clean","title":"Clean","text":"To remove temoporary files created during execution
lifecycle/clean\n
"},{"location":"user/examples/opc-ua-waterplant/index.html","title":"Waste Water Plant with OPC-UA","text":""},{"location":"user/examples/opc-ua-waterplant/index.html#introduction","title":"Introduction","text":"Waste water treatment (WWT) plants must comply with substance and species concentration limits established by regulation in order to ensure the good quality of the water. This is usually done taking periodic samples that are analyzed in the laboratory. This means that plant operators do not have continuous information for making decisions and, therefore, operation setpoints are set to higher values than needed to guarantee water quality. Some of the processes involved in WWT plants consume a lot of power, thus adjusting setpoints could significantly reduce energy consumption.
"},{"location":"user/examples/opc-ua-waterplant/index.html#physical-twin-overview","title":"Physical Twin Overview","text":"This example demonstrates the communication between a physical ultraviolet (UV) disinfection process (the tertiary treatment of a WWT plant) and its digital twin, which is based on Computational Fluid Dynamics (CFD) and compartment models. The aim of this digital twin is to develop \"virtual sensors\" that provide continuous information that facilitates the decision making process for the plant operator.
The physical twin of the waste water plant is composed of an ultraviolet channel controlled by a PLC that regulates the power of the UV lamps needed to kill all the pathogens in the flow. The channel has 3 groups of UV lamps; therefore, the real channel (and its mathematical model) is subdivided into 7 zones: 4 correspond to zones without UV lamps (2 for the entrance and exit of the channel + 2 zones between UV lamps) and the 3 remaining for the UV lamps.
The dose to be applied (related to the power) changes according to the residence time (computed from the measured volume flow) and the UV intensity (measured by the intensity sensor).
The information of the volumetric flow and power (in the three parts of the channel) is transmitted to the PLC of the plant. Furthermore, the PLC works as an OPC UA server to send and receive data to and from an OPC UA client. Additionally, some sizing parameters and initial values are read from a spreadsheet filled in by the plant operator. In this case, the spreadsheet is an Open Office file (.ods) due to the software installed on the SCADA PC. Variables such as the initial concentration of disinfectant and pathogens are included, among others. Some values defined by the plant operator correspond to input signals that are not currently being measured, but are expected to be measured in the future.
"},{"location":"user/examples/opc-ua-waterplant/index.html#digital-twin-overview","title":"Digital Twin Overview","text":"The digital twin is a reduced model (developed in C)\u00a0that solves physical conservation laws (mass, energy and momentum), but simplifies details (geometry, mainly) to ensure real-time calculations and accurate results. The results are compared to the ones obtained by the CFD. C solver developed is used by the OpenModelica\u00a0model. OpenModelica converts\u00a0it into the FMI standard, to be integrated in the OPC UA Client (client-opcua.py).
"},{"location":"user/examples/opc-ua-waterplant/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"Asset Type Name of Asset Visibility Reuse in Other Examples Model Test_DTCONEDAR.mo private No Data configuration_freopcua.ods private No model_description.csv (generated by client-asyncua.py) private No Mock OPC UA Server: opcua-mock-server.py private No Tool OPC UA Client: client-opcua.py private No FMU builder: create-fmu.mos private No In this example, a dummy model representation of the plant is used, instead of the real model. The simplified model (with not the real equations) is developed in Open Modelica (Test_DTCONEDAR.mo). The FMU is generated from the Open Modelica interface to obtain the needed binaries to run the FMU. It is possible to run an FMU previously generated, however, to ensure that we are using the right binaries it is recommended to install Open Modelica Compiler and run script.mos
to build the FMU from the Modelica file Test_DTCONEDAR.mo
.
The FMU model description file (modelDescription.xml
file inside Test_DTCONEDAR.fmu
) has the information of the value references, configuration_freeopcua.ods has the information of the OPC-UA node IDs, and both have the variable name in common.
The client python script (client-opcua.py
) performs the following actions:
"},{"location":"user/examples/opc-ua-waterplant/index.html#input-data-variables","title":"Input Data Variables","text":"The configuration_freeopcua.ods date file is used for customizing the initial input data values used by the server.
"},{"location":"user/examples/opc-ua-waterplant/index.html#dt-config","title":"DT Config","text":"The config.json specifies the configuration parameters for the OPC UA client.
The following optional parameters can be modified (an illustrative config.json sketch is given after this list):
- stop_time
- step_size
- record = True, if we want to save the results of the simulation
- record_interval. Sometimes the simulation step_size is small and the size of the results file can be too big. For instance, if the simulation step_size is 0.01 seconds, we can increase the record_interval so as to reduce the result file size.
- record_variables: we can specify the list of variables that we want to record.
- enable_send = True, if we want to send results to the OPC UA Server.
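A minimal sketch of how these parameters could be assembled into config.json follows. The parameter names come from the list above, but the values and variable names are illustrative, and the real config.json may contain further required settings (such as the OPC UA connection details):

# Illustrative construction of config.json with the optional parameters above.
# Values and the recorded variable names are examples only.
import json

config = {
    "stop_time": 100.0,        # total simulated time
    "step_size": 0.01,         # co-simulation step size
    "record": True,            # save simulation results
    "record_interval": 1.0,    # record once per second instead of every step
    "record_variables": ["dose", "flow"],
    "enable_send": True,       # send results back to the OPC UA server
}

with open("config.json", "w") as fp:
    json.dump(config, fp, indent=2)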
"},{"location":"user/examples/opc-ua-waterplant/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Install Installs Open Modelica, Python 3.10 and the required pip dependencies Create Create FMU from Open Modelica file Execute Run OPC UA mock server and normal OPC-UA client Clean Delete the temporary files"},{"location":"user/examples/opc-ua-waterplant/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/opc-ua-waterplant\n
If required, change the execute permission of lifecycle scripts.
chmod +x lifecycle/*\n
Now, run the following scripts:
"},{"location":"user/examples/opc-ua-waterplant/index.html#install","title":"Install","text":"Installs Open Modelica, Python 3.10 and the required pip dependencies
lifecycle/install\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#create","title":"Create","text":"Create Test_DTCONEDAR.fmu
co-simulation model from the Test_DTCONEDAR.mo Open Modelica file.
lifecycle/create\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#execute","title":"Execute","text":"Start the mock OPC UA server in the background. Run the OPC UA client.
lifecycle/execute\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#clean","title":"Clean","text":"Remove the temporary files created by Open Modelica and output files generated by OPC UA client.
lifecycle/clean\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#references","title":"References","text":"More explanation about this example is available at:
Royo, L., Labar\u00edas, A., Arnau, R., G\u00f3mez, A., Ezquerra, A., Cilla, I., &\nD\u00edez-Anto\u00f1azas, L. (2023). Improving energy efficiency in the tertiary\ntreatment of Alguazas WWTP (Spain) by means of digital twin development\nbased on Physics modelling . Cambridge Open Engage.\n[doi:10.33774/coe-2023-1vjcw](https://doi.org/10.33774/coe-2023-1vjcw)\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#acknowledgements","title":"Acknowledgements","text":"The work on this example was done in a project subsidised thanks to the support grants for Agrupaci\u00f3n Empresarial Innovadora (AEI) of the Ministry of Industry, Trade and Tourism (MINCOTUR) with the aim of improving the competitiveness of small and medium-sized enterprises within the Recovery, Transformation and Resilience Plan (PRTR) financed by the Next Generation EU (NGEU) funds, with Grant Number: AEI-010500-2022b-196.
"},{"location":"user/examples/three-tank/index.html","title":"Three-Tank System Digital Twin","text":""},{"location":"user/examples/three-tank/index.html#overview","title":"Overview","text":"The three-tank system is a simple case study allows us to represent a system that is composed of three individual components that are coupled in a cascade as follows: The first tank is connected to the input of the second tank, and the output of the second tank is connected to the input of the third tank.
This example contains only the simulated components for demonstration purposes; therefore, there is no configuration for the connection with the physical system.
The three-tank system case study is managed using the DTManager
, which is packed as a jar library in the tools, and run from a java main file. The DTManager
uses Maestro as a slave for co-simulation, so it generates the output of the co-simulation.
The main file can be changed according to the application scope, i.e., the /workspace/examples/tools/three-tank/TankMain.java
can be manipulated to get a different result.
The /workspace/examples/models/three-tank/
folder contains the Linear.fmu
file, which is a non-realistic model of a tank with input and output, and the
file for the schema representation with Asset Administration Shell. The three instances use the same .fmu
file and the same schema due to being of the same object class. The DTManager
is in charge of reading the values from the co-simulation output.
"},{"location":"user/examples/three-tank/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/three-tank/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models, two tools, one data, and one script. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Model Linear.fmu Private No TankSystem.aasx Private No Tool DTManager-0.0.1-Maestro.jar (wraps Maestro) Common Yes maestro-2.3.0-jar-with-dependencies.jar (used by DTManager) Common Yes TankMain.java (main script) Private No Data outputs.csv Private No This DT has multiple configuration files. The coe.json and multimodel.json are used by Maestro tool. The tank1.conf, tank2.conf and tank3.conf are the config files for three different instances of one model (Linear.fmu).
"},{"location":"user/examples/three-tank/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute The DT Manager executes the three-tank digital twin and produces output in data/three-tank/output
directory Terminate Terminating the background processes and cleaning up the output"},{"location":"user/examples/three-tank/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital twins/three-tank\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/three-tank/index.html#create","title":"Create","text":"Installs Open Java Development Kit 11 and pip dependencies. Also creates DTManager
tool (DTManager-0.0.1-Maestro.jar) from source code.
lifecycle/create\n
"},{"location":"user/examples/three-tank/index.html#execute","title":"Execute","text":"Execute the three-tank digital twin using DTManager. DTManager in-turn runs the co-simulation using Maestro. Generates the co-simulation output.csv file at /workspace/examples/data/three-tank/output
.
lifecycle/execute\n
"},{"location":"user/examples/three-tank/index.html#terminate","title":"Terminate","text":"Stops the Maestro running in the background. Also stops any other jvm process started during execute phase.
lifecycle/terminate\n
"},{"location":"user/examples/three-tank/index.html#clean","title":"Clean","text":"Removes the output generated during execute phase.
lifecycle/terminate\n
"},{"location":"user/examples/three-tank/index.html#examining-the-results","title":"Examining the results","text":"Executing this Digital Twin will generate a co-simulation output, but the results can also be monitored from updating the/workspace/examples/tools/three-tank/TankMain.java
with a specific set of getAttributeValue
commands, such as shown in the code.
That main file enables the online execution of the Digital Twin and its internal components.
The output of the co-simulation is generated to the /workspace/examples/data/three-tank/output
folder.
In the default example, the co-simulation is run for 10 seconds in steps of 0.5 seconds. This can be modified for a longer period and different step size. The output stored in outputs.csv
contains the level, in/out flow, and leak values.
No data from the physical twin are generated/used.
"},{"location":"user/examples/three-tank/index.html#references","title":"References","text":"More information about the DT Manager is available at:
D. Lehner, S. Gil, P. H. Mikkelsen, P. G. Larsen and M. Wimmer,\n\"An Architectural Extension for Digital Twin Platforms to Leverage\nBehavioral Models,\" 2023 IEEE 19th International Conference on\nAutomation Science and Engineering (CASE), Auckland, New Zealand,\n2023, pp. 1-8, doi: 10.1109/CASE56687.2023.10260417.\n
"},{"location":"user/examples/water_tank_FI/index.html","title":"Water Tank Fault Injection","text":""},{"location":"user/examples/water_tank_FI/index.html#overview","title":"Overview","text":"This example shows a fault injection (FI) enabled digital twin (DT). A live DT is subjected to simulated faults received from the environment. The simulated faults is specified as part of DT configuration and can be changed for new instances of DTs.
In this co-simulation based DT, a watertank case-study is used; the co-simulation consists of a tank and a controller, whose goal is to keep the level of water in the tank between Level-1
and Level-2
. The faults are injected into the output of the water tank controller (Watertankcontroller-c.fmu) from 12 to 20 time units, such that the tank output is closed for a period of time, leading to the water level increasing in the tank beyond the desired level (Level-2
).
"},{"location":"user/examples/water_tank_FI/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/water_tank_FI/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_FI/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models watertankcontroller-c.fmu Private Yes singlewatertank-20sim.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The multimodelFI.json
and simulation-config.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
The faults are defined in wt_fault.xml.
"},{"location":"user/examples/water_tank_FI/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_FI/output directory Clean Clears run logs and outputs"},{"location":"user/examples/water_tank_FI/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/water_tank_FI\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/water_tank_FI/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 and pip dependencies. The pandas and matplotlib are the pip dependencies installated.
lifecycle/create\n
"},{"location":"user/examples/water_tank_FI/index.html#execute","title":"Execute","text":"Run the co-simulation. Generates the co-simulation output.csv file at /workspace/examples/data/water_tank_FI/output
.
lifecycle/execute\n
"},{"location":"user/examples/water_tank_FI/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_FI/output/plots/
.
lifecycle/analyze\n
"},{"location":"user/examples/water_tank_FI/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/water_tank_FI/output directory.
You can also view run logs in the /workspace/examples/digital_twins/water_tank_FI.
"},{"location":"user/examples/water_tank_FI/index.html#terminate-phase","title":"Terminate phase","text":"Clean up the temporary files and delete output plot
lifecycle/terminate\n
"},{"location":"user/examples/water_tank_FI/index.html#references","title":"References","text":"More details on this case-study can be found in the paper:
M. Frasheri, C. Thule, H. D. Macedo, K. Lausdahl, P. G. Larsen and\nL. Esterle, \"Fault Injecting Co-simulations for Safety,\"\n2021 5th International Conference on System Reliability and Safety (ICSRS),\nPalermo, Italy, 2021.\n
The fault-injection plugin is an extension to the Maestro co-orchestration engine that enables injecting inputs and outputs of FMUs in an FMI-based co-simulation with tampered values. More details on the plugin can be found in the fault injection git repository. The source code for this example is also in the same github repository in an example directory.
"},{"location":"user/examples/water_tank_FI_monitor/index.html","title":"Water Tank Fault Injection with NuRV Runtime Monitor","text":""},{"location":"user/examples/water_tank_FI_monitor/index.html#overview","title":"Overview","text":"This example shows a fault injection (FI) enabled digital twin (DT). A live DT is subjected to simulated faults received from the environment. The simulated faults is specified as part of DT configuration and can be changed for new instances of DTs. This version of the example is expanded with a monitor generated by NuRV. More information about NuRV is available here.
In this co-simulation based DT, a watertank case-study is used; the co-simulation consists of a tank and a controller, whose goal is to keep the level of water in the tank between Level-1
and Level-2
. The faults are injected into the output of the water tank controller (Watertankcontroller-c.fmu) from 12 to 20 time units, such that the tank output is closed for a period of time, leading to the water level increasing in the tank beyond the desired level (Level-2
). Additionally, a monitor is inserted in the simulation to check at runtime whether the level of the water stays below a maximum threshold.
"},{"location":"user/examples/water_tank_FI_monitor/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/water_tank_FI_monitor/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_FI_monitor/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models watertankcontroller-c.fmu Private Yes singlewatertank-20sim.fmu Private Yes m1.fmu Private No RtI.fmu Private Yes Specification m1.smv Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The multimodelFI.json
and simulation-config.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
The faults are defined in wt_fault.xml.
"},{"location":"user/examples/water_tank_FI_monitor/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro toolGenerates and compiles the monitor FMU Execute Produces and stores output in data/water_tank_FI_monitor/output directory Clean Clears run logs and outputs"},{"location":"user/examples/water_tank_FI_monitor/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/water_tank_FI_monitor\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/water_tank_FI_monitor/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 and pip dependencies. The pandas and matplotlib are the pip dependencies installated. The monitor FMU from the NuRV specification is generated and compiled.
lifecycle/create\n
"},{"location":"user/examples/water_tank_FI_monitor/index.html#execute","title":"Execute","text":"Run the co-simulation. Generates the co-simulation output.csv file at /workspace/examples/data/water_tank_FI_monitor/output
.
lifecycle/execute\n
"},{"location":"user/examples/water_tank_FI_monitor/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_FI_monitor/output/plots/
.
lifecycle/analyze\n
A sample plot is given here.
In the plot, four color-coded indicators are used to represent different values. The blue line shows the water tank level, while orange represents the control output and green indicates the monitor's verdict. A red dashed line serves as a reference point, marking the level checked by the monitor. As the water level exceeds this threshold, the monitor's verdict changes from unknown
(0
) to false
(2
).
"},{"location":"user/examples/water_tank_FI_monitor/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/water_tank_FI_monitor/output directory.
You can also view run logs in the /workspace/examples/digital_twins/water_tank_FI_monitor.
"},{"location":"user/examples/water_tank_FI_monitor/index.html#terminate-phase","title":"Terminate phase","text":"Clean up the temporary files and delete output plot
lifecycle/terminate\n
"},{"location":"user/examples/water_tank_FI_monitor/index.html#references","title":"References","text":"More details on this case-study can be found in the paper:
M. Frasheri, C. Thule, H. D. Macedo, K. Lausdahl, P. G. Larsen and\nL. Esterle, \"Fault Injecting Co-simulations for Safety,\"\n2021 5th International Conference on System Reliability and Safety (ICSRS),\nPalermo, Italy, 2021.\n
The fault-injection plugin is an extension to the Maestro co-orchestration engine that enables injecting inputs and outputs of FMUs in an FMI-based co-simulation with tampered values. More details on the plugin can be found in the fault injection git repository. The source code for this example is also in the same github repository in an example directory.
"},{"location":"user/examples/water_tank_swap/index.html","title":"Water Tank Model Swap","text":""},{"location":"user/examples/water_tank_swap/index.html#overview","title":"Overview","text":"This example shows multi-stage execution and dynamic reconfiguration of a digital twin (DT). Two features of DTs are demonstrated here:
- Fault injection into live DT
- Dynamic auto-reconfiguration of live DT
The co-simulation methodology is used to construct this DT.
"},{"location":"user/examples/water_tank_swap/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_swap/index.html#configuration-of-assets","title":"Configuration of assets","text":"This example uses four models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models Watertankcontroller-c.fmu Private Yes Singlewatertank-20sim.fmu Private Yes Leak_detector.fmu Private No Leak_controller.fmu Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes This DT has many configuration files. The DT is executed in two stages. There exist separate DT configuration files for each stage. The following table shows the configuration files and their purpose.
Configuration file name Execution Stage Purpose mm1.json stage-1 DT configuration wt_fault.xml, FaultInject.mabl stage-1 faults injected into DT during stage-1 mm2.json stage-2 DT configuration simulation-config.json Both stages Configuration for specifying DT execution time and output logs"},{"location":"user/examples/water_tank_swap/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_swap/output directory Analyze Process the co-simulation output and produce plots Clean Clears run logs, outputs and plots"},{"location":"user/examples/water_tank_swap/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/water_tank_swap\n
If required, change the permission of files you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/water_tank_swap/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 and pip dependencies. The matplotlib pip package is also installated.
lifecycle/create\n
"},{"location":"user/examples/water_tank_swap/index.html#execute","title":"Execute","text":"This DT has two-stage execution. In the first-stage, a co-simulation is executed. The Watertankcontroller-c.fmu and Singlewatertank-20sim.fmu models are used to execute the DT. During this stage, faults are injected into one of the models (Watertankcontroller-c.fmu) and the system performance is checked.
In the second-stage, another co-simulation is run in which three FMUs are used. The FMUs used are: watertankcontroller, singlewatertank-20sim, and leak_detector. There is an in-built monitor in the Maestro tool. This monitor is enabled during the stage and a swap condition is set at the beginning of the second-stage. When the swap condition is satisfied, the Maestro swaps out Watertankcontroller-c.fmu model and swaps in Leakcontroller.fmu model. This swapping of FMU models demonstrates the dynamic reconfiguration of a DT.
The end of execution phase generates the co-simulation output.csv file at /workspace/examples/data/water_tank_swap/output
.
lifecycle/execute\n
"},{"location":"user/examples/water_tank_swap/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_FI/output/plots/
.
lifecycle/analyze\n
"},{"location":"user/examples/water_tank_swap/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the workspace/examples/data/water_tank_swap/output directory.
You can also view run logs in the workspace/examples/digital_twins/water_tank_swap.
"},{"location":"user/examples/water_tank_swap/index.html#terminate-phase","title":"Terminate phase","text":"Clean up the temporary files and delete output plot
lifecycle/terminate\n
"},{"location":"user/examples/water_tank_swap/index.html#references","title":"References","text":"The complete source of this example is available on model swap github repository.
The runtime model (FMU) swap mechanism demonstrated by the experiment is detailed in the paper:
Ejersbo, Henrik, et al. \"fmiSwap: Run-time Swapping of Models for\nCo-simulation and Digital Twins.\" arXiv preprint arXiv:2304.07328 (2023).\n
The runtime reconfiguration of co-simulation by modifying the Functional Mockup Units (FMUs) used is further detailed in the paper:
Ejersbo, Henrik, et al. \"Dynamic Runtime Integration of\nNew Models in Digital Twins.\" 2023 IEEE/ACM 18th Symposium on\nSoftware Engineering for Adaptive and Self-Managing Systems\n(SEAMS). IEEE, 2023.\n
"},{"location":"user/servers/execution/runner/index.html","title":"Runner","text":"A utility service to manage safe execution of remote scripts / commands. User launches this from commandline and let the utility manage the commands to be executed.
The runner utility runs as a service and provides REST API interface to safely execute remote commands. Multiple runners can be active simultaneously on one computer. The commands are sent via the REST API and are executed on the computer with active runner.
"},{"location":"user/servers/execution/runner/index.html#install","title":"Install","text":""},{"location":"user/servers/execution/runner/index.html#npm-registry","title":"NPM Registry","text":"The package is available on npmjs.
Install the package with the following command:
sudo npm install -g @into-cps-association/runner\n
"},{"location":"user/servers/execution/runner/index.html#github-registry","title":"Github Registry","text":"The package is available in Github packages registry.
Set the registry and install the package with the following commands
sudo npm config set @into-cps-association:registry https://npm.pkg.github.com\nsudo npm install -g @into-cps-association/runner\n
The npm install command asks for username and password. The username is your Github username and the password is your Github personal access token. In order for the npm to download the package, your personal access token needs to have read:packages scope.
"},{"location":"user/servers/execution/runner/index.html#configure","title":"Configure","text":"The utility requires config specified in YAML format. The template configuration file is:
port: 5000\nlocation: 'script' #directory location of scripts\ncommands: #list of permitted scripts\n - create\n - execute\n - terminate\n
It is suggested that the configuration file be named as runner.yaml and placed in the directory in which the runner microservice is run.
The location
refers to the relative location of the scripts directory with respect to the location of runner.yaml file.
However, there is no limitation on either the configuration filename or the location
. The path to runner.yaml can either be relative or absolute path. However, the location
path is always relative path with respect to the path of runner.yaml file.
The commands must be executable. Please make sure that the commands have execute permission on Linux platforms.
"},{"location":"user/servers/execution/runner/index.html#create-commands","title":"Create Commands","text":"The runner requires commands / scripts to be run. These need to be placed in the location
specified in runner.yaml file. The location must be relative to the directory in which the runner microservice is being run.
"},{"location":"user/servers/execution/runner/index.html#use","title":"Use","text":"Display help.
$runner -h\nUsage: runner [options]\n\nRemote code execution for humans\n\nOptions:\n -v --version package version\n -c --config <string> runner config file specified in yaml format (default: \"runner.yaml\")\n -h --help display help\n
The config option is not mandatory. If it is not used, runner looks for runner.yaml in the directory from which it is being run. Once launched, the utility runs at the port specified in runner.yaml file.
runner #use runner.yaml of the present working directory\nrunner -c FILE-PATH #absolute or relative path to config file\nrunner --config FILE-PATH #absolute or relative path to config file\n
If launched on one computer, you can access the same at http://localhost:<port>
.
Access to the service on network is available at http://<ip or hostname>:<port>/
.
"},{"location":"user/servers/execution/runner/index.html#application-programming-interface-api","title":"Application Programming Interface (API)","text":"Three REST API methods are active. The route paths and the responses given for these two sources are:
REST API Route HTTP Method Return Value Comment localhost:port POST Returns the execution status of command Executes the command provided. Each invocation appends to array of commands executed so far. localhost:port GET Returns the execution status of the last command sent via POST request. localhost:port/history GET Returns the array of POST requests received so far."},{"location":"user/servers/execution/runner/index.html#post-request-to","title":"POST Request to /","text":"Executes a command. The command name given here must exist in location directory.
Valid HTTP RequestHTTP Response - Valid CommandHTTP Response - Inalid Command POST / HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n \"name\": \"<command-name>\"\n}\n
Connection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"status\": \"success\"\n}\n
Connection: close\nContent-Length: 28\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"status\": \"invalid command\"\n}\n
"},{"location":"user/servers/execution/runner/index.html#get-request-to","title":"GET Request to /","text":"Shows the status of the command last executed.
Valid HTTP RequestHTTP Response - Valid CommandHTTP Response - Inalid Command GET / HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n \"name\": \"<command-name>\"\n}\n
Connection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"name\": \"<command-name>\",\n \"status\": \"valid\",\n \"logs\": {\n \"stdout\": \"<output log of command>\",\n \"stderr\": \"<error log of command>\"\n }\n}\n
Connection: close\nContent-Length: 70\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"name\": \"<command-name\",\n \"status\": \"invalid\",\n \"logs\": {\n \"stdout\": \"\",\n \"stderr\": \"\"\n }\n}\n
"},{"location":"user/servers/execution/runner/index.html#get-request-to-history","title":"GET Request to /history","text":"Returns the array of POST requests received so far. Both valid and invalid commands are recorded in the history.
[\n {\n \"name\": \"valid command\"\n },\n {\n \"name\": \"valid command\"\n },\n {\n \"name\": \"invalid command\"\n }\n]\n
"},{"location":"user/servers/lib/LIB-MS.html","title":"Library Microservice","text":"The lib microservice is responsible for handling and serving the contents of library assets of the DTaaS platform. It provides API endpoints for clients to query, and fetch these assets.
This document provides instructions for using the library microservice.
Please see assets for a suggested storage conventions of your library assets.
Once the assets are stored in the library, they become available in user workspace.
"},{"location":"user/servers/lib/LIB-MS.html#application-programming-interface-api","title":"Application Programming Interface (API)","text":"The lib microservice application provides services at two end points:
GraphQL API Endpoint: http://foo.com/lib
HTTP Endpoint: http://foo.com/lib/files
"},{"location":"user/servers/lib/LIB-MS.html#http-protocol","title":"HTTP protocol","text":"Endpoint: localhost:PORT/lib/files
This option needs to be enabled with -H http.json
flag. The regular file upload and download options become available.
Here are sample screenshots.
"},{"location":"user/servers/lib/LIB-MS.html#graphql-protocol","title":"GraphQL protocol","text":"Endpoint: localhost:PORT/lib
The http://foo.com/lib
URL opens a graphql playground.
You can check the query schema and try sample queries here. The graphql queries need to be sent as HTTP POST requests and get responses.
The library microservice services two API calls:
- Provide a list of contents for a directory
- Fetch a file from the available files
The API calls are accepted over GraphQL and HTTP API end points. The format of the accepted queries are:
"},{"location":"user/servers/lib/LIB-MS.html#provide-list-of-contents-for-a-directory","title":"Provide list of contents for a directory","text":"To retrieve a list of files in a directory, use the following GraphQL query.
Replace path
with the desired directory path.
send requests to: https://foo.com/lib
"},{"location":"user/servers/lib/LIB-MS.html#graphql-query-for-list-of-contents","title":"GraphQL query for list of contents","text":"query {\n listDirectory(path: \"user1\") {\n repository {\n tree {\n blobs {\n edges {\n node {\n name\n type\n }\n }\n }\n trees {\n edges {\n node {\n name\n type\n }\n }\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#graphql-response-for-list-of-contents","title":"GraphQL response for list of contents","text":"{\n \"data\": {\n \"listDirectory\": {\n \"repository\": {\n \"tree\": {\n \"blobs\": {\n \"edges\": []\n },\n \"trees\": {\n \"edges\": [\n {\n \"node\": {\n \"name\": \"common\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"data\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"digital twins\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"functions\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"models\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"tools\",\n \"type\": \"tree\"\n }\n }\n ]\n }\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-request-for-list-of-contents","title":"HTTP request for list of contents","text":"POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n \"query\":\"query {\\n listDirectory(path: \\\"user1\\\") {\\n repository {\\n tree {\\n blobs {\\n edges {\\n node {\\n name\\n type\\n }\\n }\\n }\\n trees {\\n edges {\\n node {\\n name\\n type\\n }\\n }\\n }\\n }\\n }\\n }\\n}\"\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-response-for-list-of-contents","title":"HTTP response for list of contents","text":"HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 306\nContent-Type: application/json; charset=utf-8\nDate: Tue, 26 Sep 2023 20:26:49 GMT\nX-Powered-By: Express\n{\"data\":{\"listDirectory\":{\"repository\":{\"tree\":{\"blobs\":{\"edges\":[]},\"trees\":{\"edges\":[{\"node\":{\"name\":\"data\",\"type\":\"tree\"}},{\"node\":{\"name\":\"digital twins\",\"type\":\"tree\"}},{\"node\":{\"name\":\"functions\",\"type\":\"tree\"}},{\"node\":{\"name\":\"models\",\"type\":\"tree\"}},{\"node\":{\"name\":\"tools\",\"type\":\"tree\"}}]}}}}}}\n
"},{"location":"user/servers/lib/LIB-MS.html#fetch-a-file-from-the-available-files","title":"Fetch a file from the available files","text":"This query receives directory path and send the file contents to user in response.
To check this query, create a file files/user2/data/welcome.txt
with content of hello world
.
"},{"location":"user/servers/lib/LIB-MS.html#graphql-query-for-fetch-a-file","title":"GraphQL query for fetch a file","text":"query {\n readFile(path: \"user2/data/sample.txt\") {\n repository {\n blobs {\n nodes {\n name\n rawBlob\n rawTextBlob\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#graphql-response-for-fetch-a-file","title":"GraphQL response for fetch a file","text":"{\n \"data\": {\n \"readFile\": {\n \"repository\": {\n \"blobs\": {\n \"nodes\": [\n {\n \"name\": \"sample.txt\",\n \"rawBlob\": \"hello world\",\n \"rawTextBlob\": \"hello world\"\n }\n ]\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-request-for-fetch-a-file","title":"HTTP request for fetch a file","text":"POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 217\n{\n \"query\":\"query {\\n readFile(path: \\\"user2/data/welcome.txt\\\") {\\n repository {\\n blobs {\\n nodes {\\n name\\n rawBlob\\n rawTextBlob\\n }\\n }\\n }\\n }\\n}\"\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-response-for-fetch-a-file","title":"HTTP response for fetch a file","text":"HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Wed, 27 Sep 2023 09:17:18 GMT\nX-Powered-By: Express\n{\"data\":{\"readFile\":{\"repository\":{\"blobs\":{\"nodes\":[{\"name\":\"welcome.txt\",\"rawBlob\":\"hello world\",\"rawTextBlob\":\"hello world\"}]}}}}}\n
The path refers to the file path to look at: For example, user1 looks at files of user1; user1/functions looks at contents of functions/ directory.
"},{"location":"user/servers/lib/assets.html","title":"Reusable Assets","text":"The reusability of digital twin assets makes it easy for users to work with the digital twins. The reusability of assets is a fundamental feature of the platform.
"},{"location":"user/servers/lib/assets.html#kinds-of-reusable-assets","title":"Kinds of Reusable Assets","text":"The DTaaS software categorizes all the reusable library assets into six categories:
"},{"location":"user/servers/lib/assets.html#data","title":"Data","text":"The data sources and sinks available to a digital twins. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events, and commands which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.
"},{"location":"user/servers/lib/assets.html#models","title":"Models","text":"The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.
"},{"location":"user/servers/lib/assets.html#tools","title":"Tools","text":"The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platforms, i.e., an operating system, or virtual machines like Java virtual machine, or inside docker containers. The tools tend to be platform specific, making them less reusable than models. A tool can be packaged to run on a local or distributed virtual machine environments thus allowing selection of most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages are a pre-packaged combination of models and tools put together to create a ready to use Digital Twins.
"},{"location":"user/servers/lib/assets.html#functions","title":"Functions","text":"The functions responsible for pre- and post-processing of: data inputs, data outputs, control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.
"},{"location":"user/servers/lib/assets.html#digital-twins","title":"Digital Twins","text":"These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.
"},{"location":"user/servers/lib/assets.html#file-system-structure","title":"File System Structure","text":"Each user has their assets put into five different directories named above. In addition, there will also be common library assets that all users have access to. A simplified example of the structure is as follows:
workspace/\n data/\n data1/ (ex: sensor)\n filename (ex: sensor.csv)\n README.md\n data2/ (ex: turbine)\n README.md (remote source; no local file)\n ...\n digital_twins/\n digital_twin-1/ (ex: incubator)\n config (yaml and json)\n README.md (usage instructions)\n description.md (short summary of digital twin)\n lifecycle/ (directory containing lifecycle scripts)\n digital_twin-2/ (ex: mass spring damper)\n config (yaml and json)\n README.md (usage instructions)\n description.md (short summary of digital twin)\n lifecycle/ (directory containing lifecycle scripts)\n digital_twin-3/ (ex: model swap)\n config (yaml and json)\n README.md (usage instructions)\n description.md (short summary of digital twin)\n lifecycle/ (directory containing lifecycle scripts)\n ...\n functions/\n function1/ (ex: graphs)\n filename (ex: graphs.py)\n README.md\n function2/ (ex: statistics)\n filename (ex: statistics.py)\n README.md\n ...\n models/\n model1/ (ex: spring)\n filename (ex: spring.fmu)\n README.md\n model2/ (ex: building)\n filename (ex: building.skp)\n README.md\n model3/ (ex: rabbitmq)\n filename (ex: rabbitmq.fmu)\n README.md\n ...\n tools/\n tool1/ (ex: maestro)\n filename (ex: maestro.jar)\n README.md\n ...\n common/\n data/\n functions/\n models/\n tools/\n
Tip
The DTaaS is agnostic to the format of your assets. The only requirement is that they are files which can be uploaded on the Library page. Any directories can be compressed as one file and uploaded. You can decompress the file into a directory from a Terminal or xfce Desktop available on the Workbench page.
A recommended file system structure for storing assets is also available in DTaaS examples.
"},{"location":"user/servers/lib/assets.html#upload-assets","title":"Upload Assets","text":"Users can upload assets into their workspace using Library page of the website.
You can go into a directory and click on the upload button to upload a file or a directory into your workspace. This asset is then available in all the workbench tools you can use. You can also create new assets on the page by clicking on new drop down menu. This is a simple web interface which allows you to create text-based files. You need to upload other files using upload button.
The user workbench has the following services:
- Jupyter Notebook and Lab
- VS Code
- XFCE Desktop Environment available via VNC
- Terminal
Users can also bring their DT assets into user workspaces from outside using any of the above mentioned services. The developers using git repositories can clone from and push to remote git servers. Users can also use widely used file transfer protocols such as FTP, and SCP to bring the required DT assets into their workspaces.
"},{"location":"user/website/index.html","title":"DTaaS Website Screenshots","text":"This page contains a screenshot driven preview of the website serving the DTaaS software platform.
"},{"location":"user/website/index.html#visit-the-dtaas-application","title":"Visit the DTaaS Application","text":"Start off by simply visiting the website of the DTaaS instance for which you are a user.
"},{"location":"user/website/index.html#redirected-to-authorization-provider","title":"Redirected to Authorization Provider","text":"You will be redirected to the Gitlab Authorization for DTaaS.
Enter your email/username and password. If the email ID registered with DTaaS, is the same as your Google Login email ID, you can also opt to sign in using Google.
You will be redirected to the OAuth Application page.
"},{"location":"user/website/index.html#permit-dtaas-server-to-use-gitlab","title":"Permit DTaaS Server to Use Gitlab","text":"Click on Authorize to allow the OAuth application to access the information connected to your Gitlab account. This is a necessary step.
You are now logged into the DTaaS server. You will be redirected to the login page of the DTaaS website.
The DTaaS website uses an additional layer of security - the third-party authorization protocol known as OAuth. This protocol provides secure access to a DTaaS installation if users have a working active accounts at the selected OAuth service provider. This also uses Gitlab as OAuth provider.
You can see the Gitlab signin button. A click on this button takes you to Gitlab instance providing authorization for DTaaS. You will not need to sign in to Gitlab again, unless you have explicitly logged out of your Gitlab account.
"},{"location":"user/website/index.html#permit-dtaas-website-to-use-gitlab","title":"Permit DTaaS Website to Use Gitlab","text":"The DTaaS website needs your permission to use your Gitlab account for authorization. Click on Authorize button.
After successful authorization, you will be redirected to the Library page of the DTaaS website.
There are two icons on the top-right of the webpage. The hyperlink on question mark icon redirects to help page while the hyperlink on github icon redirects to github code repository.
"},{"location":"user/website/index.html#overview-of-menu-items","title":"Overview of menu items","text":"The menu is hidden by default. Only the icons of menu items are visible. You can click on the icon in the top-left corner of the page to see the menu.
There are three menu items:
Library: for management of reusable library assets. You can upload, download, create and modify new files on this page.
Digital Twins: for management of digital twins. You are presented with the Jupyter Lab page from which you can run the digital twins.
Workbench: Not all digital twins can be managed within Jupyter Lab. You have more tools at your disposal on this page.
"},{"location":"user/website/index.html#library-tabs-and-their-help-text","title":"Library tabs and their help text","text":"You can see five tabs each corresponding to one type of digital twin assets. Each tab provides help text to guide users on the type of an asset.
Functions The functions responsible for pre- and post-processing of: data inputs, data outputs, control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.
Data The data sources and sinks available to a digital twins. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events, and commands which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.
Models The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.
Tools The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platforms, i.e., an operating system, or virtual machines like Java virtual machine, or inside docker containers. The tools tend to be platform specific, making them less reusable than models. A tool can be packaged to run on a local or distributed virtual machine environments thus allowing selection of most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages are a pre-packaged combination of models and tools put together to create a ready to use Digital Twins.
Digital Twins These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.
There are two sub-tabs, namely private and common. The library assets in the private category are visible only to the logged-in user, while the library assets in the common category are available to all users.
Further explanation on the placement of reusable assets within each type and the underlying directory structure on the server is available on the assets page.
Note
You can upload assets (files) using the upload button.
The file manager is based on Jupyter Notebook, so all the tasks you can perform in Jupyter Notebook can be undertaken here.
"},{"location":"user/website/index.html#digital-twins-page","title":"Digital Twins page","text":"The digital twins page has three tabs and the central pane opens Jupyter lab. There are three tabs with helpful instructions on the suggested tasks you can undertake in the Create - Execute - Analyze life cycle phases of digital twin. You can see more explanation on the life cycle phases of digital twin.
Create Create digital twins from tools provided within user workspaces. Each digital twin will have one directory. It is suggested that users provide one bash shell script to run their digital twin. Users can create the required scripts and other files using the tools provided on the Workbench page.
Execute Digital twins are executed from within user workspaces. The given bash script gets executed from the digital twin directory. Terminal-based digital twins can be executed from VSCode and graphical digital twins can be executed from the VNC GUI. The results of execution can be placed in the data directory.
Analyze The analysis of digital twins requires running the digital twin script from the user workspace. The execution results placed within the data directory are processed by analysis scripts and the results are placed back in the data directory. These scripts can be executed from VSCode, and graphical results can be viewed from the VNC GUI. A minimal sketch of a digital twin run script is shown below.
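As an illustration only, a run script placed inside a digital twin directory might look like the following; the directory layout, file names and the invoked simulation command are assumptions, not something prescribed by DTaaS.
#!/bin/bash\n# Hypothetical run script for one digital twin; all paths and the invoked tool are placeholders\nset -e\nDT_DIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"   # directory of this digital twin\nmkdir -p \"$DT_DIR/data\"                       # execution results go into the data directory\npython3 \"$DT_DIR/simulate.py\" > \"$DT_DIR/data/results.csv\"\n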
The reusable assets (files) seen in the file manager are available in the Jupyter Lab. In addition, there is a git plugin installed in the Jupyter Lab using which you can link your files with the external git repositories.
"},{"location":"user/website/index.html#workbench","title":"Workbench","text":"The workbench page provides links to four integrated tools.
- Desktop
- VS Code
- Jupyter Lab
- Jupyter Notebook
The hyperlinks open in a new browser tab. The screenshots of the pages opened in a new browser tab are:
Terminal
The Terminal hyperlink does not exist on the workbench page. If you want a terminal, please use the tools dropdown in the Jupyter Notebook.
"},{"location":"user/website/index.html#digital-twins-preview-page","title":"Digital Twins Preview Page","text":"There is a fifth link on the Workbench page. It is Digital Twins Preview Page. It is an experimental feature at the moment. Clicking on the link opens a new page that is similar to the Digital Twins page.
There are two tabs, namely Manage and Execute.
Manage Read the complete description of digital twins. If necessary, users can delete a digital twin, removing it from the workspace with all its associated data. Users can also reconfigure the digital twin.
Execute Execute the Digital Twins using Gitlab CI/CD workflows.
"},{"location":"user/website/index.html#finally-logout","title":"Finally logout","text":"You have to close the browser in order to completely exit the DTaaS software platform.
"}]}
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"What is DTaaS?","text":"The Digital Twin as a Service (DTaaS) software platform is useful to Build, Use and Share digital twins (DTs).
Build: DTs are built on DTaaS using reusable DT assets available on the platform.
Use: Run your DTs on DTaaS.
Share: Share ready-to-use DTs with other users. It is also possible to share the services offered by one DT with other users.
There is an overview of DTaaS available in the form of slides, video, and feature walkthrough.
"},{"location":"index.html#license","title":"License","text":"This software is owned by The INTO-CPS Association and is available under the INTO-CPS License.
DTaaS software platform uses third-party open-source software. These software components have their own licenses.
"},{"location":"FAQ.html","title":"Frequently Asked Questions","text":""},{"location":"FAQ.html#abreviations","title":"Abreviations","text":"Term Full Form DT Digital Twin DTaaS Digital Twin as a Service PT Physical Twin"},{"location":"FAQ.html#general-questions","title":"General Questions","text":"What is DTaaS? DTaaS is software platform on which you can create and run digital twins. Please see the features page to get a sense of the things you can do in DaaS.
What is the scope and current capabilities of DTaaS? - DTaaS is a web based interface to allow you to invoke various tools related to work you want to perform with one or more DTs.
- DTaaS permits users to run DTs in their private workspaces. These user workspaces are based on Ubuntu 20.04 Operating system.
- DTaaS can help you create reusable DT assets only if DT asset authoring tools can work in Ubuntu 20.04 xfce desktop environment.
- DTs are just executables, as far as DTaaS is concerned. Users are not constrained to work with DTs in a certain way. The DTaaS suggests creation of DTs from reusable assets and provides a suggestive structure for DTs. The examples provide more insight into DTaaS way of working with DTs. But this suggested workflow is not binding on the users.
- DTs can be run as services with a REST API from within a user workspace, which can help with service-level DT composition.
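As a purely illustrative sketch, a DT exposed as a small HTTP service inside a workspace could be started and queried as follows; the script name, port and endpoint are hypothetical and not part of DTaaS.
./run_dt_service.sh --port 5000 &   # hypothetical script exposing a DT as a REST service\ncurl http://localhost:5000/state     # another DT or user composing on top of this service\n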
What can not be done inside DTaaS? - DTaaS as such won't help you to install DTs that you get from elsewhere.
- The current user interface of the DTaaS web application is heavily reliant on the use of Jupyter Lab and Notebook. The Digital Twins page has Create / Execute / Analyze sections, but all of them point to the Jupyter Lab web interface. The functionality of these pages is still under development.
- DTaaS does not yet have DevOps like automation capabilities.
Is there any fundamental difference between commercial solutions like Ansys Twin Builder and DTaaS? Commercial DT platforms like Ansys Twin Builder provide tight integration between models, simulation and sensors. This leads to fewer choices in DT design and implementation. In addition, there is the limitation of vendor lock-in. On the other hand, DTaaS lets users separate a DT into reusable assets and combine these assets in a flexible way.
Do you provide licensed software like Matlab? Proprietary and commercially licensed software is not available by default on the software platform. But users have private workspaces which are based on Linux-based xfce Desktop environment. Users can install proprietary and commercially licensed software in their workspaces. Please see a screencast of using Matlab Simulink within the DTaaS software. The licensed software installed by one user is not available to another user.
"},{"location":"FAQ.html#digital-twin-assets","title":"Digital Twin Assets","text":"Can DTaaS be used to create new DT assets? The core feature of DTaaS software is to help users create DTs from assets already available in the library. However, it is possible for users to take advantage of services available in their workspace to install asset authoring tools in their own workspace. These authoring tools can then be used to create and publish new assets. User workspaces are private and are not shared with other users. Thus any licensed software tools installed in their workspace is only available to them.
"},{"location":"FAQ.html#digital-twin-models","title":"Digital Twin Models","text":"Can DTaaS create new DT models? DTaaS is not a model creation tool. You can put model creation tool inside DTaaS and create new models. The DTaaS itself does not create digital twin models but it can help users create digital twin models. You can run Linux desktop / terminal tools inside the DTaaS. So you can create models inside DTaaS and run them using tools that can run in Linux. The Windows only tools can not run in DTaaS.
How can DTaaS help to design a geometric model? Does it support 3D modeling and simulation? Well, DTaaS by itself does not produce any models. DTaaS only provides a platform and an ecosystem of services to facilitate digital twins to be run as services. Since each user has a Linux OS at their disposal, they can also run digital twins that have a graphical interface. In summary, DTaaS is neither a modeling nor a simulation tool. If you need these kinds of tools, you need to bring them onto the platform. For example, if you need Matlab for your work, you need to bring the licensed Matlab software.
Can DTaaS support only the information models (or behavioral models) or some other kind of models? The DTaaS as such is agnostic to the kind of models you use. DTaaS can run all kinds of models. This includes behavioral and data models. As long as you have models and the matching solvers that can run in Linux OS, you are good to go in DTaaS. In some cases, models and solvers (tools) are bundled together to form monolithic DTs. The DTaaS does not limit you from running such DTs as well. DTaaS does not provide dedicated solvers. But if you can install a solver in your workspace, then you don't need the platform to provide one.
Does it support XML-based representation and ontology representation? Currently No. We are looking for users needing this capability. If you have concrete requirements and an example, we can discuss a way of realizing it in DTaaS.
"},{"location":"FAQ.html#communication-between-physical-twin-and-digital-twin","title":"Communication Between Physical Twin and Digital Twin","text":"How can DTaaS control the physical entity? Which technologies it uses for controlling the physical world? At a very abstract level, there is a communication from physical entity to digital entity and back to physical entity. How this communication should happen is decided by the person designing the digital entity. The DTaaS can provide communication services that can help you do this communication with relative ease. You can use InfluxDB, RabbitMQ and Mosquitto services hosted on DTaaS for two communication between digital and physical entities.
How would you measure a physical entity like shape, size, weight, structure, chemical attributes etc. using DTaaS? Any specific technology used in this case? The real measurements are done at the physical twin and are then communicated to the digital twin. A digital twin platform like DTaaS can only facilitate the communication of these measurements from the physical twin. The DTaaS provides InfluxDB, RabbitMQ and Mosquitto services for this purpose. These three are probably the most widely used services for digital twin communication. Having said that, DTaaS allows you to utilize other communication technologies and services hosted elsewhere on the Internet.
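As a sketch only, a physical twin could publish a measurement to the Mosquitto broker hosted on DTaaS, and a digital twin could subscribe to it; the hostname, port and topic below are placeholders.
mosquitto_pub -h foo.com -p 1883 -t \"pt/device1/temperature\" -m \"21.3\"   # physical twin side\nmosquitto_sub -h foo.com -p 1883 -t \"pt/device1/temperature\"              # digital twin side\n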
How can real-time data differ from static data and what is the procedure to identify dynamic data? Is there any UI or specific tool used here? DTaaS cannot understand the static or dynamic nature of data. It can facilitate storing names, units and any other text description of interesting quantities (weight of batter, voltage output etc). It can also store the data being sent by the physical twin. The distinction between static and dynamic data needs to be made by the user. Only the metadata can reveal more information about the nature of the data. A tool can probably help in very specific cases, but you need metadata. If there is a human being making this distinction, then the need for metadata goes down but does not completely go away. In some of the DT platforms supported by manufacturers, there is a tight integration between data and model. In this case, the tool itself is taking care of the metadata. The DTaaS is a generic platform which can support execution of digital twins. If a tool can be executed on a Linux desktop / commandline, the tool can be supported within DTaaS. The tool (ex. Matlab) itself can take care of the metadata requirements.
"},{"location":"FAQ.html#data-management","title":"Data Management","text":"Can DTaaS collect data directly from sensors? Yes via platform services.
Does DTaaS support data collection from different sources like hardware, software and network? Is there any user interface or any tracking instruments used for data collection? The DTaaS provides InfluxDB, RabbitMQ, MQTT and MongoDB services. Both the physical twin and digital twin can utilize these protocols for communication. The IoT (time-series) data can be collected using InfluxDB and MQTT broker services. There is a user interface for InfluxDB which can be used to analyze the data collected. Users can also manually upload their data files into DTaaS.
Is DTaaS able to transmit data to cloud in real time? Yes via platform services.
Which transmission protocol does DTaaS allow? InfluxDB, RabbitMQ, MQTT and anything else that can be used from Cloud service providers.
Does DTaaS support multisource information and combined multi-sensor input data? Can it provide analysis and decision-supporting inferences? You can store information from multiple sources. The existing InfluxDB service hosted on DTaaS already has a dedicated Influx / Flux query language for doing sensor fusion, analysis and inferences.
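For illustration, such a Flux query could be run from the influx CLI inside a user workspace; the bucket and measurement names here are assumptions, not defaults of DTaaS.
influx query 'from(bucket: \"sensors\") |> range(start: -1h) |> filter(fn: (r) => r._measurement == \"temperature\") |> mean()'\n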
Which kinds of visualization technologies DTaaS can support (e.g. graphical, geometry, image, VR/AR representation)? Graphical, geometric and images. If you need specific licensed software for the visualization, you will have to bring the license for it. DTaaS does not support AR/VR.
"},{"location":"FAQ.html#platform-native-services-on-dtaas-platform","title":"Platform Native Services on DTaaS Platform","text":"Is DTaaS able to detect the anomalies about-to-fail components and prescribe solutions? This is the job of a digital twin. If you have a ready to use digital twin that does the job, DTaaS allows others to use your solution.
"},{"location":"FAQ.html#comparison-with-other-dt-platforms","title":"Comparison with other DT Platforms","text":"All the DT platforms seem to provide different features. Is there a comparison chart? Here is a qualitative comparison of different DT integration platforms:
Legend: high performance (H), mid performance (M) and low performance (L)
DT Platforms License DT Development Process Connectivity Security Processing power, performance and Scalability Data Storage Visualization Modeling and Simulation Microsoft Azure DT Commercial Cloud H H H M H H H AWS IOT Greengrass Open source commercial H H H M H H H Eclipse Ditto Open source M H M H H L L Asset Administration Shell Open source H H L H M L M PTC Thingworx Commercial H H H H H M M GE Predix Commercial M H H M L M L AU's DTaaS Open source H H L L M M M Adopted by Tanusree Roy from Table 4 and 5 of the following paper.
Ref: Naseri, F., Gil, S., Barbu, C., Cetkin, E., Yarimca, G., Jensen, A. C., ... & Gomes, C. (2023). Digital twin of electric vehicle battery systems: Comprehensive review of the use cases, requirements, and platforms. Renewable and Sustainable Energy Reviews, 179, 113280.
All the comparisons between DT platforms seems so confusing. Why? The fundamental confusion comes from the fact that different DT platforms (Azure DT, GE Predix) provide different kind of DT capabilities. You can run all kinds of models natively in GE Predix. In fact you can run models even next to (on) PTs using GE Predix. But you cannot natively do that in Azure DT service. You have to do the leg work of integrating with other Azure services or third-party services to get the kind of capabilities that GE Predix natively provides in one interface. The takeaway is that we pick horses for the courses.
"},{"location":"FAQ.html#gdpr-concerns","title":"GDPR Concerns","text":"Does your platform adhere to GDPR compliance standards? If so, how? The DTaaS software platform does not store any personal information of users. It only stores username to identify users and these usernames do not contain enough information to deduce the true identify of users.
Which security measures are deployed? How is data encrypted (if it exists)? The default installation requires an HTTPS-terminating reverse proxy server between the user and the DTaaS software installation. The administrators of the DTaaS software can also install HTTPS certificates into the application. The codebase can serve the application over HTTPS, and users also have the option of installing their own certificates obtained from certificate authorities such as LetsEncrypt.
What security measures does your cloud provider offer? The current installation of DTaaS software runs on Aarhus University servers. The university network offers firewall access control to servers so that only permitted user groups have access to the network and physical access to the server.
How is user access controlled and authenticated? There is a two-level authorization mechanism in place in each default installation of DTaaS. The first level is HTTP basic authorization over a secure HTTPS connection. The second level is the OAuth PKCE authorization flow for each user. The OAuth authorization is provided by a Gitlab instance. The DTaaS does not store the account and authorization information of users.
Does your platform manage personal data? How is data classified and tagged based on its sensitivity? Who has access to the critical data? The platform does not store personal data of users.
How are identities and roles managed within the platform? There are two roles for users on the platform. One is the administrator and the other one is the user. The user roles are managed by the administrator.
"},{"location":"LICENSE.html","title":"License","text":"--- Start of Definition of INTO-CPS Association Public License ---
/*
-
This file is part of the INTO-CPS Association.
-
Copyright (c) 2017-CurrentYear, INTO-CPS Association (ICA),
- c/o Peter Gorm Larsen, Aarhus University, Department of Engineering,
-
Finlandsgade 22, 8200 Aarhus N, Denmark.
-
All rights reserved.
-
THIS PROGRAM IS PROVIDED UNDER THE TERMS OF GPL VERSION 3 LICENSE OR
- THIS INTO-CPS ASSOCIATION PUBLIC LICENSE (ICAPL) VERSION 1.0.
- ANY USE, REPRODUCTION OR DISTRIBUTION OF THIS PROGRAM CONSTITUTES
- RECIPIENT'S ACCEPTANCE OF THE INTO-CPS ASSOCIATION PUBLIC LICENSE OR
-
THE GPL VERSION 3, ACCORDING TO RECIPIENTS CHOICE.
-
The INTO-CPS tool suite software and the INTO-CPS Association
- Public License (ICAPL) are obtained from the INTO-CPS Association, either
- from the above address, from the URLs: http://www.into-cps.org or
- in the INTO-CPS tool suite distribution.
- GNU version 3 is obtained from:
-
http://www.gnu.org/copyleft/gpl.html.
-
This program is distributed WITHOUT ANY WARRANTY; without
- even the implied warranty of MERCHANTABILITY or FITNESS
- FOR A PARTICULAR PURPOSE, EXCEPT AS EXPRESSLY SET FORTH
- IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF
-
THE INTO-CPS ASSOCIATION PUBLIC LICENSE.
-
See the full ICAPL conditions for more details.
*/
--- End of INTO-CPS Association Public License Header ---
The ICAPL is a public license for the INTO-CPS tool suite with three modes/alternatives (GPL, ICA-Internal-EPL, ICA-External-EPL) for use and redistribution, in source and/or binary/object-code form:
-
GPL. Any party (member or non-member of the INTO-CPS Association) may use and redistribute INTO-CPS tool suite under GPL version 3.
-
Silver Level members of the INTO-CPS Association may also use and redistribute the INTO-CPS tool suite under ICA-Internal-EPL conditions.
-
Gold Level members of the INTO-CPS Association may also use and redistribute The INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL conditions.
Definitions of the INTO-CPS Association Public license modes:
-
GPL = GPL version 3.
-
ICA-Internal-EPL = These INTO-CPA Association Public license conditions together with Internally restricted EPL, i.e., EPL version 1.0 with the Additional Condition that use and redistribution by a member of the INTO-CPS Association is only allowed within the INTO-CPS Association member's own organization (i.e., its own legal entity), or for a member of the INTO-CPS Association paying a membership fee corresponding to the size of the organization including all its affiliates, use and redistribution is allowed within/between its affiliates.
-
ICA-External-EPL = These INTO-CPA Association Public license conditions together with Externally restricted EPL, i.e., EPL version 1.0 with the Additional Condition that use and redistribution by a member of the INTO-CPS Association, or by a Licensed Third Party Distributor having a redistribution agreement with that member, to parties external to the INTO-CPS Association member\u2019s own organization (i.e., its own legal entity) is only allowed in binary/object-code form, except the case of redistribution to other members the INTO-CPS Association to which source is also allowed to be distributed.
[This has the consequence that an external party who wishes to use the INTO-CPS Association in source form together with its own proprietary software in all cases must be a member of the INTO-CPS Association].
In all cases of usage and redistribution by recipients, the following conditions also apply:
a) Redistributions of source code must retain the above copyright notice, all definitions, and conditions. It is sufficient if the ICAPL Header is present in each source file, if the full ICAPL is available in a prominent and easily located place in the redistribution.
b) Redistributions in binary/object-code form must reproduce the above copyright notice, all definitions, and conditions. It is sufficient if the ICAPL Header and the location in the redistribution of the full ICAPL are present in the documentation and/or other materials provided with the redistribution, if the full ICAPL is available in a prominent and easily located place in the redistribution.
c) A recipient must clearly indicate its chosen usage mode of ICAPL, in accompanying documentation and in a text file ICA-USAGE-MODE.txt, provided with the distribution.
d) Contributor(s) making a Contribution to the INTO-CPS Association thereby also makes a Transfer of Contribution Copyright. In return, upon the effective date of the transfer, ICA grants the Contributor(s) a Contribution License of the Contribution. ICA has the right to accept or refuse Contributions.
Definitions:
\"Subsidiary license conditions\" means:
The additional license conditions depending on the by the recipient chosen mode of ICAPL, defined by GPL version 3.0 for GPL, and by EPL for ICA-Internal-EPL and ICA-External-EPL.
\"ICAPL\" means:
INTO-CPS Association Public License version 1.0, i.e., the license defined here (the text between \"--- Start of Definition of INTO-CPS Association Public License ---\" and \"--- End of Definition of INTO-CPS Association Public License ---\", or later versions thereof.
\"ICAPL Header\" means:
INTO-CPS Association Public License Header version 1.2, i.e., the text between \"--- Start of Definition of INTO-CPS Association Public License ---\" and \"--- End of INTO-CPS Association Public License Header ---, or later versions thereof.
\"Contribution\" means:
a) in the case of the initial Contributor, the initial code and documentation distributed under ICAPL, and
b) in the case of each subsequent Contributor: i) changes to the INTO-CPS tool suite, and ii) additions to the INTO-CPS tool suite;
where such changes and/or additions to the INTO-CPS tool suite originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the INTO-CPS tool suite by such Contributor itself or anyone acting on such Contributor's behalf.
For Contributors licensing the INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL conditions, the following conditions also hold:
Contributions do not include additions to the distributed Program which: (i) are separate modules of software distributed in conjunction with the INTO-CPS tool suite under their own license agreement, (ii) are separate modules which are not derivative works of the INTO-CPS tool suite, and (iii) are separate modules of software distributed in conjunction with the INTO-CPS tool suite under their own license agreement where these separate modules are merged with (weaved together with) modules of The INTO-CPS tool suite to form new modules that are distributed as object code or source code under their own license agreement, as allowed under the Additional Condition of internal distribution according to ICA-Internal-EPL and/or Additional Condition for external distribution according to ICA-External-EPL.
\"Transfer of Contribution Copyright\" means that the Contributors of a Contribution transfer the ownership and the copyright of the Contribution to the INTO-CPS Association, the INTO-CPS Association Copyright owner, for inclusion in the INTO-CPS tool suite. The transfer takes place upon the effective date when the Contribution is made available on the INTO-CPS Association web site under ICAPL, by such Contributors themselves or anyone acting on such Contributors' behalf. The transfer is free of charge. If the Contributors or the INTO-CPS Association so wish, an optional Copyright transfer agreement can be signed between the INTO-CPS Association and the Contributors.
\"Contribution License\" means a license from the INTO-CPS Association to the Contributors of the Contribution, effective on the date of the Transfer of Contribution Copyright, where the INTO-CPS Association grants the Contributors a non-exclusive, world-wide, transferable, free of charge, perpetual license, including sublicensing rights, to use, have used, modify, have modified, reproduce and or have reproduced the contributed material, for business and other purposes, including but not limited to evaluation, development, testing, integration and merging with other software and distribution. The warranty and liability disclaimers of ICAPL apply to this license.
\"Contributor\" means any person or entity that distributes (part of) the INTO-CPS tool chain.
\"The Program\" means the Contributions distributed in accordance with ICAPL.
\"The INTO-CPS tool chain\" means the Contributions distributed in accordance with ICAPL.
\"Recipient\" means anyone who receives the INTO-CPS tool chain under ICAPL, including all Contributors.
\"Licensed Third Party Distributor\" means a reseller/distributor having signed a redistribution/resale agreement in accordance with ICAPL and the INTO-CPS Association Bylaws, with a Gold Level organizational member which is not an Affiliate of the reseller/distributor, for distributing a product containing part(s) of the INTO-CPS tool suite. The Licensed Third Party Distributor shall only be allowed further redistribution to other resellers if the Gold Level member is granting such a right to it in the redistribution/resale agreement between the Gold Level member and the Licensed Third Party Distributor.
\"Affiliate\" shall mean any legal entity, directly or indirectly, through one or more intermediaries, controlling or controlled by or under common control with any other legal entity, as the case may be. For purposes of this definition, the term \"control\" (including the terms \"controlling,\" \"controlled by\" and \"under common control with\") means the possession, direct or indirect, of the power to direct or cause the direction of the management and policies of a legal entity, whether through the ownership of voting securities, by contract or otherwise.
NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF ICAPL, THE INTO-CPS ASSOCIATION IS PROVIDED ON AN \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the INTO-CPS tool suite and assumes all risks associated with its exercise of rights under ICAPL , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF ICAPL, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE INTO-CPS TOOL SUITE OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
A Contributor licensing the INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL may choose to distribute (parts of) the INTO-CPS tool suite in object code form under its own license agreement, provided that:
a) it complies with the terms and conditions of ICAPL; or for the case of redistribution of the INTO-CPS tool suite together with proprietary code it is a dual license where the INTO-CPS tool suite parts are distributed under ICAPL compatible conditions and the proprietary code is distributed under proprietary license conditions; and
b) its license agreement: i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; iii) states that any provisions which differ from ICAPL are offered by that Contributor alone and not by any other party; and iv) states from where the source code for the INTO-CPS tool suite is available, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
When the INTO-CPS tool suite is made available in source code form:
a) it must be made available under ICAPL; and
b) a copy of ICAPL must be included with each copy of the INTO-CPS tool suite.
c) a copy of the subsidiary license associated with the selected mode of ICAPL must be included with each copy of the INTO-CPS tool suite.
Contributors may not remove or alter any copyright notices contained within The INTO-CPS tool suite.
If there is a conflict between ICAPL and the subsidiary license conditions, ICAPL has priority.
This Agreement is governed by the laws of Denmark. The place of jurisdiction for all disagreements related to this Agreement, is Aarhus, Denmark.
The EPL 1.0 license definition has been obtained from: http://www.eclipse.org/legal/epl-v10.html. It is also reproduced in the INTO-CPS distribution.
The GPL Version 3 license definition has been obtained from http://www.gnu.org/copyleft/gpl.html. It is also reproduced in the INTO-CPS distribution.
--- End of Definition of INTO-CPS Association Public License ---
"},{"location":"PUBLISH.html","title":"Project Documentation","text":"This file contains instructions for creation, compilation and publication of project documentation.
The documentation system is based on Material for Mkdocs. The documentation is generated based on the configuration files:
- mkdocs.yml: used for generating online documentation which is hosted on the web
- mkdocs-github.yml: used for generating documentation in github actions
Install Mkdocs using the following command.
pip install -r docs/requirements.txt\n
"},{"location":"PUBLISH.html#fix-linting-errors","title":"Fix Linting Errors","text":"This project uses markdownlint linter tool for identifying the formatting issues in markdown files. Run
mdl docs\n
from top-directory of the project and fix any identified issues. This needs to be done before committing changes to the documentation.
"},{"location":"PUBLISH.html#create-documentation","title":"Create documentation","text":"The document generation pipeline can generate both html and pdf versions of documentation.
The generation of pdf version of documentation is controlled via a shell variable.
export MKDOCS_ENABLE_PDF_EXPORT=0 #disables generation of pdf document\nexport MKDOCS_ENABLE_PDF_EXPORT=1 #enables generation of pdf document\n
The mkdocs
utility allows for live editing of documentation on the developer computer.
You can add and edit the markdown files in docs/
directory to update the documentation. There is a facility to check the status of your documentation by using:
mkdocs serve --config-file mkdocs.yml\n
"},{"location":"PUBLISH.html#publish-documentation","title":"Publish documentation","text":"You can compile and place the html version of documentation on the webpage-docs
branch of the codebase.
export MKDOCS_ENABLE_PDF_EXPORT=1 #enable generation of pdf document\nsource script/docs.sh [version]\n
The command takes an optional version parameter. This version parameter is needed for making a release. Otherwise, the documentation gets published with the latest version tag. This command makes a new commit on webpage-docs
branch. You need to push the branch to upstream.
git push webpage-docs\n
The github pages system serves the project documentation from this branch.
"},{"location":"bugs.html","title":"Few issues in the Software","text":"If you find a bug, please open an issue
"},{"location":"bugs.html#third-party-software","title":"Third-Party Software","text":"The explanation given below corresponds to the bugs you may face from third party software included in DTaaS. Known issues are listed below.
"},{"location":"bugs.html#ml-workspace","title":"ML Workspace","text":" - the docker container might down a bit after two weeks. The only known solution is to restart the docker container. You don't need to restart the complete DTaaS platform, restart of the docker container of ml-workspace is sufficient.
- the terminal tool doesn't seem to have the ability to refresh itself. If there is an issue, the only solution is to close and reopen the terminal from \"open tools\" drop down of notebook
- the terminal app does not show at all after some time: the terminal always comes up if it is opened from the drop-down menu of the Jupyter Notebook, but not via a direct link.
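A minimal restart sketch, assuming the workspace container name contains ml-workspace; the actual container name depends on your installation.
docker ps --filter \"name=ml-workspace\"   # find the workspace container; the name filter is an assumption\ndocker restart ml-workspace-user1         # restart only that container (placeholder name)\n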
"},{"location":"bugs.html#gitlab","title":"Gitlab","text":" - The gilab oauth authorization service does not have a way to sign out of a third-party application. Even if you sign out of DTaaS, the gitlab still shows user as signed in. The next time you click on the sign in button on the DTaaS page, user is not shown the login page. Instead user is directly taken to the Library page. So close the brower window after you are done. Another way to overcome this limitation is to open your gitlab instance (
https://gitlab.foo.com
) and signout from there. Thus user needs to sign out of two places, namely DTaaS and gitlab, in order to completely exit the DTaaS application.
"},{"location":"thanks.html","title":"Contributors","text":"code contributors
"},{"location":"thanks.html#users","title":"Users","text":"Cl\u00e1udio \u00c2ngelo Gon\u00e7alves Gomes, Dmitri Tcherniak, Elif Ecem Bas, Farshid Naseri, Giuseppe Abbiati, Hannes Iven, Hao Feng, Henrik Ejersbo, Lars Vosteen, Lei Zhao, Mirgita Frasheri, Morten Haahr Kristensen, Neena Goveas, Tanusree Roy, Santiago Gil Arboleda, Swarup Mahato, Valdemar Tang
"},{"location":"thanks.html#example-contributors","title":"Example Contributors","text":"Example Name Contributors Mass Spring Damper Prasad Talasila Water Tank Fault Injection Henrik Ejersbo and Mirgita Frasheri Water Tank Model Swap Henrik Ejersbo and Mirgita Frasheri Desktop Robotti with RabbitMQ Mirgita Frasheri Water Treatment Plant and OPC-UA Lucia Royo and Alejandro Labarias Three Water Tanks with DT Manager Framework Santiago Gil Arboleda Flex-Cell with Two Industrial Robots Santiago Gil Arboleda Incubator Morten Haahr Kristensen Firefighters in Emergency Environments Lars Vosteen and Hannes Iven Mass Spring Damper with NuRV Runtime Monitor Alberto Bonizzi Incubator with NuRV Runtime Monitor Alberto Bonizzi and Morten Haahr Kristensen Incubator with NuRV Runtime Monitor Service Valdemar Tang Water Tank Fault Injection with NuRV Runtime Monitor Alberto Bonizzi Incubator Co-Simulation with NuRV Runtime Monitor FMU Morten Haahr Kristensen Incubator with NuRV Runtime Monitor FMU as Service Valdemar Tang and Morten Haahr Kristensen Incubator with NuRV Runtime Monitor as Service Morten Haahr Kristensen and Valdemar Tang"},{"location":"thanks.html#documentation","title":"Documentation","text":" - Talasila, P., Gomes, C., Mikkelsen, P. H., Arboleda, S. G., Kamburjan, E., & Larsen, P. G. (2023). Digital Twin as a Service (DTaaS): A Platform for Digital Twin Developers and Users arXiv preprint arXiv:2305.07244.
- Astitva Sehgal for developer and example documentation.
- Tanusree Roy and Farshid Naseri for asking interesting questions that ended up in FAQs.
"},{"location":"third-party.html","title":"Third Party Software","text":"The DTaaS software platform uses many third-party software. These software components have their own licenses.
"},{"location":"third-party.html#user-installations","title":"User Installations","text":"The list of software included with DTaaS installation scripts are:
Software Package Usage License docker v24.0 mandatory Apache 2.0 ml-workspace-minimal v0.13 mandatory Apache 2.0 nodejs v20.10 mandatory Custom - Modified MIT npm v10.2 mandatory Artistic License 2.0 serve mandatory MIT Tr\u00e6fik v2.10 mandatory MIT yarn v1.22 mandatory BSD 2-Clause eclipse-mosquitto v2 optional Eclipse Public License-2.0 gitlab-ce v16.4 optional MIT Grafana v10.1 optional GNU Affero General Public (AGPL) License v3.0 InfluxDB v2.7 optional Apache2, MIT Mongodb v7.0 optional AGPL License and Server Side Public License (SSPL) v1 Tabbitmq v3-management optional Mozilla Public License Telegraf v1.28 optional MIT"},{"location":"third-party.html#development-environments","title":"Development Environments","text":"Inaddition to all the software included in user installations, the DTaaS development environments may use the following additional software packages.
Software Package Usage License Material for mkdocs mandatory MIT Docker-compose v2.20 optional Apache 2.0 Jupyter Lab optional 3-Clause BSD Microk8s v1.27 optional Apache 2.0 Openssl optional Custom License"},{"location":"third-party.html#package-dependencies","title":"Package Dependencies","text":"There are specific software packages included in the development of client, library microservice and runner microservice. These packages can be seen in the package.json file of the matching directories.
The plugins of material for mkdocs might have their own licenses. The list of plugins used are in requirements.txt file.
"},{"location":"admin/cli.html","title":"DTaaS Command Line Interface","text":"This is a command line tool for the INTO-CPS-Association Digital Twins as a Service.
"},{"location":"admin/cli.html#prerequisite","title":"Prerequisite","text":"The DTaaS application with base users and essential containers should be up and running before using the CLI.
"},{"location":"admin/cli.html#installation","title":"Installation","text":"Simply install using:
We recommend installing this in a virutal environment.
Steps to install:
- Change the working folder:
cd <DTaaS-directory>/cli\n
pip install dtaas\n
"},{"location":"admin/cli.html#usage","title":"Usage","text":""},{"location":"admin/cli.html#setup","title":"Setup","text":"The base DTaaS system should be up and running before adding/deleting users with the CLI.
Additionally, Setup the dtaas.toml file in the cli directory:
-
Set common.server-dns to domain name of your server. If you want to bring up the server locally, please set this to \"localhost\".
-
Set the path to the full system path of the DTaaS directory.
[common]\n# absolute path to the DTaaS application directory\nserver-dns = \"localhost\"\npath = \"/home/Desktop/DTaaS\"\n
"},{"location":"admin/cli.html#select-template","title":"Select Template","text":"The cli uses YAML templates provided in this directory to create new user workspaces. The templates are:
- user.local.yml: localhost installation
- User.server.yml: multi-user web application application over HTTP
- user.server.secure.yml: multi-user web application application over HTTPS
Please note that the cli is not capable of detecting the difference between HTTP and HTTPS modes of web application. Thus if you are serving web application over HTTPS, please do one extra step.
cp user.server.secure.yml user.server.yml\n
This will change the user template from insecure to secure.
"},{"location":"admin/cli.html#add-users","title":"Add users","text":"To add new users using the CLI, fill in the users.add list in dtaas.toml with the Gitlab instance usernames of the users to be added
[users]\n# matching user info must present in this config file\nadd = [\"username1\",\"username2\", \"username3\"]\n
Make sure you are in the cli directory.
Then simply:
dtaas admin user add\n
The command checks for the existence of files/<username>
directory. If it does not exist, a new directory with correct file structure is created. The directory, if it exists, must be owned by the user executing dtaas command on the host operating system. If the files do not have the expected ownership rights, the command fails.
"},{"location":"admin/cli.html#caveats","title":"Caveats","text":"This brings up the containers, without the AuthMS authentication.
-
Currently the email fields for each user in dtaas.toml are not in use, and are not necessary to fill in. These emails must be configured manually for each user in the deploy/docker/conf.server files and the traefik-forward-auth container must be restarted. This is done as follows:
-
Go to the docker directory
cd <DTaaS>/deploy/docker\n
- Add three lines to the
conf.server
file
rule.onlyu3.action=auth\nrule.onlyu3.rule=PathPrefix(`/user3`)\nrule.onlyu3.whitelist = user3@emailservice.com\n
- Run the command for these changes to take effect:
docker compose -f compose.server.yml --env-file .env up -d --force-recreate traefik-forward-auth\n
The new users are now added to the DTaaS instance, with authorization enabled.
"},{"location":"admin/cli.html#delete-users","title":"Delete users","text":" - To delete existing users, fill in the users.delete list in dtaas.toml with the Gitlab instance usernames of the users to be deleted.
[users]\n# matching user info must present in this config file\ndelete = [\"username1\",\"username2\", \"username3\"]\n
- Make sure you are in the cli directory.
Then simply:
dtaas admin user delete\n
- Remember to remove the rules for deleted users in conf.server.
"},{"location":"admin/cli.html#additional-points-to-remember","title":"Additional Points to Remember","text":" -
The user add CLI will add and start a container for a new user. It can also start a container for an existing user if that container was somehow stopped. It shows a Running status for existing user containers that are already up and running, it doesn't restart them.
-
user add and user delete CLIs return an error if the add and delete lists in dtaas.toml are empty, respectively.
-
'.' is a special character. Currently, usernames which have '.'s in them cannot be added properly through the CLI. This is an active issue that will be resolved in future releases.
"},{"location":"admin/host.html","title":"Install DTaaS on a Production Server","text":"The installation instructions provided in this README are ideal for hosting the DTaaS as web application for multiple users.
"},{"location":"admin/host.html#design","title":"Design","text":"An illustration of the docker containers used and the authorization setup is shown here.
In the new application configuration, there are two OAuth2 applications.
"},{"location":"admin/host.html#requirements","title":"Requirements","text":"The installation requirements to run this docker version of the DTaaS are:
"},{"location":"admin/host.html#docker-with-compose-plugin","title":"Docker with Compose Plugin","text":"It is mandatory to have Docker installed on your computer. We highly recommend using Docker Desktop.
"},{"location":"admin/host.html#domain-name","title":"Domain name","text":"The DTaaS software is a web application and is preferably hosted on a server with a domain name like foo.com. It is also possible to use an IP address in place of domain name.
"},{"location":"admin/host.html#tls-https-certificate-optional","title":"TLS / HTTPS Certificate (Optional)","text":"It is possible to add HTTPS option to the DTaaS software installation. Creation of the required TLS certificates is possible through certbot.
"},{"location":"admin/host.html#oauth-provider","title":"OAuth Provider","text":"Gitlab Instance - The DTaaS uses Gitlab OAuth2.0 authorization for user authorization. You can either have an on-premise instance of gitlab, or use gitlab.com itself.
"},{"location":"admin/host.html#user-accounts","title":"User Accounts","text":"Create user accounts in a linked gitlab instance for all the users.
The default docker compose file contains two - user1 and user2. These names need to be changed to suitable usernames.
"},{"location":"admin/host.html#oauth2-application-registration","title":"OAuth2 Application Registration","text":"The multi-user installation setup requires dedicated authorization setup for both frontend website and backend services. Both these authorization requirements are satisfied using OAuth2 protocol.
- The frontend website is a React single page application (SPA).
- The details of Oauth2 app for the frontend website are in client docs.
- The Oauth2 authorization for backend services is managed by Traefik forward-auth. The details of this authorization setup are in server docs.
It is possible to use https://gitlab.com or a local installation of Gitlab can be used for this purpose. Based on your selection of gitlab instance, it is necessary to register these two OAuth2 applications and link them to your intended DTaaS installation.
Please see gitlab oauth provider documentation for further help with creating these two OAuth applications.
"},{"location":"admin/host.html#download-release-package","title":"Download Release Package","text":"The software is available for as zip package. Please download and unzip the same. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using Windows/Linux/MacOS terminal in DTaaS-v0.6.0 folder.
Tip
- The filepaths shown here follow Linux OS. The installation procedures also work with Windows OS.
- The description below refers to filenames. All the file paths mentioned below are relatively to the top-level DTaaS directory.
"},{"location":"admin/host.html#configuration","title":"Configuration","text":"Three following configuration files need to be updated.
"},{"location":"admin/host.html#docker-compose","title":"Docker Compose","text":"The docker compose configuration is in deploy/docker/.env.server
. it is a sample file. It contains environment variables that are used by the docker compose files. It can be updated to suit your local installation scenario. It contains the following environment variables.
Edit all the fields according to your specific case.
URL Path Example Value Explanation DTAAS_DIR '/Users/username/DTaaS' Full path to the DTaaS directory. This is an absolute path with no trailing slash. SERVER_DNS foo.com The server DNS, if you are deploying with a dedicated server. Remember not use http(s) at the beginning of the DNS string OAUTH_URL gitlab.foo.com The URL of your Gitlab instance. It can be gitlab.com if you are planning to use it for authorization. OAUTH_CLIENT_ID 'xx' The ID of your server OAuth application OAUTH_CLIENT_SECRET 'xx' The Secret of your server OAuth application OAUTH_SECRET 'random-secret-string' Any private random string. This is a password you choose for local installation. username1 'user1' The gitlab instance username of a user of DTaaS username2 'user2' The gitlab instance username of a user of DTaaS Tip
Important points to note:
- The path examples given here are for Linux OS. These paths can be Windows OS compatible paths as well.
- The client configuration file is located at
deploy/config/client/env.js
. - The Server DNS can also be an IP address. However, for proper working it is neccessary to use the same convention (IP/DNS) in the client configuration file as well.
"},{"location":"admin/host.html#website-client","title":"Website Client","text":"The frontend React website requires configuration which is specified in the client configuration file (deploy/config/client/env.js
).
Further explanation on the client configuration is available in client config.
Tip
There is a default OAuth application registered on https://gitlab.com for client. The corresponding OAuth application details are:
REACT_APP_CLIENT_ID: '1be55736756190b3ace4c2c4fb19bde386d1dcc748d20b47ea8cfb5935b8446c',\nREACT_APP_AUTH_AUTHORITY: 'https://gitlab.com/',\n
This can be used for test purposes. Please use your own OAuth application for secure production deployments.
"},{"location":"admin/host.html#create-user-workspace","title":"Create User Workspace","text":"The existing filesystem for installation is setup for files/user1
. A new filesystem directory needs to be created for the selected user.
Please execute the following commands from the top-level directory of the DTaaS project.
cp -R files/user1 files/username\n
where username is one of the selected usernames. This command needs to be repeated for all the selected users.
"},{"location":"admin/host.html#configure-authorization-rules-for-backend-authorization","title":"Configure Authorization Rules for Backend Authorization","text":"The Traefik forward-auth microservices requires configuration rules to manage authorization for different URL paths. The deploy/docker/conf.server
file can be used to configure the authorization for user workspaces.
rule.onlyu1.action=auth\nrule.onlyu1.rule=Path(`/user1`)\nrule.onlyu1.whitelist = user1@localhost\n\nrule.onlyu1.action=auth\nrule.onlyu1.rule=Path(`/user2`)\nrule.onlyu1.whitelist = user2@localhost\n
Please change the usernames and email addresses to the matching user accounts on the OAuth provider (either https://gitlab.foo.com or https://gitlab.com).
"},{"location":"admin/host.html#caveat","title":"Caveat","text":"The usernames in the deploy/docker/.env.server
file need to match those in the deploy/docker/conf.server
file.
Traefik routes are controlled by the deploy/docker/.env.server
file. Authorization on these routes is controlled by the deploy/docker/conf.server
file. If a route is not specified in deploy/docker/conf.server
file but an authorisation is requested by traefik for this unknown route, the default behavior of traefik forward-auth kicks in. This default behavior is to enable endpoint being available to any signed in user.
If there are extra routes in deploy/docker/conf.server
file but these are not in deploy/docker/.env.server
file, such routes are not served by traefik; it will give 404 server response.
"},{"location":"admin/host.html#access-rights-over-files","title":"Access Rights Over Files","text":"Warning
The default setting in docker compose file exposes all user files at http://foo.com/lib/files. All files of all the users are readable-writable by all logged in users. The compose.server.yml
/ compose.server.secure.yml
file needs to be updated to expose another directory like common assets directory.
If you wish to reduce this scope to only common assets, please change,
libms:\n image: intocps/libms:latest\n restart: unless-stopped\n volumes:\n - ${DTAAS_DIR}/deploy/config/lib.env:/dtaas/libms/.env\n - ${DTAAS_DIR}/files/common:/dtaas/libms/files\n
The change in the last line. The ${DTAAS_DIR}/files
got replaced by ${DTAAS_DIR}/files/common
. With this change, only common files are readable-writable by all logged in users.
"},{"location":"admin/host.html#add-tls-certificates-optional","title":"Add TLS Certificates (Optional)","text":"The application can be served on HTTPS connection for which TLS certificates are needed. The certificates need to be issued for foo.com
or *.foo.com
. The names of the certificates must be fullchain.pem
and privkey.pem
. Copy these two certificate files into:
certs/foo.com/fullchain.pem
certs/foo.com/privkey.pem
Traefik will run with self-issued certificates if the above two certificates are either not found or found invalid.
Remember to update dynamic/tls.yml
with correct path matching your DNS name. For example, if your DNS name is www.foo.com
, then copy the TLS certificates of www.foo.com
to certs/
directory and update dynamic/tls.yml
as follows.
tls:\n certificates:\n - certFile: /etc/traefik-certs/www.foo.com/fullchain.pem\n keyFile: /etc/traefik-certs/www.foo.com/privkey.pem\n stores:\n - default\n
"},{"location":"admin/host.html#run","title":"Run","text":""},{"location":"admin/host.html#over-http","title":"Over HTTP","text":"This docker compose file serves application over HTTP.
The commands to start and stop the appliation are:
docker compose -f compose.server.yml --env-file .env.server up -d\ndocker compose -f compose.server.yml --env-file .env.server down\n
To restart only a specific container, for example `client``
docker compose -f compose.server.yml --env-file .env.server up -d --force-recreate client\n
"},{"location":"admin/host.html#over-https","title":"Over HTTPS","text":"This docker compose file serves application over HTTP.
The commands to start and stop the appliation are:
docker compose -f compose.server.secure.yml --env-file .env.server up -d\ndocker compose -f compose.server.secure.yml --env-file .env.server down\n
To restart only a specific container, for example `client``
docker compose -f compose.server.secure.yml --env-file .env.server up -d --force-recreate client\n
"},{"location":"admin/host.html#use","title":"Use","text":"The application will be accessible at: from web browser. Sign in using your account linked to either gitlab.com or your local gitlab instance.
All the functionality of DTaaS should be available to your users through the single page client now.
You may have to click Sign in to Gitlab on the Client page and authorize access to the shown application.
"},{"location":"admin/host.html#adding-a-new-user","title":"Adding a new user","text":"Please see the add new user to add new users.
"},{"location":"admin/host.html#references","title":"References","text":"Image sources: Traefik logo, ml-workspace, reactjs, gitlab
"},{"location":"admin/localhost-secure.html","title":"Install DTaaS on localhost with Gitlab Integration","text":"This installation is ideal for single users intending to use DTaaS on their own computers.
The installation instructions provided in this README are ideal for running the DTaaS on localhost served over HTTPS connection. The intention is to integrate Gitlab into DTaaS so that both are running on localhost.
If you do not need gitlab running on localhost, please use the simpler localhost setup.
"},{"location":"admin/localhost-secure.html#design","title":"Design","text":"An illustration of the docker containers used and the authorization setup is shown here.
"},{"location":"admin/localhost-secure.html#requirements","title":"Requirements","text":"The installation requirements to run this docker version of the DTaaS are:
- docker desktop / docker CLI with compose plugin
- mkcert
"},{"location":"admin/localhost-secure.html#download-release-package","title":"Download Release Package","text":"The software is available for as zip package. Please download and unzip the same. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using Windows/Linux/MacOS terminal in DTaaS-v0.6.0 folder.
file pathnames
- The filepaths shown here follow POSIX convention. The installation procedures also work with Windows paths.
- The description below refers to filenames. All the file paths mentioned below are relatively to the top-level DTaaS directory.
"},{"location":"admin/localhost-secure.html#configure-and-run","title":"Configure and Run","text":""},{"location":"admin/localhost-secure.html#create-user-workspace","title":"Create User Workspace","text":"The existing filesystem for installation is setup for user1
. A new filesystem directory needs to be created for the selected user.
Please execute the following commands from the top-level directory of the DTaaS project.
cp -R files/user1 files/username\n
where username is the selected username to be created (in next steps) on Gitlab running at https://localhost/gitlab.
"},{"location":"admin/localhost-secure.html#obtain-tls-https-certificate","title":"Obtain TLS / HTTPS Certificate","text":"Use mkcert to generate TLS certificates using this guide. The certificates need to be generated for localhost
.
The names of the certificates must be fullchain.pem
and privkey.pem
. The fullchain.pem
corresponds to the public certificate and the privkey.pem
corresponds to the private key.
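As a minimal sketch (assuming mkcert is already installed), the local CA can be set up and the two files generated with the required names as follows:
mkcert -install\nmkcert -key-file privkey.pem -cert-file fullchain.pem localhost\n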
"},{"location":"admin/localhost-secure.html#add-tls-certificates-to-traefik","title":"Add TLS Certificates to Traefik","text":"Copy the two certificate files into:
deploy/docker/certs/localhost/fullchain.pem
deploy/docker/certs/localhost/privkey.pem
Traefik will run with self-issued certificates if the above two certificates are not found or are invalid.
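For example, assuming the two files were generated in the current directory, they can be copied into place with:
mkdir -p deploy/docker/certs/localhost\ncp fullchain.pem privkey.pem deploy/docker/certs/localhost/\n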
"},{"location":"admin/localhost-secure.html#configure-docker-compose","title":"Configure Docker Compose","text":"The docker compose configuration is in deploy/docker/.env.local
; it is a sample file. It contains environment variables that are used by the docker compose files. It can be updated to suit your local installation scenario. It contains the following environment variables.
Edit all the fields according to your specific case.
Variable Example Value Explanation DTAAS_DIR '/home/Desktop/DTaaS' Full path to the DTaaS directory. This is an absolute path with no trailing slash. username1 'user1' Your gitlab username. A sample .env.local is sketched after the notes below. Important points to note:
- The path examples given here are for Linux OS. These paths can be Windows OS compatible paths as well.
- The client configuration file is located at
deploy/config/client/env.local.js
. Edit the URLs in this file by replacing http
with https
. Beyond this, it is not necessary to modify this file.
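A minimal sketch of .env.local based on the variables above (the values are illustrative and must be adapted to your setup):
DTAAS_DIR='/home/Desktop/DTaaS'\nusername1='user1'\n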
"},{"location":"admin/localhost-secure.html#start-dtaas-to-integrate-gitlab","title":"Start DTaaS to Integrate Gitlab","text":"Start the appliation with
docker compose -f compose.local.secure.yml --env-file .env.local up -d\n
"},{"location":"admin/localhost-secure.html#start-gitlab","title":"Start Gitlab","text":"Use the instructions provided in gitlab integration to bring up Gitlab on localhost and the Gitlab service will be available at https://localhost/gitlab
"},{"location":"admin/localhost-secure.html#register-oauth2-application","title":"Register OAuth2 Application","text":"The frontend website requires OAuth2 application registration on the integrated Gitlab. The details of Oauth2 app for the frontend website are in client docs.
The default OAuth client application provided in env.local.js
works. However, if you intend to run an integrated gitlab instance, then this application needs to be created on gitlab running at https://localhost/gitlab.
Remember to use https://localhost/Library as the Callback URL (REACT_APP_REDIRECT_URI
).
Please see gitlab oauth provider documentation for further help with creating this OAuth application.
"},{"location":"admin/localhost-secure.html#update-client-website-configuration","title":"Update Client Website Configuration","text":"Replace the contents of deploy/config/client/env.local.js
with the following.
if (typeof window !== 'undefined') {\n window.env = {\n REACT_APP_ENVIRONMENT: 'local',\n REACT_APP_URL: 'https://localhost/',\n REACT_APP_URL_BASENAME: '',\n REACT_APP_URL_DTLINK: '/lab',\n REACT_APP_URL_LIBLINK: '',\n REACT_APP_WORKBENCHLINK_VNCDESKTOP: '/tools/vnc/?password=vncpassword',\n REACT_APP_WORKBENCHLINK_VSCODE: '/tools/vscode/',\n REACT_APP_WORKBENCHLINK_JUPYTERLAB: '/lab',\n REACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: '',\n\n REACT_APP_CLIENT_ID: 'xxxxxx',\n REACT_APP_AUTH_AUTHORITY: 'https://localhost/gitlab/',\n REACT_APP_REDIRECT_URI: 'https://localhost/Library',\n REACT_APP_LOGOUT_REDIRECT_URI: 'https://localhost/',\n REACT_APP_GITLAB_SCOPES: 'openid profile read_user read_repository api',\n };\n};\n
And then update OAuth2 client application ID (REACT_APP_CLIENT_ID
) with that of the newly registered OAuth2 application.
"},{"location":"admin/localhost-secure.html#restart-dtaas-client-website","title":"Restart DTaaS Client Website","text":"To update the client website configuration, run
docker compose -f compose.local.secure.yml --env-file .env.local up -d --force-recreate client\n
"},{"location":"admin/localhost-secure.html#use","title":"Use","text":"The application will be accessible at: https://localhost from web browser. Sign in using your https://localhost/gitlab account.
All the functionality of DTaaS should be available to you through the single page client now.
"},{"location":"admin/localhost-secure.html#limitations","title":"Limitations","text":"The library microservice is not included in the localhost installation scenario.
"},{"location":"admin/localhost-secure.html#docker-help","title":"Docker Help","text":"The commands to start and stop the appliation are:
docker compose -f compose.local.secure.yml --env-file .env.local up -d\ndocker compose -f compose.local.secure.yml --env-file .env.local down\n
To restart only a specific container, for example client
docker compose -f compose.local.secure.yml --env-file .env.local up -d --force-recreate client\n
"},{"location":"admin/localhost-secure.html#references","title":"References","text":"Image sources: Traefik logo, ml-workspace, reactjs, gitlab
"},{"location":"admin/localhost.html","title":"Install DTaaS on localhost","text":"The installation instructions provided in this README are ideal for running the DTaaS on both localhost. This installation is ideal for single users intending to use DTaaS on their own computers.
"},{"location":"admin/localhost.html#design","title":"Design","text":"An illustration of the docker containers used and the authorization setup is shown here.
"},{"location":"admin/localhost.html#requirements","title":"Requirements","text":"The installation requirements to run this docker version of the DTaaS are:
- docker desktop / docker CLI with compose plugin
- User account on gitlab.com
Tip
The frontend website requires authorization. The default authorization configuration works for gitlab.com. If you desire to use locally hosted gitlab instance, please see the client docs.
"},{"location":"admin/localhost.html#download-release-package","title":"Download Release Package","text":"The software is available for as zip package. Please download and unzip the same. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using Windows/Linux/MacOS terminal in DTaaS-v0.6.0 folder.
Tip
- The filepaths shown here follow POSIX convention. The installation procedures also work with Windows paths.
- The description below refers to filenames. All the file paths mentioned below are relative to the top-level DTaaS directory.
"},{"location":"admin/localhost.html#configuration","title":"Configuration","text":""},{"location":"admin/localhost.html#docker-compose","title":"Docker Compose","text":"The docker compose configuration is in deploy/docker/.env.local
; it is a sample file. It contains environment variables that are used by the docker compose files. It can be updated to suit your local installation scenario. It contains the following environment variables.
Edit all the fields according to your specific case.
Variable Example Value Explanation DTAAS_DIR '/home/Desktop/DTaaS' Full path to the DTaaS directory. This is an absolute path with no trailing slash. username1 'user1' Your gitlab username Tip
Important points to note:
- The path examples given here are for Linux OS. These paths can be Windows OS compatible paths as well.
- The client configuration file is located at
deploy/config/client/env.local.js
. If you are following the guide to use HTTPS on localhost, edit the URLs in this file by replacing http
with https
. Beyond this, it is not necessary to modify this file.
"},{"location":"admin/localhost.html#create-user-workspace","title":"Create User Workspace","text":"The existing filesystem for installation is setup for user1
. A new filesystem directory needs to be created for the selected user.
Please execute the following commands from the top-level directory of the DTaaS project.
cp -R files/user1 files/username\n
where username is the selected username registered on https://gitlab.com.
"},{"location":"admin/localhost.html#run","title":"Run","text":"The commands to start and stop the appliation are:
docker compose -f compose.local.yml --env-file .env.local up -d\ndocker compose -f compose.local.yml --env-file .env.local down\n
To restart only a specific container, for example client
docker compose -f compose.local.yml --env-file .env.local up -d --force-recreate client\n
"},{"location":"admin/localhost.html#use","title":"Use","text":"The application will be accessible at: http://localhost from web browser. Sign in using your https://gitlab.com account.
All the functionality of DTaaS should be available to you through the single page client now.
"},{"location":"admin/localhost.html#limitations","title":"Limitations","text":"The library microservice is not included in the localhost installation scenario.
"},{"location":"admin/localhost.html#references","title":"References","text":"Image sources: Traefik logo, ml-workspace, reactjs, gitlab
"},{"location":"admin/overview.html","title":"Overview","text":""},{"location":"admin/overview.html#install","title":"Install","text":"The goal is to install and administer the DTaaS application for users.
The DTaaS can be installed in different ways. Each version serves a different purpose.
Easy Setup on Localhost
The localhost installation is easy for first time users. Please give it a try.
Otherwise, use the installation setup that fits your needs.
Installation Setup Purpose localhost Install DTaaS on your computer for a single user; does not need a web server. This setup does not require a domain name. secure localhost Install DTaaS on your computer for a single user over HTTPS with integrated gitlab installation; does not need a web server. This setup does not require a domain name. Server Install DTaaS on a server for multiple users. Please check the requirements. It is also possible to host the application over HTTPS with integrated gitlab installation. One vagrant machine Install DTaaS on a virtual machine; can be used for single or multiple users. Two vagrant machines Install DTaaS on two virtual machines; can be used for single or multiple users. The core DTaaS application is installed on the first virtual machine and all the services (RabbitMQ, MQTT, InfluxDB, Grafana and MongoDB) are installed on the second virtual machine. Independent Packages Can be used independently; do not need a full installation of DTaaS. The installation steps page is a good place to start the installation process.
"},{"location":"admin/overview.html#administer","title":"Administer","text":"There is a CLI to add and delete users of a running application.
"},{"location":"admin/packages.html","title":"Independent Packages","text":"The DTaaS development team publishes reusable packages which are then put together to form the complete DTaaS application.
The packages are published on github, npmjs, and docker hub repositories.
The packages on github are published more frequently but are not user tested. The packages on npmjs and docker hub are published at least once per release. The regular users are encouraged to use the packages from npm and docker hub.
A brief explanation of the packages is given below.
Package Name Description Documentation for Availability dtaas-web React web application Not useful as standalone package docker hub and github libms Library microservice npm package npmjs and github container image docker hub and github runner REST API wrapper for multiple scripts/programs npm package npmjs and github ml-workspace-minimal (fork of ml-workspace) User workspace not available docker hub. Please note that this package is highly experimental and only v0.15.0-b2 is usable now."},{"location":"admin/requirements.html","title":"Requirements","text":"Tip
These optional requirements are not needed for localhost installation. They are only required for installation of the DTaaS on a production web server.
There are two optional requirements for installing the DTaaS.
"},{"location":"admin/requirements.html#oauth-provider","title":"OAuth Provider","text":"The DTaaS software is uses OAuth for user authorization. It is possible to use either gitlab.com or your own OAuth service provider.
"},{"location":"admin/requirements.html#domain-name","title":"Domain name","text":"The DTaaS software is a web application and is preferably hosted on a server with a domain name like foo.com. However, it is possible to install the software on your computer and use access it at localhost.
"},{"location":"admin/services.html","title":"Third-party Services","text":"The DTaaS software platform uses third-party software services to provide enhanced value to users.
InfluxDB, Grafana, RabbitMQ and Mosquitto are default services integrated into the DTaaS software platform.
"},{"location":"admin/services.html#pre-requisites","title":"Pre-requisites","text":"All these services run on raw TCP/UDP ports. Thus a direct network access to these services is required for both the DTs running inside the DTaaS software and the PT located outside the DTaaS software.
There are two possible choices here:
- Configure Traefik gateway to permit TCP/UDP traffic
- Bypass Traefik altogether
Unless you are an informed user of Traefik, we recommend bypassing traefik and provide raw TCP/UDP access to these services from the Internet.
The InfluxDB service requires a dedicated hostname. The management interface of RabbitMQ service requires a dedicated hostname as well.
Grafana service can run well behind Traefik gateway. The default Traefik configuration makes permits access to Grafana at URL: http(s): foo.com/vis.
"},{"location":"admin/services.html#configure-and-install","title":"Configure and Install","text":"The software is available for as zip package. Please download and unzip the same. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using Windows/Linux/MacOS terminal in DTaaS-v0.6.0 folder.
The next step in installation is to specify the config of the services. There are two configuration files. The services.yml
contains most of configuration settings. The mqtt-default.conf
file contains the MQTT listening port. Update these two config files before proceeding with the installation of the services.
Now continue with the installation of services.
yarn install\nnode services.js\n
"},{"location":"admin/services.html#use","title":"Use","text":"After the installation is complete, you can see the following services active at the following ports / URLs.
service external url Influx services.foo.com Grafana services.foo.com:3000 RabbitMQ Broker services.foo.com:5672 RabbitMQ Broker Management Website services.foo.com:15672 MQTT Broker services.foo.com:1883 MongoDB database services.foo.com:27017 The firewall and network access settings of corporate / cloud network need to be configured to allow external access to the services. Otherwise the users of DTaaS will not be able to utilize these services from their user workspaces.
"},{"location":"admin/steps.html","title":"Installation Steps","text":""},{"location":"admin/steps.html#complete-dtaas-application","title":"Complete DTaaS Application","text":"DTaaS application is available in two flavors. One is localhost which is suitable for single-user, local usage. Another is production server which is suitable for multi-user setup.
In both cases, the installation is a three step process.
"},{"location":"admin/steps.html#setup-authorization","title":"Setup Authorization","text":"DTaaS provides security using OAuth authorization for both react client frontend and backend services.
There is a default frontend authorization application setup for all localhost and backend authorization is not required for localhost installation.
The production server installation requires both react client frontend and backend services applications.
"},{"location":"admin/steps.html#configure-components","title":"Configure Components","text":"DTaaS is available as docker compose application. There are four docker compose files
compose.local.yml
for localhost installation served over HTTP connection. compose.local.secure.yml
for secure localhost installation served over HTTPS connection. compose.server.yml
for production server installation served over HTTP connection. compose.server.secure.yml
for production server installation served over HTTPS connection.
These four compose files require environment configuration files. The explanation of this configuration file is available directly on the installation pages.
In addition, react client frontend requires configuration which is explained on this page.
"},{"location":"admin/steps.html#install","title":"Install","text":"Follow installation instructions given on either localhost or production server pages.
"},{"location":"admin/steps.html#independent-packages","title":"Independent Packages","text":"Each release of DTaaS also comes with release of three reusable packages. These packages have dedicated documentation.
"},{"location":"admin/client/auth.html","title":"OAuth for React Client","text":"To enable user authorization on DTaaS React client website, you will use the OAuth authorization protocol, specifically the PKCE authorization flow. Here are the steps to get started:
1. Choose Your GitLab Server:
- You need to set up OAuth authorization on a GitLab server. The commercial gitlab.com is not suitable for multi-user authorization (DTaaS requires this), so you'll need an on-premise GitLab instance.
- You can use GitLab Omnibus Docker for this purpose.
- Configure the OAuth application as an instance-wide authorization type.
2. Determine Your Website's Hostname:
- Before setting up OAuth on GitLab, decide on the hostname for your website. It's recommended to use a self-hosted GitLab instance, which you will use in other parts of the DTaaS application.
3. Define Callback and Logout URLs:
- For the PKCE authorization flow to function correctly, you need two URLs: a callback URL and a logout URL.
- The callback URL informs the OAuth provider of the page where signed-in users should be redirected. It's different from the landing homepage of the DTaaS application.
- The logout URL is where users will be directed after logging out.
4. OAuth Application Creation:
- During the creation of the OAuth application on GitLab, you need to specify the scope. Choose openid, profile, read_user, read_repository, and api scopes.
5. Application ID:
- After successfully creating the OAuth application, GitLab generates an application ID. This is a long string of HEX values that you will need for your configuration files.
6. Required Information from OAuth Application:
- You will need the following information from the OAuth application registered on GitLab:
GitLab Variable Name Variable Name in Client env.js Default Value OAuth Provider REACT_APP_AUTH_AUTHORITY https://gitlab.foo.com/ Application ID REACT_APP_CLIENT_ID Callback URL REACT_APP_REDIRECT_URI https://foo.com/Library Scopes REACT_APP_GITLAB_SCOPES openid, profile, read_user, read_repository, api 7. Create User Accounts:
Create user accounts in gitlab for all the usernames chosen during installation. The trial installation script comes with two default usernames - user1 and user2. For all other installation scenarios, accounts with specific usernames need to be created on gitlab.
"},{"location":"admin/client/config.html","title":"Configure Client Website","text":"This page describes various configuration options for react website.
if (typeof window !== 'undefined') {\n window.env = {\n REACT_APP_ENVIRONMENT: \"prod | dev | local | test\",\n REACT_APP_URL: \"URL for the gateway\",\n REACT_APP_URL_BASENAME: \"Base URL for the client website\"(optional, can be null),\n REACT_APP_URL_DTLINK: \"Endpoint for the Digital Twin\",\n REACT_APP_URL_LIBLINK: \"Endpoint for the Library Assets\",\n REACT_APP_WORKBENCHLINK_VNCDESKTOP: \"Endpoint for the VNC Desktop link\",\n REACT_APP_WORKBENCHLINK_VSCODE: \"Endpoint for the VS Code link\",\n REACT_APP_WORKBENCHLINK_JUPYTERLAB: \"Endpoint for the Jupyter Lab link\",\n REACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK:\n \"Endpoint for the Jupyter Notebook link\",\n REACT_APP_WORKBENCHLINK_DT_PREVIEW: \"Endpoint for the Digital Twins page preview\",\n REACT_APP_CLIENT_ID: 'AppID genereated by the gitlab OAuth provider',\n REACT_APP_AUTH_AUTHORITY: 'URL of the private gitlab instance',\n REACT_APP_REDIRECT_URI: 'URL of the homepage for the logged in users of the website',\n REACT_APP_LOGOUT_REDIRECT_URI: 'URL of the homepage for the anonymous users of the website',\n REACT_APP_GITLAB_SCOPES: 'OAuth scopes. These should match with the scopes set in gitlab OAuth provider',\n };\n};\n\n// Example values with no base URL. Trailing and ending slashes are optional.\nif (typeof window !== 'undefined') {\n window.env = {\n REACT_APP_ENVIRONMENT: 'prod',\n REACT_APP_URL: 'https://foo.com/',\n REACT_APP_URL_BASENAME: '',\n REACT_APP_URL_DTLINK: '/lab',\n REACT_APP_URL_LIBLINK: '',\n REACT_APP_WORKBENCHLINK_VNCDESKTOP: '/tools/vnc/?password=vncpassword',\n REACT_APP_WORKBENCHLINK_VSCODE: '/tools/vscode/',\n REACT_APP_WORKBENCHLINK_JUPYTERLAB: '/lab',\n REACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: '',\n REACT_APP_WORKBENCHLINK_DT_PREVIEW: '/preview/digitaltwins',\n REACT_APP_CLIENT_ID: '1be55736756190b3ace4c2c4fb19bde386d1dcc748d20b47ea8cfb5935b8446c',\n REACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\n REACT_APP_REDIRECT_URI: 'https://foo.com/Library',\n REACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/',\n REACT_APP_GITLAB_SCOPES: 'openid profile read_user read_repository api',\n };\n};\n\n\n// Example values with \"bar\" as basename URL.\n//Trailing and ending slashes are optional.\nif (typeof window !== 'undefined') {\n window.env = {\n REACT_APP_ENVIRONMENT: \"dev\",\n REACT_APP_URL: 'http://localhost:4000/',\n REACT_APP_URL_BASENAME: 'bar',\n REACT_APP_URL_DTLINK: '/lab',\n REACT_APP_URL_LIBLINK: '',\n REACT_APP_WORKBENCHLINK_VNCDESKTOP: '/tools/vnc/?password=vncpassword',\n REACT_APP_WORKBENCHLINK_VSCODE: '/tools/vscode/',\n REACT_APP_WORKBENCHLINK_JUPYTERLAB: '/lab',\n REACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: '',\n REACT_APP_WORKBENCHLINK_DT_PREVIEW: '/preview/digitaltwins',\n REACT_APP_CLIENT_ID: '1be55736756190b3ace4c2c4fb19bde386d1dcc748d20b47ea8cfb5935b8446c',\n REACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\n REACT_APP_REDIRECT_URI: 'http://localhost:4000/bar/Library',\n REACT_APP_LOGOUT_REDIRECT_URI: 'http://localhost:4000/bar',\n REACT_APP_GITLAB_SCOPES: 'openid profile read_user read_repository api',\n };\n};\n
"},{"location":"admin/client/config.html#multiple-dtaas-applications","title":"Multiple DTaaS applications","text":"Warning
This is not a regular installation scenario and is only recommended for experts. This installation setup requires significant modifications to the docker compose files.
The DTaaS is a regular web application. It is possible to host multiple DTaaS applications on the same server. The only requirement is to have a distinct URLs. You can have three DTaaS applications running at the following URLs.
https://foo.com/au\nhttps://foo.com/acme\nhttps://foo.com/bar\n
All of these instances can use the same gitlab instance for authorization.
DTaaS application URL Gitlab Instance URL Callback URL Logout URL Application ID https://foo.com/au https://foo.gitlab.com https://foo.com/au/Library https://foo.com/au autogenerated by gitlab https://foo.com/acme https://foo.gitlab.com https://foo.com/au/Library https://foo.com/au autogenerated by gitlab https://foo.com/bar https://foo.gitlab.com https://foo.com/au/Library https://foo.com/au autogenerated by gitlab If you are hosting multiple DTaaS instances on the same server, do not install DTaaS with a null basename on the same server. Even though it works well, the setup is confusing to setup and may lead to maintenance issues.
If you choose to host your DTaaS application with a basename (say bar), then the URLs in env.js
change to:
DTaaS application URL: https://foo.com/bar\nGitlab instance URL: https://gitlab.foo.com\nCallback URL: https://foo.com/bar/Library\nLogout URL: https://foo.com/bar\n
"},{"location":"admin/gitlab/index.html","title":"Install Gitlab","text":"This guide helps with installation of a dedicated Gitlab service. This Gitlab installation can be used as OAuth2 authorization provider to the DTaaS software. In addition, it is also possible to use the integrated Gitlab for enabling the digital twin DevOps experimental features of the DTaaS.
There are two possible ways you can install Gitlab:
- At dedicated domain name (ex: gitlab.foo.com)
- At a URL path on existing WWW server (ex: foo.com/gitlab)
This guide illustrates the installation of Gitlab at: foo.com/gitlab. But the instructions can be adapted to install Gitlab at a dedicated domain name."},{"location":"admin/gitlab/index.html#download-release-package","title":"Download Release Package","text":"
The software is available for as zip package. Please download and unzip the same. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using Windows/Linux/MacOS terminal in DTaaS-v0.6.0 folder.
"},{"location":"admin/gitlab/index.html#configure-and-install","title":"Configure and Install","text":"cd DTaaS/deploy/services/gitlab\n
This directory contains files needed to set up the docker container containing the local GitLab instance.
./data
, ./config
, ./logs
are the directories that will contain data for the GitLab instance compose.gitlab.yml
and .env
are the Docker compose and environment files to manage the containerized instance of gitlab
If the DTaaS application and gitlab are to be hosted at https://foo.com, then the client config file (deploy/config/client/env.js
) needs to use the https://foo.com/gitlab as REACT_APP_AUTH_AUTHORITY
. In addition, this hosting at https://foo.com also requires changes to config file (.env.server
).
If the DTaaS application and gitlab are to be hosted at https://localhost, then the client config file (deploy/config/client/env.local.js
) needs to use the https://localhost/gitlab as REACT_APP_AUTH_AUTHORITY
. If the application and the integrated gitlab are to be hosted at https://localhost/gitlab
, then .env.server
need not be modified.
Edit the .env
file available in this directory to contain the following variables:
Variable Example Value Explanation GITLAB_HOME '/home/Desktop/DTaaS/deploy/services/gitlab' Full path to the DTaaS gitlab directory. This is an absolute path with no trailing slash. SERVER_DNS either foo.com
or localhost
The server DNS, if you are deploying with a dedicated server. Remember not to use http(s) at the beginning of the DNS string. NOTE: The DTaaS client uses the react-oidc-context
node package, which incorrectly causes redirects to use the HTTPS
URL scheme. This is a known issue with the package, and forces us to use HTTPS
for the DTaaS server. If you are hosting the DTaaS locally, your GitLab instance should be available at https://localhost/gitlab. If you are hosting the DTaaS at https://foo.com, then your GitLab instance should be available at https://foo.com/gitlab.
"},{"location":"admin/gitlab/index.html#run","title":"Run","text":"NOTE: The GitLab instance operates with the dtaas-frontend
network, which requires the DTaaS server to be running before you start it. You may refer to secure installation scenarios (../overview.md) for the same.
The commands to start and stop the instance are:
# (cd deploy/services/gitlab)\ndocker compose -f compose.gitlab.yml up -d\ndocker compose -f compose.gitlab.yml down\n
Each time you start the container, it may take a few minutes. You can monitor the progress with watch docker ps
and check if the gitlab container is healthy
.
"},{"location":"admin/gitlab/index.html#post-install-configuration","title":"Post-install Configuration","text":"Gitlab also requires post-installation configuration.
Information
This configuration needs to be done from within the running container.
docker exec -it gitlab bash\n
The configuration file to change is /etc/gitlab/gitlab.rb. The variables to change are:
external_url 'http(s)://foo.com/gitlab'\nnginx['listen_port'] = 80\nnginx['enable'] = true\n\nnginx['listen_https'] = false\nnginx['redirect_http_to_https'] = false\nletsencrypt['enable'] = false\n
The external_url
mentioned about indicates hosting of gitlab at https://foo.com/gitlab. If the gitlab needs to be available at https://localhost/gitlab, then the external_url
should be https://localhost/gitlab.
Save the changes and reconfigure gitlab by running:
# inside the gitlab docker container\ngitlab-ctl reconfigure\nexit\n
The administrator username for GitLab is: root
. The password for this user account will be available in: /etc/gitlab/initial_root_password. Be sure to save this password somewhere, as this file will be deleted after 24 hours from the first time you start the local instance.
After running the container, your local GitLab instance will be available at external_url
specified in gitlab.rb, i.e., either at https://foo.com/gitlab or at https://localhost/gitlab.
"},{"location":"admin/gitlab/index.html#create-users","title":"Create Users","text":"The newly installed gitlab only contains root
user. More users need to be created for use with DTaaS. Please see the Gitlab docs for further help.
"},{"location":"admin/gitlab/index.html#pending-tasks","title":"Pending Tasks","text":"This README helps with installation of Gitlab along side DTaaS application. But the OAuth2 integration between Gitlab and DTaaS will still be still pending. Follow the integration guide and runner setup to setup the Gitlab integration.
"},{"location":"admin/gitlab/integration.html","title":"GitLab Integration Guide","text":"This guide helps with integration of a local GitLab instance with a DTaaS server installation, and integrating the OAuth Authorization feature with the DTaaS installation.
After following this guide, the GitLab instance will be integrated as OAuth provider for both DTaaS client application and Traefik Forward Auth backend authorization.
Note
The DTaaS client uses the react-oidc-context
node package, which incorrectly causes authorization redirects to use the HTTPS
URL scheme. This is a known issue with the package, and forces us to use HTTPS
for the DTaaS server. This means your server should be set up to use either https://localhost or https://foo.com. This guide will henceforth use foo.com
to represent either localhost or a custom domain.
"},{"location":"admin/gitlab/integration.html#integration-steps","title":"Integration Steps","text":""},{"location":"admin/gitlab/integration.html#1-set-up-the-dtaas-server-over-https","title":"1. Set up the DTaaS server over HTTPS","text":"Follow the existing guides to set up the DTaaS web application over HTTPS connection on either localhost (https://localhost) or a custom domain (https://foo.com).
Note
You may ignore steps related to configuring OAuth application tokens at https://gitlab.com. We will be using the initial installation to host the local GitLab instance, on which we will later create the OAuth application tokens.
"},{"location":"admin/gitlab/integration.html#2-set-up-the-gitlab-instance","title":"2. Set up the GitLab Instance","text":"Follow the guide to set up a GitLab instance.
After this step, and once you run gitlab-ctl reconfigure
, you will have a functioning GitLab instance (at either https://localhost/gitlab or https://foo.com/gitlab). Login credentials of the root user.
"},{"location":"admin/gitlab/integration.html#3-create-oauth-tokens-in-gitlab","title":"3. Create OAuth Tokens in GitLab","text":"Follow these guides to create OAuth Application Tokens for - backend and client. Please note that backend is not required for https://localhost installation.
After this step you will have credentials for the application tokens titled \"DTaaS Server Authorization\" and \"DTaaS Client Authorization\", which we will use in the next step.
"},{"location":"admin/gitlab/integration.html#4-use-valid-oauth-application-tokens","title":"4. Use Valid Oauth Application Tokens","text":"We can now use the OAuth tokens generated on the GitLab instance to enable authorization.
If the DTaaS application is hosted at https://localhost, then configure the following files:
- DTaaS Client Authorization token in deploy/config/client/env.local.js.
- deploy/docker/.env.local Add localpath and username.
If the DTaaS application is hosted at https://foo.com, then configure the following files:
- DTaaS Client Authorization token in deploy/config/client/env.js.
- deploy/docker/.env.server - Add localpath and username, OAuth client ID and client secret from the DTaaS Server Authorization token
"},{"location":"admin/gitlab/integration.html#restart-services","title":"Restart Services","text":""},{"location":"admin/gitlab/integration.html#localhost-installation","title":"Localhost Installation","text":"The updated OAuth application configuration needs to be loaded into the client website service.
cd deploy/docker\ndocker compose -f compose.local.yml --env-file .env.local up -d --force-recreate client\n
"},{"location":"admin/gitlab/integration.html#production-server-installation","title":"Production Server Installation","text":"The updated OAuth application configuration needs to be loaded into the client website and the forward-auth services.
The production server can be installed with either http or https option. If it is installed with http option, run the following commands.
cd deploy/docker\ndocker compose -f compose.server.yml --env-file .env.server up -d --force-recreate client\ndocker compose -f compose.server.yml --env-file .env.server up -d --force-recreate traefik-forward-auth\n
If the production server is installed with https option, run the following commands.
cd deploy/docker\ndocker compose -f compose.server.secure.yml --env-file .env.server up -d --force-recreate client\ndocker compose -f compose.server.secure.yml --env-file .env.server up -d --force-recreate traefik-forward-auth\n
"},{"location":"admin/gitlab/integration.html#post-setup-usage","title":"Post Setup Usage","text":"If you have set up everything correctly:
- You will have a functioning path-prefixed GitLab instance available at
https://foo.com/gitlab
that you may use in a similar manner to https://gitlab.com. - Data, configuration settings and logs pertaining to the GitLab installation will be available on the DTaaS server within the directory: deploy/services/gitlab.
- Traefik Forward Auth will use the path-prefixed GitLab instance for authorization on the multi-user installation scenario i.e.
foo.com
(but not on localhost
).
"},{"location":"admin/gitlab/runner.html","title":"Gitlab Runner Integration","text":"To properly use the Digital Twins Preview Page, you need to configure at least one project runner in your GitLab profile. The first step is to configure the CI/CD pipeline in gitlab project. The second step is to install the runner and integrate it with the selected gitlab project.
"},{"location":"admin/gitlab/runner.html#configure-gitlab-project","title":"Configure Gitlab Project","text":"Follow the steps below:
- Navigate to the DTaaS group and select the project named after your GitLab username.
- In the project menu, go to Settings and select CI/CD.
- Expand the Runners section and click on New project runner. Follow the configuration instructions carefully:
- Add linux as a tag during configuration.
- Click on Create runner. A runner authentication token is generated. This token will be used later for registering a runner.
"},{"location":"admin/gitlab/runner.html#runner","title":"Runner","text":""},{"location":"admin/gitlab/runner.html#install-runner","title":"Install Runner","text":"A detailed guide on installation of gitlab runners on Linux OS is available on gitlab website. Remember to use linux
as tag for the runner.
"},{"location":"admin/gitlab/runner.html#register-runner","title":"Register Runner","text":"Please see this gitlab guide on registering a runner.
Remember to choose docker as executor and ruby:2.7 as the default docker image.
$sudo gitlab-runner register --url https://gitlab.foo.com \\\n --token xxxxx\n
Or, you can also register the runner in non-interactive mode by running
$sudo gitlab-runner register \\\n --non-interactive \\\n --url \"https://gitlab.foo.com/\" \\\n --token \"xxxx\" \\\n --executor \"docker\" \\\n --docker-image ruby:2.7 \\\n --description \"docker-runner\"\n
"},{"location":"admin/gitlab/runner.html#start-runner","title":"Start Runner","text":"You can manually verify that the runner is available to pick up jobs by running the following command:
$sudo gitlab-runner run\n
It can also be used to reactivate offline runners during subsequent sessions.
"},{"location":"admin/gitlab/runner.html#pipeline-trigger-token","title":"Pipeline Trigger Token","text":"You also need to create a pipeline trigger token. This token is required to trigger pipelines by using the API. You can create this token in your GitLab project's CI/CD settings under the Pipeline trigger tokens section.
"},{"location":"admin/guides/add_service.html","title":"Add other services","text":"Pre-requisite
You should read the documentation about the already available services
This guide will show you how to add more services. In the following example we will be adding MongoDB as a service, but these steps could be modified to install other services as well.
Adding other services requires more RAM and CPU power. Please make sure the host machine meets the hardware requirements for running all the services.
1. Add the configuration:
Select configuration parameters for the MongoDB service.
Configuration Variable Name Description username the username of the root user in the MongoDB password the password of the root user in the MongoDB port the mapped port on the host machine (default is 27017) datapath path on host machine to mount the data from the MongoDB container Open the file /deploy/services/services.yml
and add the configuration for MongoDB:
services:\n rabbitmq:\n username: \"dtaas\"\n password: \"dtaas\"\n vhost: \"/\"\n ports:\n main: 5672\n management: 15672\n ...\n mongodb:\n username: <username>\n password: <password>\n port: <port>\n datapath: <datapath>\n ...\n
2. Add the script:
The next step is to add the script that sets up the MongoDB container with the configuraiton.
Create new file named /deploy/services/mongodb.js
and add the following code:
#!/usr/bin/node\n/* Install the optional platform services for DTaaS */\nimport { $ } from \"execa\";\nimport chalk from \"chalk\";\nimport fs from \"fs\";\nimport yaml from \"js-yaml\";\n\nconst $$ = $({ stdio: \"inherit\" });\nconst log = console.log;\nlet config;\n\ntry {\n log(chalk.blue(\"Load services configuration\"));\n config = await yaml.load(fs.readFileSync(\"services.yml\", \"utf8\"));\n log(\n chalk.green(\n \"configuration loading is successful and config is a valid yaml file\"\n )\n );\n} catch (e) {\n log(chalk.red(\"configuration is invalid. Please rectify services.yml file\"));\n process.exit(1);\n}\n\nlog(chalk.blue(\"Start MongoDB server\"));\nconst mongodbConfig = config.services.mongodb;\n\ntry {\n log(\n chalk.green(\n \"Attempt to delete any existing MongoDB server docker container\"\n )\n );\n await $$`docker stop mongodb`;\n await $$`docker rm mongodb`;\n} catch (e) {}\n\nlog(chalk.green(\"Start new Mongodb server docker container\"));\nawait $$`docker run -d -p ${mongodbConfig.port}:27017 \\\n --name mongodb \\\n -v ${mongodbConfig.datapath}:/data/db \\\n -e MONGO_INITDB_ROOT_USERNAME=${mongodbConfig.username} \\\n -e MONGO_INITDB_ROOT_PASSWORD=${mongodbConfig.password} \\\n --restart always \\\n mongo:7.0.3`;\nlog(chalk.green(\"MongoDB server docker container started successfully\"));\n
3. Run the script:
Go to the directory /deploy/services/
and run services script with the following commands:
yarn install\nnode mongodb.js\n
The MongoDB should now be available on services.foo.com:<port>.
"},{"location":"admin/guides/add_user.html","title":"Add User","text":"This page provides steps to adding a user from a DTaaS installation. The username alice is used here to illustrate the steps involved in removing a user account.
Please do the following:
1. Add user to Gitlab instance:
Add a new account for the new user on the Gitlab instance. Note the username and email of the new account.
2. Create User Workspace:
Use the DTaaS CLI to bring up the workspaces for new users. This brings up the containers, without the backend authorization.
3. Add backend authorization for the user:
- Go to the docker directory
cd <DTaaS>/docker\n
- Add three lines to the
conf.server
file
rule.onlyu3.action=auth\nrule.onlyu3.rule=PathPrefix(`/alice`)\nrule.onlyu3.whitelist = alice@foo.com\n
4. Restart the docker container responsible for backend authorization.
docker compose -f compose.server.yml --env-file .env up -d --force-recreate traefik-forward-auth\n
5. The new users are now added to the DTaaS instance, with authorization enabled.
"},{"location":"admin/guides/common_workspace_readonly.html","title":"Make Common Assets Read Only","text":""},{"location":"admin/guides/common_workspace_readonly.html#why","title":"Why","text":"In some cases you might want to restrict the access rights of some users to the common assets. In order to make the common area read only, you have to change the install script section performing the creation of user workspaces.
Note
These step needs to be performed before installation of the application.
"},{"location":"admin/guides/common_workspace_readonly.html#how","title":"How","text":"To make the common assets read-only for a user, the following changes need to be made to the compose.server.yml
file.
...\n user1:\n ....\n volumes:\n - ${DTAAS_DIR}/files/common:/workspace/common:ro\n ....\n\n user2:\n ....\n volumes:\n - ${DTAAS_DIR}/files/common:/workspace/common:ro\n ....\n
Please note the :ro
at the end of the line. This suffix makes the common assets read only.
If you want to have the same kind of read only restriction for new users as well, please make a similar change in cli/users.server.yml
.
"},{"location":"admin/guides/link_service.html","title":"Link services to local ports","text":"Requirements
- User needs to have an account on server2.
- SSH server must be running on server2
To link a port from the service machine (server2) to the local port on the user workspace. You can use ssh local port forwarding technique.
1. Step:
Go to the user workspace, on which you want to map from localhost to the services machine
2. Step:
Open a terminal in your user workspace.
3. Step:
Run the following command to map a port:
ssh -fNT -L <local_port>:<destination>:<destination_port> <user>@<services.server.com>\n
Here's an example mapping the RabbitMQ broker service available at 5672 of services.foo.com to localhost port 5672.
ssh -fNT -L 5672:localhost:5672 vagrant@services.foo.com\n
Now the programs in user workspace can treat the RabbitMQ broker service as a local service running within user workspace.
"},{"location":"admin/guides/localhost_portainer.html","title":"Install DTaaS on localhost (GUI)","text":"The installation instructions provided in this README are ideal for running the DTaaS on localhost via a Graphical User Interface (GUI). This installation is ideal for single users intending to use DTaaS on their own computers.
"},{"location":"admin/guides/localhost_portainer.html#design","title":"Design","text":"An illustration of the docker containers used and the authorization setup is shown here.
"},{"location":"admin/guides/localhost_portainer.html#requirements","title":"Requirements","text":"The installation requirements to run this docker version of the DTaaS are:
- docker desktop / docker CLI with compose plugin
- User account on gitlab.com
Tip
The frontend website requires authorization. The default authorization configuration works for gitlab.com. If you desire to use locally hosted gitlab instance, please see the client docs.
"},{"location":"admin/guides/localhost_portainer.html#download-release-package","title":"Download Release Package","text":"The software is available for as zip package. Please download and unzip the same. A new DTaaS-v0.6.0 folder gets created. The rest of the installation instructions assume that you are using Windows/Linux/MacOS terminal in DTaaS-v0.6.0 folder.
cd DTaaS/deploy/services/gitlab\n
In this guide we will assume the contents of the zip file have been extracted to the directory: /home/DTaaS
.
Tip
The path given here is for Linux OS. It can be Windows compatible as well, for example: C:\\\\DTaaS
. Make sure to use this path and format in place of /home/DTaaS
in this guide.
"},{"location":"admin/guides/localhost_portainer.html#starting-portainer","title":"Starting Portainer","text":"The GUI used to run the application and docker containers will be provided by Portainer Community Edition. It is itself a Docker container that will create a website at https://localhost:9443
, which will present a graphical interface for starting and stopping the application.
You may follow the official documentation for setting up a Portainer CE Server . Alternatively, open a terminal on your system (Terminal on Linux / MacOS, Powershell on Windows, etc) and copy the following commands into it:
docker volume create portainer_data\ndocker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:2.21.4\n
This will start the Portainer server on your system, which will host its dashboard at https://localhost:9443
. Follow the Initial Setup Guide to set up an administrator account for Portainer on your system.
Portainer should now be set up on your system, and you can access the dashboard:
Tip
The next time you wish to start the Portainer server, run docker start portainer
.
"},{"location":"admin/guides/localhost_portainer.html#configuration","title":"Configuration","text":""},{"location":"admin/guides/localhost_portainer.html#create-user-workspace","title":"Create User Workspace","text":"The existing filesystem for installation is setup for user1
. A new filesystem directory needs to be created for the selected user.
You may use your file explorer or an equivalent application to duplicate the files/user1
directory and rename it as files/username
where username is the selected username registered on https://gitlab.com.
ALternatively, you may execute the following commands from the top-level directory of the DTaaS project.
cp -R files/user1 files/username\n
"},{"location":"admin/guides/localhost_portainer.html#creating-the-portainer-stack","title":"Creating the Portainer Stack","text":"Portainer Stacks are equivalent to using docker compose
commands to manage containers.
- Navigate to the Stacks tab on the side panel, and click on the Add Stack button.
- Name the Stack anything descriptive, for example:
dtaas-localhost
. - Select the Upload build method.
- Upload the compose file located at
deploy/docker/compose.local.yml
. - Select the option to load variables from a .env file, and upload the file
deploy/docker/.env.local
.
Tip
Sometimes the .env.local
file does not show up in the file explorer. You may fix this by selecting the option to show All Files rather than those with the extension .env.
The .env.local
file contains environment variables that are used by the compose file. Portainer allows you to modify them as shown in the screenshot above, here is a summary:
URL Path Example Value Explanation DTAAS_DIR '/home/Desktop/DTaaS' Full path to the DTaaS directory. This is an absolute path with no trailing slash. username1 'user1' Your gitlab username Tip
Important points to note:
- The path examples given here are for Linux OS. These paths can be Windows OS compatible paths as well.
- The client configuration file is located at
deploy/config/client/env.local.js
. If you are following the guide to use HTTPS on localhost, edit the URLs in this file by replacing http
with https
.
Once you have configured the environment variables, click on the button Deploy the stack.
"},{"location":"admin/guides/localhost_portainer.html#use","title":"Use","text":"The application will be accessible at: http://localhost from web browser. Sign in using your https://gitlab.com account.
All the functionality of DTaaS should be available to you through the single page client now.
"},{"location":"admin/guides/localhost_portainer.html#limitations","title":"Limitations","text":"The library microservice is not included in the localhost installation scenario.
"},{"location":"admin/guides/localhost_portainer.html#references","title":"References","text":"Image sources: Traefik logo, ml-workspace, reactjs, gitlab
"},{"location":"admin/guides/remove_user.html","title":"Remove User","text":"This page provides steps to remove a user from a DTaaS installation. The username alice is used here to illustrate the steps involved in removing a user account.
Please do the following:
1. Remove an existing user with the DTaaS CLI.
2. Remove backend authorization for the user:
- Go to the docker directory
cd <DTaaS>/docker\n
- Remove these three lines from the
conf.server
file
rule.onlyu3.action=auth\nrule.onlyu3.rule=PathPrefix(`/alice`)\nrule.onlyu3.whitelist = alice@foo.com\n
- Run the command for these changes to take effect:
docker compose -f compose.server.yml --env-file .env up -d --force-recreate traefik-forward-auth\n
The extra users now have no backend authorization.
3. Remove users to Gitlab instance (optional):
Please see gitlab docs for more help.
4. The user account is now deleted.
"},{"location":"admin/guides/remove_user.html#caveat","title":"Caveat","text":"You cannot delete the two base users that the DTaaS software is installed with. You can only delete the extra users that have been added to the software.
"},{"location":"admin/servers/auth.html","title":"OAuth for Traefik Gateway","text":"The traefik gateway is used to serve the DTaaS. All the services provided as part of the application are secured at the traefik gateway. The security is based on Traefik forward-auth.
An illustration of the docker containers used and the authorization setup is shown here.
The traefik forward-auth can use any OAuth2 provider, but within the DTaaS gitlab is used as authorization provider. You will use the OAuth the web / server application authorization flow.
Here are the steps to get started:
1. Choose GitLab Server:
- You need to set up OAuth authorization on a GitLab server. The commercial gitlab.com is not suitable for multi-user authorization (DTaaS requires this), so you'll need an on-premise GitLab instance.
- You can use GitLab Omnibus Docker for this purpose.
- Configure the OAuth application as an instance-wide authorization type. Select option to generate client secret and also selection option for trusted application.
2. Determine Website Hostname:
Before setting up OAuth on GitLab, decide on the hostname for your website. It's recommended to use a self-hosted GitLab instance, which you will use in other parts of the DTaaS application.
3. Determine Callback and Logout URLs:
For the web / server authorization flow to function correctly, you need two URLs: a callback URL and a logout URL.
- The callback URL informs the OAuth provider of the page where signed-in users should be redirected. It is the landing homepage of the DTaaS application. (either http://foo.com/_oauth/ or http://localhost/_oauth/)
- The logout URL is the URL for signout of gitlab and clear authorization within traefik-forward auth. (either http://foo.com/_oauth/logout or http://localhost/_oauth/logout). The logout URL is to help users logout of traefik forward-auth. The logout URL should not be entered into Gitlab OAuth application setup.
4. Create OAuth Application:
Oauth application setup on GitLab can be located on Edit Profile -> Application https://gitlab.foo.com/-/profile/applications.
During the creation of the OAuth application on GitLab, you need to specify the scope. Choose read_user scope.
5. Copy Application Credentials:
After successfully creating the OAuth application, GitLab generates an application ID and client secret.
Both these values are long string of HEX values that you will need for your configuration files.
6. Checklist: Required Information from OAuth Application:
You will need the following information from the OAuth application registered on GitLab:
GitLab Variable Name Variable Name in .env of docker compose file Default Value OAuth Provider OAUTH_URL https://gitlab.foo.com/ Application ID OAUTH_CLIENT_ID xx Application Secret OAUTH_CLIENT_SECRET xx Callback URL (to be directly entered in Gitlab OAuth registration) Forward-auth secret OAUTH_SECRET random-secret-string (password for forward-auth, can be changed to your preferred string) Scopes read_user"},{"location":"admin/servers/auth.html#development-environment","title":"Development Environment","text":"The development environment and server installation scenarios requires traefik forward-auth.
"},{"location":"admin/servers/auth.html#configure-authorization-rules-for-traefik-forward-auth","title":"Configure Authorization Rules for Traefik Forward-Auth","text":"The Traefik forward-auth microservices requires configuration rules to manage authorization for different URL paths. The conf.server file can be used to configure the specific rules. There are broadly three kinds of URLs:
"},{"location":"admin/servers/auth.html#public-path-without-authorization","title":"Public Path Without Authorization","text":"To setup a public page, an example is shown below.
rule.noauth.action=allow\nrule.noauth.rule=Path(`/public`)\n
Here, 'noauth' is the rule name, and should be changed to suit rule use. Rule names should be unique for each rule. The 'action' property is set to \"allow\" to make the resource public. The 'rule' property defines the path/route to reach the resource.
"},{"location":"admin/servers/auth.html#common-to-all-users","title":"Common to All Users","text":"To setup a common page that requires Gitlab OAuth, but is available to all users of the Gitlab instance:
rule.all.action=auth\nrule.all.rule=Path(`/common`)\n
The 'action' property is set to \"auth\", to enable Gitlab OAuth before the resource can be accessed.
"},{"location":"admin/servers/auth.html#selective-access","title":"Selective Access","text":"Selective Access refers to the scenario of allowing access to a URL path for a few users. To setup selective access to a page:
rule.onlyu1.action=auth\nrule.onlyu1.rule=Path(`/user1`)\nrule.onlyu1.whitelist = user1@localhost\n
The 'whitelist' property of a rule defines a comma separated list of email IDs that are allowed to access the resource. While signing in users can sign in with either their username or email ID as usual, but the email ID corresponding to the account should be included in the whitelist.
This restricts access of the resource, allowing only users mentioned in the whitelist.
"},{"location":"admin/servers/auth.html#user-management","title":"User management","text":"DTaaS provides an easy way to add and remove additional users from your DTaaS instance.
All such user management can be done via the DTaaS CLI
"},{"location":"admin/servers/auth.html#limitation","title":"Limitation","text":"The rules in _conf._ file are not dynamically loaded into the traefik-forward-auth microservice. Any change in the conf file requires restart of traefik-forward-auth for the changes to take effect. All the existing user sessions get invalidated when the traefik-forward-auth* restarts.
Use a simple command on the terminal.
docker compose -f compose.server.yml --env-file .env up -d --force-recreate traefik-forward-auth\n
docker compose -f compose.server.yml --env-file .env.server up -d --force-recreate traefik-forward-auth\n
"},{"location":"admin/servers/lib/config.html","title":"Configure Library Microservice","text":"The microservices requires config specified in INI format. The template configuration file is:
PORT='4001'\nMODE='local'\nLOCAL_PATH='/Users/<Username>/DTaaS/files'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib' or ''\nGRAPHQL_PLAYGROUND='false' or 'true'\n
The LOCAL_PATH
variable is the absolute filepath to the location of the local directory which will be served to users by the Library microservice.
Replace the default values with the appropriate values for your setup.
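For instance, a complete local configuration could look like the following; the LOCAL_PATH shown is only an assumed example and must point to your own files directory:
PORT='4001'\nMODE='local'\nLOCAL_PATH='/home/alice/DTaaS/files'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib'\nGRAPHQL_PLAYGROUND='true'\n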
The config is saved in a .env
file by convention. The libms looks for the .env
file in the working directory from which it is run. If you want to run libms without explicitly specifying the configuration file, run
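libms\n
This is the same invocation shown in the host library microservice instructions; it picks up the .env file from the current working directory.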
"},{"location":"admin/servers/lib/docker.html","title":"Host Library Microservice","text":"The lib microservice is a simplified file manager serving files over graphQL and HTTP API.
It has two features:
- provide a listing of directory contents.
- transfer a file to the user.
This document provides instructions for running a docker container that provides a standalone library microservice.
"},{"location":"admin/servers/lib/docker.html#setup-the-file-system","title":"Setup the File System","text":""},{"location":"admin/servers/lib/docker.html#outside-dtaas","title":"Outside DTaaS","text":"The package can be used independently of DTaaS. If this is your use case, you do not need any specific file structure. A valid file directory named files
is sufficient and it should be placed in the directory from which the compose.lib.yml
will be run.
"},{"location":"admin/servers/lib/docker.html#inside-dtaas","title":"Inside DTaaS","text":"The users of DTaaS expect the following file system structure for their reusable assets.
There is a skeleton file structure in the DTaaS codebase. You can copy it and create a file system for your users. Remember to name the directory containing the file structure as files
and place it in the directory from which the compose.lib.yml
will be run.
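As an illustration, a minimal skeleton could be created with the commands below; the user and asset directory names are only examples, and the skeleton shipped in the DTaaS codebase remains the authoritative reference:
mkdir -p files/common/{data,functions,models,tools}\nmkdir -p files/user1/{data,digital_twins,functions,models,tools}\n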
"},{"location":"admin/servers/lib/docker.html#use","title":"Use","text":"Use the docker compose file to start the service.
# To bring up the container\ndocker compose -f compose.lib.yml up -d\n# To bring down the container\ndocker compose -f compose.lib.yml down\n
"},{"location":"admin/servers/lib/docker.html#service-endpoints","title":"Service Endpoints","text":"The GraphQL URL: localhost:4001/lib
The HTTP URL: localhost:4001/lib/files
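To verify that the container is serving files, a quick request can be made from the host, assuming files are addressed by their relative path under /lib/files. The file name data.csv is only an illustration and should be replaced with a file that actually exists in your files directory:
curl http://localhost:4001/lib/files/data.csv\n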
The service API documentation is available on user page.
"},{"location":"admin/servers/lib/npm.html","title":"Host Library Microservice","text":"The lib microservice is a simplified file manager serving files over graphQL and HTTP API.
It has two features:
- provide a listing of directory contents.
- upload and download files
This document provides instructions for installing the npm package of the library microservice and running it as a standalone service.
"},{"location":"admin/servers/lib/npm.html#setup-the-file-system","title":"Setup the File System","text":""},{"location":"admin/servers/lib/npm.html#outside-dtaas","title":"Outside DTaaS","text":"The package can be used independently of DTaaS. If this is your use case, you do not need any specific file structure. Any valid file directory is sufficient.
"},{"location":"admin/servers/lib/npm.html#inside-dtaas","title":"Inside DTaaS","text":"The users of DTaaS expect the following file system structure for their reusable assets.
There is a skeleton file structure in the DTaaS codebase. You can copy it and create a file system for your users.
"},{"location":"admin/servers/lib/npm.html#install","title":"Install","text":"The npm package is available in Github packages registry and on npmjs. Prefer the package on npmjs over Github.
Set the registry and install the package with one of the two following commands:
"},{"location":"admin/servers/lib/npm.html#npmjs","title":"npmjs","text":"sudo npm install -g @into-cps-association/libms # requires no login\n
"},{"location":"admin/servers/lib/npm.html#github","title":"Github","text":"# requires login\nsudo npm config set @into-cps-association:registry https://npm.pkg.github.com\n
The Github package registry asks for a username and password. The username is your Github username and the password is your Github personal access token. In order for npm to download the package, your personal access token needs to have the read:packages scope.
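Once the scoped registry is configured and you are logged in with the personal access token, the install command itself is expected to be the same as for npmjs:
sudo npm install -g @into-cps-association/libms\n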
"},{"location":"admin/servers/lib/npm.html#use","title":"Use","text":"Display help.
libms -h\n
Please see configuration for an explanation of the configuration conventions. To use .env
as the configuration file, run
libms\n
To run libms with a custom config file,
libms -c FILE-PATH\nlibms --config FILE-PATH\n
If the environment file is named something other than .env
, for example as .env.libms
, you can run
libms -c \".env.libms\"\n
You can press Ctrl+C
to halt the application. If you wish to run the microservice in the background, use
nohup libms [-c FILE-PATH] & disown\n
The lib microservice is now running and ready to serve files.
This microservice can also serve files to a browser, with files transferred over the HTTP protocol.
This option needs to be enabled with the -H http.json
flag. A sample http config provided here can be used.
nohup libms [-H http.json] & disown\n
The regular file upload and download options become available.
"},{"location":"admin/servers/lib/npm.html#service-endpoints","title":"Service Endpoints","text":"The GraphQL URL: localhost:PORT/lib
The HTTP URL: localhost:PORT/lib/files
The service API documentation is available on user page.
"},{"location":"admin/vagrant/base-box.html","title":"DTaaS Vagrant Box","text":"This README provides instructions on creating a custom Operating System virtual disk for running the DTaaS software. The virtual disk is managed by vagrant. The purpose is two fold:
- Provide cross-platform installation of the DTaaS application. Any operating system supporting use of vagrant software utility can support installation of the DTaaS software.
- Create a ready to use development environment for code contributors.
There are two scripts in this directory:
Script name Purpose Default user.sh
user installation developer.sh
developer installation If you are installing the DTaaS for users, the default installation caters to your needs. You can skip the next step and continue with the creation of the vagrant box.
If you are a developer and would like additional software installed, you need to modify Vagrantfile
. The existing Vagrantfile
has two lines:
config.vm.provision \"shell\", path: \"user.sh\"\n #config.vm.provision \"shell\", path: \"developer.sh\"\n
Uncomment the second line to have more software components installed. If you are not a developer, no changes are required to the Vagrantfile
.
This vagrant box installed for users will have the following items:
- docker v24.0
- nodejs v20.10
- yarn v1.22
- npm v10.2
- containers - ml-workspace-minimal v0.13, traefik v2.10, gitlab-ce v16.4, influxdb v2.7, grafana v10.1, rabbitmq v3-management, eclipse-mosquitto (mqtt) v2, mongodb v7.0
This vagrant box installed for developers will have the following additional items:
- docker-compose v2.20
- microk8s v1.27
- jupyterlab
- mkdocs
- container - telegraf v1.28
At the end of installation, the software stack created in vagrant box can be visualised as shown in the following figure.
The upcoming instructions will help with the creation of base vagrant box.
#create a key pair\nssh-keygen -b 4096 -t rsa -f vagrant -q -N \"\"\n\nvagrant up\n\n# let the provisioning be complete\n# replace the vagrant ssh key-pair with personal one\nvagrant ssh\n\n# install the oh-my-zsh\nsh -c \"$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\"\n# install plugins: history, autosuggestions,\ngit clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions\n\n# inside ~/.zshrc, modify the following line\nplugins=(git zsh-autosuggestions history cp tmux)\n\n# to replace the default vagrant ssh key-pair with\n# the generated private key into authorized keys\ncp /vagrant/vagrant.pub /home/vagrant/.ssh/authorized_keys\n\n# exit vagrant guest machine and then\n# copy own private key to vagrant private key location\ncp vagrant .vagrant/machines/default/virtualbox/private_key\n\n# check\nvagrant ssh #should work\n\n# exit vagrant guest machine and then\nvagrant halt\n\nvagrant package --base dtaas \\\n--info \"info.json\" --output dtaas.vagrant\n\n# Add box to the vagrant cache in ~/.vagrant.d/boxes directory\nvagrant box add --name dtaas ./dtaas.vagrant\n\n# You can use this box in other vagrant boxes using\n#config.vm.box = \"dtaas\"\n
"},{"location":"admin/vagrant/base-box.html#references","title":"References","text":"Image sources: Ubuntu logo
"},{"location":"admin/vagrant/single-machine.html","title":"DTaaS on Single Vagrant Machine","text":"These are installation instructions for running DTaaS software inside one vagrant Virtual Machine. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space to the vagrant box.
"},{"location":"admin/vagrant/single-machine.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"Create dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant is the private SSH key and is needed for the next steps. Copy vagrant SSH private key into the current directory (deploy/vagrant/single-machine
). This shall be useful for logging into the vagrant machines created for two-machine deployment.
"},{"location":"admin/vagrant/single-machine.html#target-installation-setup","title":"Target Installation Setup","text":"The goal is to use the dtaas Vagrant box to install the DTaaS software on one single vagrant machine. A graphical illustration of a successful installation can be seen here.
There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighed in blue color.
Tip
The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com The gitlab setup is outside the scope this installation guide. Please refer to gitlab docker install for gitlab installation.
"},{"location":"admin/vagrant/single-machine.html#configure-server-settings","title":"Configure Server Settings","text":"A dummy foo.com URL has been used for illustration. Please change this to your unique website URL.
Please follow the next steps to make this installation work in your local environment.
Update the Vagrantfile. Fields to update are:
- Hostname (
node.vm.hostname = \"foo.com\"
) - MAC address (
:mac => \"xxxxxxxx\"
). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged. - Other adjustments are optional.
"},{"location":"admin/vagrant/single-machine.html#installation-steps","title":"Installation Steps","text":"Execute the following commands from terminal
vagrant up\nvagrant ssh\n
Set a cronjob inside the vagrant virtual machine to remote the conflicting default route. Download the route script and run the following command.
sudo bash route.sh\n
Please follow the instructions of regular server installation setup to complete the installation.
"},{"location":"admin/vagrant/single-machine.html#references","title":"References","text":"Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs
"},{"location":"admin/vagrant/two-machines.html","title":"DTaaS on Two Vagrant Machines","text":"These are installation instructions for running DTaaS application in two vagrant virtual machines (VMs). In this setup, all the user workspaces shall be run on server1 while all the platform services will be run on server2.
The setup requires two server VMs with the following hardware configuration:
server1: 16GB RAM, 8 x64 vCPUs and 50GB Hard Disk space
server2: 6GB RAM, 3 x64 vCPUs and 50GB Hard Disk space
Under the default configuration, two user workspaces are provisioned on server1. The default installation setup also installs InfluxDB, Grafana, RabbitMQ and MQTT services on server2. If you would like to install more services, you can create shell scripts to install the same on server2.
"},{"location":"admin/vagrant/two-machines.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"Create dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant is the private SSH key and is needed for the next steps. Copy vagrant SSH private key into the current directory (deploy/vagrant/two-machine
). This shall be useful for logging into the vagrant machines created for two-machine deployment.
"},{"location":"admin/vagrant/two-machines.html#target-installation-setup","title":"Target Installation Setup","text":"The goal is to use this dtaas vagrant box to install the DTaaS software on server1 and the default platform services on server2. Both the servers are vagrant machines.
There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighed in blue and red color.
A graphical illustration of a successful installation can be seen here.
In this case, both the vagrant boxes are spawed on one server using two vagrant configuration files, namely boxes.json and Vagrantfile.
Tip
The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com The gitlab setup is outside the scope this installation guide. Please refer to gitlab docker install for gitlab installation.
"},{"location":"admin/vagrant/two-machines.html#configure-server-settings","title":"Configure Server Settings","text":" A dummy foo.com and services.foo.com URLs has been used for illustration. Please change these to your unique website URLs.
The first step is to define the network identity of the two VMs. For that, you need server name, hostname and MAC address. The hostname is the network URL at which the server can be accessed on the web. Please follow these steps to make this work in your local environment.
Update the boxes.json. There are entries one for each server. The fields to update are:
name
- name of server1 (\"name\" = \"dtaas-two\"
) hostname
- hostname of server1 (\"name\" = \"foo.com\"
) - MAC address (
:mac => \"xxxxxxxx\"
). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged. name
- name of server2 (\"name\" = \"services\"
) hostname
- hostname of server2 (\"name\" = \"services.foo.com\"
) - MAC address (
:mac => \"xxxxxxxx\"
). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged. - Other adjustments are optional.
"},{"location":"admin/vagrant/two-machines.html#installation-steps","title":"Installation Steps","text":"The installation instructions are given separately for each vagrant machine.
"},{"location":"admin/vagrant/two-machines.html#launch-dtaas-platform-default-services","title":"Launch DTaaS Platform Default Services","text":"Follow the installation guide for services to install the DTaaS platform services.
After the services are up and running, you can see the following services active within server2 (services.foo.com).
service external url InfluxDB database services.foo.com Grafana visualization service services.foo.com:3000 MQTT Broker services.foo.com:1883 RabbitMQ Broker services.foo.com:5672 RabbitMQ Broker management website services.foo.com:15672 MongoDB database services.foo.com:27017"},{"location":"admin/vagrant/two-machines.html#install-dtaas-application","title":"Install DTaaS Application","text":"Execute the following commands from terminal
vagrant up\nvagrant ssh\n
Set a cronjob inside the vagrant virtual machine to remote the conflicting default route. Download the route script and run the following command.
sudo bash route.sh\n
Please follow the instructions of regular server installation setup to complete the installation.
"},{"location":"admin/vagrant/two-machines.html#references","title":"References","text":"Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs
"},{"location":"developer/index.html","title":"Developer Guide","text":"This guide is for DTaaS platform developers. It helps development team members get familiar with the DTaaS project software design, and development processes. Please see developer-specific Slides, Video, and Research paper.
"},{"location":"developer/index.html#development-environment","title":"Development Environment","text":"Ideally, developers should work on Ubuntu/Linux Operating System.There is an ongoing effort to bring support for Windows Operating System. But, the development and software installation scripts are still heavily suited to Ubuntu/Linux
Please use the steps given here to install the required software packages.
"},{"location":"developer/index.html#ubuntulinux","title":"Ubuntu/Linux","text":"bash script/env.sh\nbash script/docker.sh\n
The docker images are large and are likely to consume about 5GB of bandwidth and 15GB of space. You will have to download the docker images on a really good network.
"},{"location":"developer/index.html#windows","title":"Windows","text":"Two powershell installation scripts, namely base.ps1
and env.ps1
are available to install the required software packages. But errors might crop up due to missing environment variables. The potential errors are:
npm is not recognized.........
in base.ps1
. gem is not recognized.........
in env.ps1
If you encounter these errors, remember to include node and ruby installation locations in PATH environment variable (Settings --> search for \"system environment variables\"
--> Advanced --> Environment Variables --> PATH
).
The base.ps1
and env.ps1
scripts can be run again after setting the correct PATH environment variable.
"},{"location":"developer/index.html#pre-install-nodejs-and-ruby-software","title":"Pre-install Nodejs and Ruby Software","text":"Another way to solve the PATH environment problem is to install Nodejs and Ruby software packages before running the powershell scripts.
- Install the latest stable version of NodeJS from the official NodeJS website.
- Install Ruby from official Ruby website and follow all the defaults during the installation.
"},{"location":"developer/index.html#run-scripts","title":"Run Scripts","text":"Then, open powershell with administrative priviledges and run the following commands in the given order:
powershell -F script/base.ps1\npowershell -F script/env.ps1\npowershell -F script/docker.ps1\n
The docker images are large and are likely to consume about 5GB of bandwidth and 15GB of space. You will have to download the docker images on a really good network.
"},{"location":"developer/index.html#development-workflow","title":"Development Workflow","text":"To manage collaboration by multiple developers on the software, a development workflow is in place. Each developer should follow these steps:
- Fork of the main repository into your github account.
- Setup Code Climate and Codecov for your fork. The codecov does not require secret token for public repositories.
- Install git-hooks for the project.
- Use Fork, Branch, PR workflow.
- Work in your fork and open a PR from your working branch to your
feature/distributed-demo
branch. The PR will run all the github actions, code climate and codecov checks. - Resolve all the issues identified in the previous step.
- If you have access to the integration server, try your working branch on the integration server.
- Once changes are verified, a PR should be made to the
feature/distributed-demo
branch of the upstream DTaaS repository. - The PR will be merged after checks by either the project administrators or the maintainers.
Remember that every PR should be meaningful and satisfies a well-defined user story or improve the code quality.
"},{"location":"developer/index.html#code-quality","title":"Code Quality","text":"The project code qualities are measured based on:
- Linting issues identified by Code Climate
- Test coverage report collected by Codecov
- Successful github actions
"},{"location":"developer/index.html#code-climate","title":"Code Climate","text":"Code Climate performs static analysis, linting and style checks. Quality checks are performed by codeclimate are to ensure the best possible quality of code to add to our project.
While any new issues introduced in your code would be shown in the PR page itself, to address any specific issue, you can visit the issues or code section of the codeclimate page.
It is highly recommended that any code you add does not introduce new quality issues. If they are introduced, they should be fixed immediately using the appropriate suggestions from Code Climate, or in worst case, adding a ignore flag (To be used with caution).
"},{"location":"developer/index.html#codecov","title":"Codecov","text":"Codecov keeps track of the test coverage for the entire project. For information about testing and workflow related to that, please see the testing page.
"},{"location":"developer/index.html#github-actions","title":"Github Actions","text":"The project has multiple github actions defined. All PRs and direct code commits must have successful status on github actions.
"},{"location":"developer/github-actions.html","title":"Secrets for Github Action","text":"The Github actions require the following secrets to be obtained from docker hub:
Secret Name Explanation DOCKERHUB_SCOPE
Username or organization name on docker hub DOCKERHUB_USERNAME
Username on docker hub DOCKERHUB_TOKEN
API token to publish images to docker hub, with Read
, Write
and Delete
permissions NPM_TOKEN
Token to publish npm packages to the default npm registry. Remember to add these secrets to Github Secrets Setting of your fork.
"},{"location":"developer/npm-packages.html","title":"Publish NPM packages","text":"The DTaaS software is developed as a monorepo with multiple npm packages.
"},{"location":"developer/npm-packages.html#default-npm-registry","title":"Default npm registry","text":"The default registry for npm packages is npmjs. The freely-accessible public packages are published to the npmjs registry. The publication step is manual for the runner.
npm login --registry=\"https://registry.npmjs.org\"\ncat ~/.npmrc #shows the auth token for the registry\n//registry.npmjs.org/:_authToken=xxxxxxxxxx\nyarn publish --registry=\"https://registry.npmjs.org\" \\\n --no-git-tag-version --access public\n
At least one version of runner package is published to this registry for each release of DTaaS.
The publication step for library microservice is automated via github actions.
"},{"location":"developer/npm-packages.html#github-npm-registry","title":"Github npm registry","text":"The Github actions of the project publish packages. The only limitation is that the users need an access token from Github.
"},{"location":"developer/npm-packages.html#private-registry","title":"Private Registry","text":""},{"location":"developer/npm-packages.html#setup-private-npm-registry","title":"Setup private npm registry","text":"Since publishing to npmjs is irrevocable and public, developers are encouraged to setup their own private npm registry for local development. A private npm registry will help with local publish and unpublish steps.
We recommend using verdaccio for this task. The following commands help you create a working private npm registry for development.
docker run -d --name verdaccio -p 4873:4873 verdaccio/verdaccio\nnpm adduser --registry http://localhost:4873 #create a user on the verdaccio registry\nnpm set registry http://localhost:4873/\nyarn config set registry \"http://localhost:4873\"\nyarn login --registry \"http://localhost:4873\" #login with the credentials for yarn utility\nnpm login #login with the credentials for npm utility\n
You can open http://localhost:4873
in your browser, login with the user credentials to see the packages published.
"},{"location":"developer/npm-packages.html#publish-to-private-npm-registry","title":"Publish to private npm registry","text":"To publish a package to your local registry, do:
yarn install\nyarn build #the dist/ directory is needed for publishing step\nyarn publish --no-git-tag-version #increments version in package.json, publishes to registry\nyarn publish #increments version in package.json, publishes to registry and adds a git tag\n
The package version in package.json gets updated as well. You can open http://localhost:4873
in your browser, login with the user credentials to see the packages published. Please see verdaccio docs for more information.
If there is a need to unpublish a package, ex: @dtaas/runner@0.0.2
, do:
npm unpublish --registry http://localhost:4873/ @dtaas/runner@0.0.2\n
To install / uninstall this utility for all users, do:
sudo npm install --registry http://localhost:4873 -g @dtaas/runner\nsudo npm list -g # should list @dtaas/runner in the packages\nsudo npm remove --global @dtaas/runner\n
"},{"location":"developer/npm-packages.html#use-the-packages","title":"Use the packages","text":"The packages available in private npm registry can be used like the regular npm packages installed from npmjs.
For example, to use @dtaas/runner@0.0.2
package, do:
sudo npm install --registry http://localhost:4873 -g @dtaas/runner\nrunner # launch the digital twin runner\n
"},{"location":"developer/client/client.html","title":"React Website","text":"The Website is how the end-users interact with the software platform. The website is being developed as a React single page web application.
A dependency graph for the entire codebase of the react application is:
"},{"location":"developer/client/client.html#dependency-graphs","title":"Dependency Graphs","text":"The figures are the dependency graphs generated from the code.
"},{"location":"developer/client/client.html#src-directory","title":"src directory","text":""},{"location":"developer/client/client.html#test-directory","title":"test directory","text":""},{"location":"developer/docker/docker.html","title":"Docker workflow for DTaaS","text":"This readme will explain the building and use of different docker files for use in development and installation of the DTaaS software.
NOTE: A local docker installation with compose plugin is a pre-requisite for using docker workflows.
"},{"location":"developer/docker/docker.html#folder-structure","title":"Folder Structure","text":"There are two dockerfiles for building the containers:
- client.dockerfile: Dockerfile for building the client application container.
- libms.dockerfile: Dockerfile for building the library microservice container from source code.
- libms.npm.dockerfile: Dockerfile for building the library microservice container from published npm package at npmjs.com. This Dockerfile is only used during publishing. It is used neither in the development builds nor in Github actions.
There is a specific compose file for development:
The compose.dev.yml: file is the docker compose file for development environment.
"},{"location":"developer/docker/docker.html#build-and-publish-docker-images","title":"Build and Publish Docker Images","text":"The github workflows publish docker images of client website and libms to github and docker hub.
"},{"location":"developer/docker/docker.html#developers","title":"Developers","text":"Use of docker images is handy for developers as well. It is suggested that developers build the required images locally on their computer and use them for development purposes. The images can be built using
docker compose -f compose.dev.yml build\n
"},{"location":"developer/docker/docker.html#running-docker-containers","title":"Running Docker Containers","text":"Follow these steps to use the application with docker.
The DTaaS application requires multiple configuration files. The list of configuration files to be modified are given for each scenario.
"},{"location":"developer/docker/docker.html#development-environment","title":"Development Environment","text":"This scenario is for software developers.
The configuration files to be updated are:
- docker/.env : please see docker installation docs for help with updating this config file
- docker/conf.dev : please see docker installation docs for help with updating this config file
- client/config/local.js please see client config for help with updating this config file
- servers/lib/config/.env.default please see lib config for help with updating this config file
The docker commands need to be executed from this directory (docker
). The relevant docker commands are:
docker compose -f compose.dev.yml up -d #start the application\ndocker compose -f compose.dev.yml down #terminate the application\n
"},{"location":"developer/docker/docker.html#access-the-application","title":"Access the Application","text":"You should access the application through the PORT mapped to the Traefik container. e.g. localhost
"},{"location":"developer/oauth/AUTHMS.html","title":"Auth Microservice","text":"This document details the workflow and implementation of the DTaaS Auth Microservice. Please go through the System Design and the summary of the OAuth2.0 technology to be able to understand the content here better.
"},{"location":"developer/oauth/AUTHMS.html#workflow","title":"Workflow","text":""},{"location":"developer/oauth/AUTHMS.html#user-identity-using-oauth20","title":"User Identity using OAuth2.0","text":"We define some constants that will help with the following discussion:
- CLIENT ID: The OAuth2 Client ID of the Auth MS
- CLIENT SECRET: The OAuth2 Client Secret of Auth MS
- REDIRECT URI: The URI where the user is redirected to after the user has approved sharing of information with the client.
- STATE: A random string used as an identifier for the specific \"GET authcode\" request (Figure 3.3)
- AUTHCODE: The one-use-only Authorization code returned by the OAuth2 provider (GitLab instance) in response to \"GET authcode\" after user approval.
Additionally, let's say DTaaS uses a dedicated gitlab instance hosted at the URL https://gitlab.foo.com (instead of https://foo.com)
A successful OAuth2 workflow (Figure 3.3) has the following steps:
- The user requests a resource, say GET/BackendMS
- The Auth MS intercepts this request, and starts the OAuth2 process.
- The Auth MS sends a authorization request to the Gitlab instance.
This is written in shorthand as GET/authcode. The actual request (a user redirect) looks like:
https ://gitlab.foo.com/oauth/\nauthorize?\nresponse_type=code&\nclient_id=OAUTH_CLIENT_ID&\nredirect_uri=REDIRECT_URI&\nscope=read_user&state = STATE\n
Here the gitlab.foo.com/oauth/authorize is the specific endpoint of the Gitlab instance that handles authorisation code requests.
The query parameters in the request include the expected response type, which is fixed as \u201dcode\u201d, meaning that we expect an Authorization code. Other query parameters are the client id, the redirect uri, the scope which is set to read user for our purpose, and the state (the random string to identify the specific request).
-
The OAuth2 provider redirects the user to the login page. Here the user logs into their protected account with their username/email ID and password.
-
The OAuth2 provider then asks the user to approve/deny sharing the requested information with the Auth MS. The user should approve this for successful authentication.
-
After approval, the user is redirected by the GitLab instance to the REDIRECT URI. This URI has the following form:
REDIRECT_URI?code=AUTHCODE&state=STATE\n
The REDIRECT URI is as defined previously, during the OAuth2 Client initialisation, i.e. the same as the one provided in the \u201dGET authcode\u201d request by the Auth MS. The query parameters are provided by the Gitlab instance. These include the AUTHCODE which is the authoriation code that the Auth MS had requested, and the STATE which is the same random string in the \u201dGET authcode\u201d request.
-
The Auth MS retrieves these query parameters. It verifies that the STATE is the same as the random string it provided during the \"GET authcode\" request. This confirms that the AUTHCODE it has received is in response to the specific request it had made.
-
The Auth MS uses this one-use-only AUTHCODE to exchange it for a general access token. This access token wouldn\u2019t be one-use-only, although it would expire after a specified duration of time. To perform this exchange, the Auth MS makes another request to the GitLab instance. This request is written in shorthand as GET/access_token in the sequence diagram. The true form of the request is:
POST https://gitlab.foo.com/oauth/token,\nparameters = 'client_id=OAUTH_CLIENT_ID&\nclient_secret=OAUTH_CLIENT_SECRET&\ncode=AUTHCODE&\ngrant_type=authorization_code&\nredirect_uri=REDIRECT_URI'\n
The request to get a token by exchanging an authorization code, is actually a POST request (for most OAuth2 providers). The https://gitlab.foo.com/oauth/token API endpoint handles the token exchange requests. The parameters sent with the POST request are the client ID, the client secret, the AUTHCODE and the redirect uri. The grant type parameter is always set to the string \u201dauthorization code\u201d, which conveys that we will be exchanging an authentication code for an access token.
- The Gitlab instance exchanges a valid AUTHCODE for an Access Token. This is sent as a response to the Auth MS. An example response is of the following form:
{\n \"access_token\": \"d8aed28aa506f9dd350e54\",\n \"token_type\": \"bearer\",\n \"expires_in\": 7200,\n \"refresh_token\":\"825f3bffb2544b976633a1\",\n \"created_at\": 1607635748\n}\n
The access token field provides the string that can be used as an access token in the headers of requests tryng to access user information. The token type field is usually \u201dbearer\u201d, the expires in field specifies the time in seconds for which the access token will be valid, and the
created at field is the Epoch timestamp at which the token was created. The refresh token field has a string that can be used to refresh the access token, increasing it\u2019s lifetime. However we do not make use of the refresh token field. If an access token expires, the Auth MS simply asks for a new one. TOKEN is the access token string returned in the response.
-
The Auth MS has finally obtained an access token that it can use to retrieve the user\u2019s information. Note that if the Auth MS already had an existing valid access token for information about this user, the steps above wouldn\u2019t be necessary, and thus wouldn\u2019t be performed by the Auth MS. The steps till now in the sequence diagram are simply to get a valid access token for the user information.
-
The Auth MS makes a final request to the Gitlab instance, shorthanded as GET user_details in the sequence diagram. The actual request is of the form:
GET https ://gitlab.foo.com/api/v4/user\n\"Authorization\": Bearer <TOKEN>\n
Here, https://gitlab.foo.com/api/v4/user is the API endpoint that responds with user information. An authorization header is required on the request, with a valid access token. The required header is added here, and TOKEN is the access token that the Auth MS holds.
- The Gitlab instance verifies the access token, and if it is valid, responds with the required user information. This includes username, email ID, etc. An example response looks like:
{\n \"id\": 8,\n \"username\": \"UserX\",\n \"name\": \"XX\",\n \"state\": \"active\",\n \"web_url\": \"http://gitlab.foo.com/UserX\",\n \"created_at\":\"2023-12-03 T10:47:21.970 Z\",\n \"bio\": \"\",\n \"location\": \"\",\n \"public_email\": null,\n \"skype\": \"\",\n \"linkedin\": \"\",\n \"twitter\": \"\",\n \"organization\": \"\",\n \"job_title\": \"\",\n \"work_information\": null,\n \"followers\": 0,\n \"following\": 0,\n \"is_followed \": false,\n \"local_time\": null,\n \"last_sign_in_at\": \"2023-12-13 T12:46:21.223 Z\",\n \"confirmed_at\": \"2023-12-03 T10:47:21.542 Z \",\n \"last_activity_on\": \"2023-12-13\",\n \"email\": \"UserX@localhost\",\n \"projects_limit\": 100000,\n}\n
The important fields from this response are the \u201demail\u201d, \u201dusername\u201d keys. These keys are unique to a user, and thus provide an identity to the user.
- The Auth MS retrieves the values of candidate key fields like \u201demail\u201d, \u201dusername\u201d from the response. Thus, the Auth MS now knows the identity of the user.
"},{"location":"developer/oauth/AUTHMS.html#checking-user-permissions-authorization","title":"Checking User permissions - Authorization","text":"An important feature of the Auth MS is to implement access policies for DTaaS resources. We may have requirements that certain resources and/or microservices in DTaaS should only be accessible to certain users. For example, we may want that /BackendMS/user1 should only be accessible to the user who has username user1. Another example may be that we may want /BackendMS/group3 to only be available to users who have an email ID in the domain @gmail.com. The Auth MS should be able to impose these restrictions and make certain services selectively available to certain users. There are two steps to doing this:
-
Firstly, the user\u2019s identity should be known and trusted. The Auth MS should know the identity of a user and believe that the user is who they claim to be. This has been achieved in the previous section
-
Secondly, this identity should be analysed against certain rules or against a database of allowed users, to determine whether this user should be allowed to access the requested resource.
The second step requires, for every service, either a set of rules that define which users should be allowed access to the service, or a database of user identities that are allowed to access the service. This database and/or set of rules should use the user identities, in our case the email ID or username, to decide whether the user should be allowed or not. This means that the rules should be built based on the kind of username/ email ID the user has, say maybe using some RegEx. In the case of a database, the database should have the user identity as a key. For any service, we can simply look up if the key exists in the database or not and allow/deny the user access based on that.
In the sequence diagram, the Auth MS has a self-request marked as \u201dChecks user permissions\u201d after receiving the user identity from the Gitlab instance. This is when the Auth MS compares the identity of the user to the rules and/or database it has for the requested service. Based on this, if the given identity has access to the requested resource, the Auth MS responds with a 200 OK. This finally marks a succcessful authentication, and the user can now access the requested resource. Note: Again, the Auth MS and user do not communicate directly. All requests/responses of the Auth MS are with the Traefik gateway, not the User directly. Infact, the Auth MS is the external server used by the ForwardAuth middleware of the specific route, and communicates with this middleware. If the authentication is successful, The gateway forwards the request to the specific resource when the 200 OK is recieved, else it drops the request and returns the error code to the user.
"},{"location":"developer/oauth/AUTHMS.html#implementation","title":"Implementation","text":""},{"location":"developer/oauth/AUTHMS.html#traefik-forward-auth","title":"Traefik-forward-auth","text":"The implementation approach is setting up and configuring the open source thomseddon/traefik-forward-auth for our specific use case. This would work as our Auth microservice.
The traefik-forward-auth software is available as a docker.io image. This works as a docker container. Thus there are no dependency management issues. Additionally, it can be added as a middleware server to traefik routers. Thus, it needs atleast Traefik to work along with it properly. It also needs active services that it will be controlling access to. Traefik, the traefikforward-auth service and any services are thus, treated as a stack of docker containers. The main setup needed for this system is configuring the compose.yml file.
There are three main steps of configuring the Auth MS properly.
- The traefik-forward-auth service needs to be configured carefully. Firstly, we set the environment variables for our specific case. Since, we are using Gitlab, we use the generic-oauth provider configuration. Some important variables that are required are the OAuth2 Client ID, Client Secret, Scope. The API endpoints for getting an AUTHCODE, exchanging the code for an access token and getting user information are also necessary
Additionally, it is necessary to create a router that handles the REDIRECT URI path. This router should have a middleware which is set to traefik-forward-auth itself. This is so that after approval, when the user is taken to REDIRECT URI, this can be handled by the gateway and passed to the Auth service for token exchange. We add the ForwardAuth middleware here, which is a necessary part of our design as discussed before. We also add a load balancer for the service. We also need to add a conf file as a volume, for selective authorization rules (discussed later). This is according to the suggested configuration. Thus, we add the following to our docker services:
traefik\u2212forward\u2212auth:\nimage: thomseddon/traefik\u2212forward\u2212auth:latest\nvolumes:\n - <filepath>/conf:/conf\nenvironment:\n - DEFAULT_PROVIDER = generic - oauth\n - PROVIDERS_GENERIC_OAUTH_AUTH_URL=https://gitlab.foo.com/oauth/authorize\n - PROVIDERS_GENERIC_OAUTH_TOKEN_URL=https://gitlab.foo.com/oauth/token\n - PROVIDERS_GENERIC_OAUTH_USER_URL=https://gitlab.foo.com/api/v4/user\n - PROVIDERS_GENERIC_OAUTH_CLIENT_ID=OAUTH_CLIENT_ID\n - PROVIDERS_GENERIC_OAUTH_CLIENT_SECRET=OAUTH_CLIENT_SECRET\n - PROVIDERS_GENERIC_OAUTH_SCOPE = read_user\n - SECRET = a - random - string\n # INSECURE_COOKIE is required if\n # not using a https entrypoint\n - INSECURE_COOKIE = true\nlabels:\n - \"traefik.enable=true\"\n - \"traefik.http.routers.redirect.entryPoints=web\"\n - \"traefik.http.routers.redirect.rule=PathPrefix(/_oauth)\"\n - \"traefik.http.routers.redirect.middlewares=traefik-forward-auth\"\n - \"traefik.http.middlewares.traefik-forward-auth.forwardauth.address=http://traefik-forward-auth:4181\"\n - \"traefik.http.middlewares.traefik-forward-auth.forwardauth.authResponseHeaders=X-Forwarded-User\"\n - \"traefik.http.services.traefik-forward-auth.loadbalancer.server.port=4181\"\n
- The traefik-forward-auth service should be added to the backend services as a middleware.
To do this, the docker-compose configurations of the services need to be updated by adding the following lines:
- \"traefik.http.routers.<service-router>.rule=Path(/<path>)\"\n - \"traefik.http.routers.<service-router>.middlewares=traefik-forward-auth\"\n
This creates a router that maps to the required route, and adds the auth middleware to the required route.
-
Finally, we need to set user permissions on user identities by creating rules in the conf file. Each rule has a name (an identifier for the rule), and an associated route for which the rule will be invoked. The rule also has an action property, which can be either \u201dauth\u201d or \u201dallow\u201d. If action is set to \u201dallow\u201d, any requests on this route are allowed to bypass even the OAuth2 identification. If the action is set to \u201dauth\u201d, requests on this route will require User identity OAuth2 and the system will follow the sequence diagram. For rules with action=\u201dauth\u201d, the user information is retrieved. The identity we use for a user is the user\u2019s email ID. For \u201dauth\u201d rules, we can configure two types of User restrictions/permissions on this identity:
-
Whitelist - This would be a list of user identities (email IDs in our case) that are allowed to access the corresponding route.
- Domain - This would be a domain (example: gmail.com), and only email IDs (user identities) of that domain (example: johndoe@gmail.com) would be allowed access to the corresponding route.
Configuring any of these two properties of an \u201dauth\u201d rule allows us to selectively permit access to certain users for certain resources. Not configuring any of these properties for an \u201dauth\u201d rule means that the OAuth2 process is carried out and the user identity is retrieved, but all known user identities (i.e. all users that successfully complete the OAuth) are allowed to access the resource.
DTaaS currently uses only the whitelist type of rules.
These rules can be used in 3 different ways described below. The exact format of lines to be added to the conf file are also shown.
- No Auth - Serves the Path(\u2018/public\u2018) route. A rule with action=\u201dallow\u201d should be imposed on this.
rule.noauth.action=allow\nrule.noauth.rule=Path(`/public`)\n
- User specific: Serves the Path(\u2018/user1\u2018) route. A rule that only allows \u201duser1@localhost\u201d identity should be imposed on this
rule.onlyu1.action=auth\nrule.onlyu1.rule=Path(`/user1`)\nrule.onlyu1.whitelist=user1@localhost\n
- Common Auth - Serves the Path(\u2018/common\u2018) route. A rule that requires OAuth, i.e. with action=\u201dallow\u201d, but allows all valid and known user identities should be imposed on this.
rule.all.action = auth\nrule.all.rule = Path(`/common`)\n
"},{"location":"developer/oauth/DESIGN.html","title":"System Design of DTaaS Authorization Microservice","text":"DTaaS requires backend authorization to protect its backend services and user workspaces. This document details the system design of the DTaaS Auth Microservice which is responsible for the same.
"},{"location":"developer/oauth/DESIGN.html#requirements","title":"Requirements","text":"For our purpose, we require the Auth MS to be able to handle only requests of the general form \u201dIs User X allowed to access /BackendMS/example?\u201d.
If the user\u2019s identity is correctly verified though the GitLab OAuth2 provider AND this user is allowed to access the requested microservice/action, then the Auth MS should respond with a 200 (OK) code and let the request pass through the gateway to the required microservice/server.
If the user\u2019s identity verification through GitLab OAuth2 fails OR this user is not permitted to access the request resource, then the Auth MS should respond with a 40X (NOT OK) code, and restrict the request from going forward.
"},{"location":"developer/oauth/DESIGN.html#forward-auth-middleware-in-traefik","title":"Forward Auth Middleware in Traefik","text":"Traefik allows middlewares to be set for the routes configured into it. These middlewares intercept the route path requests, and perform analysis/modifications before sending the requests ahead to the services. Traefik has a ForwardAuth middleware that delegates authentication to an external service. If the external authentication server responds to the middleware with a 2XX response codes, the middleware acts as a proxy, letting the request pass through to the desired service. However, if the external server responds with any other response code, the request is dropped, and the response code returned by the external auth server is returned to the user
(source: Treafik documentation)
Thus, an Auth Microservice can be integrated into the existing gateway and DTaaS system structure easily by adding it as the external authentication server for ForwardAuth middlewares. These middlewares can be added on whichever routes/requests require authentication. For our specific purpose, this will be added to all routes since we impose atleast identity verification of users for any request through the gateway
"},{"location":"developer/oauth/DESIGN.html#auth-ms-design","title":"Auth MS Design","text":"The integrated Auth MS should thus work as described in the sequence diagram.
-
Any request made by the user is made on the React website, i.e. the frontend of the DTaaS software.
-
This request then goes through the Traefik gateway. Here it should be interrupted by the respective ForwardAuth middleware.
-
The middleware asks the Auth MS if this request for the given user should be allowed.
-
The Auth MS, i.e. the Auth server verifies the identity of the user using OAuth2 with GitLab, and checks if this user should be allowed to make this request.
-
If the user is verified and allowed to make the request, the Auth server responds with a 200 OK to Traefik Gateway (more specifically to the middleware in Traefik)
-
Traefik then forwards this request to the respective service. A response by the service, if any, will be passed through the chain back to the user.
-
However, If the user is not verified or not allowed to make this request, the Auth server responds with a 40x to Traefik gateway.
-
Traefik will then drop the request and respond to the Client informing that the request was forbidden. It will also pass the Auth servers response code
"},{"location":"developer/oauth/OAUTH2.0.html","title":"OAuth 2.0 Summary","text":"The Auth MS works on the OAuth 2.0 RFC. This document provides a brief summary of the working of the OAtuh 2.0 technology.
"},{"location":"developer/oauth/OAUTH2.0.html#entities","title":"Entities","text":"OAuth2, as used for user identity verification, has 3 main entities:
- The User: This is the entity whose identity we are trying to verify/know. In our case, this is the same as the user of the DTaaS software.
- The Client: This is the entity that wishes to know/verify the identity of a user. In our case, this is the Auth MS (initialised with a Gitlab application). This shouldn\u2019t be confused with the frontend website of DTaaS (referred to as Client in the previous section).
- The OAuth2 Identity Provider: This is the entity that allows the client to know the identity of the user. In our case, this is GitLab. Most commonly, users have an existing, protected account with this entity. The account is registered using a unique key, like an email ID or username and is usually password protected so that only that specific user can login using that account. After the user has logged in, they will be asked to approve sharing their profile information with the client. If they approve, the client will have access to the user\u2019s email id, username, and other profile information. This information can be used to know/verify the identity of the user.
Note: In general, it is possible for the Authorization server (which asks user for approval) and the Resource (User Identity) provider to be 2 different servers. However, in our case the Gitlab instance itself handles both the functions, through different API endpoints. The concepts remain the same. Thus, we only discuss the 3 main entities, the User, the OAuth2 Client and the Gitlab instance in our discussion.
"},{"location":"developer/oauth/OAUTH2.0.html#the-oauth2-client","title":"The OAuth2 Client","text":"Many sites allow you to initialise an OAuth2 client. For our purposes, we will use Gitlab itself, by making an \u201dapplication\u201d in Gitlab. However, it is not necessary to initialise a client using the same website as the identity provider. These are separate things. Our OAuth2 client is initialized by creating and configuring a Gitlab instance-wide application. There are two main things in this configuration:
- Redirect URI: It is the URI where the users are redirected to after they approve sharing information with the client.
- Scopes: These are the types and levels of access that the client can have over the user\u2019s profile. For our purposes, we only require the read user scope, which allows us to access the user\u2019s profile information for knowing the identity.
After the GitLab application is successfully created, we are provided a Client ID and Client Secret. This means our initialization is complete. This Client ID and Client Secret can be used in any application, essentially making that application the OAuth2 Client. This is why the Client secret should never be shared. We will use this Client ID and Client secret in our Auth MS, making it an OAuth2 Client application. It will now be able to follow the OAuth2 workflow to verify the identity of users.
"},{"location":"developer/oauth/OAUTH2.0.html#oauth-20-workflows","title":"OAuth 2.0 Workflows","text":"Two major different OAuth2.0 flows are used in DTaaS.
"},{"location":"developer/oauth/OAUTH2.0.html#oauth2-authorization-code-flow","title":"OAuth2 Authorization Code Flow","text":"This flow involves several steps and the exchange of an authorization code for an access tokens to ensure secure authorization. This flow is used for the DTaaS AuthMS, which is responsible for securing all backend DTaaS services
The OAuth2 workflow is initiated by the Client (Auth MS) whenever it requires knowing the identity of the user. Briefly, the flow starts when the Auth MS sends an authorization request to Gitlab. The Auth MS tries to obtain an access token, using which it can gather user information. Once it has user information, it can know the identity of the user and check whether the user has permission to access the requested resource.
The requests made by the Auth MS to the OAuth2 provider are abbreviated. A detailed explanation of the workflow for DTaaS specifically can be found in the AuthMS implementation docs
"},{"location":"developer/oauth/OAUTH2.0.html#oauth2-pkce-proof-key-for-code-exchange-flow","title":"OAuth2 PKCE (Proof Key for Code Exchange) Flow","text":"This is an extension to the OAuth2 Authorization Code Flow designed to provide an additional layer of security, particularly for public clients that cannot securely store client secrets. PKCE mitigates certain attack vectors, like authorization code interception.
The DTaaS client website login works based on the PKCE OAuth2.0 flow. More information about the details of this flow can be found here
"},{"location":"developer/servers/lib/lib-ms.html","title":"Library Microservice","text":"The Library Microservices provides users with access to files in user workspaces via API. This microservice will interface with local file system and Gitlab to provide uniform Gitlab-compliant API access to files.
Warning
This microservice is still under heavy development. It is still not a good replacement for file server we are using now.
"},{"location":"developer/servers/lib/lib-ms.html#architecture-and-design","title":"Architecture and Design","text":"The C4 level 2 diagram of this microservice is:
The GraphQL API provided by the library microservice shall be compliant with the Gitlab GraphQL service.
"},{"location":"developer/servers/lib/lib-ms.html#uml-diagrams","title":"UML Diagrams","text":""},{"location":"developer/servers/lib/lib-ms.html#class-diagram","title":"Class Diagram","text":"classDiagram\n class FilesResolver {\n -filesService: IFilesService\n +listDirectory(path: string): Promise<Project>\n +readFile(path: string): Promise<Project>\n }\n\n class FilesServiceFactory {\n -configService: ConfigService\n -localFilesService: LocalFilesService\n +create(): IFilesService\n }\n\n class LocalFilesService {\n -configService: ConfigService\n -getFileStats(fullPath: string, file: string): Promise<Project>\n +listDirectory(path: string): Promise<Project>\n +readFile(path: string): Promise<Project>\n }\n\n class ConfigService {\n +get(propertyPath: string): any\n }\n\n class IFilesService{\n listDirectory(path: string): Promise<Project>\n readFile(path: string): Promise<Project>\n }\n\n IFilesService <|-- FilesResolver: uses\n IFilesService <|.. LocalFilesService: implements\n IFilesService <|-- FilesServiceFactory: creates\n ConfigService <|-- FilesServiceFactory: uses\n ConfigService <|-- LocalFilesService: uses
"},{"location":"developer/servers/lib/lib-ms.html#sequence-diagram","title":"Sequence Diagram","text":"sequenceDiagram\n actor Client\n actor Traefik\n\n box LightGreen Library Microservice\n participant FR as FilesResolver\n participant FSF as FilesServiceFactory\n participant CS as ConfigService\n participant IFS as IFilesService\n participant LFS as LocalFilesService\n end\n\n participant FS as Local File System DB\n\n Client ->> Traefik : HTTP request\n Traefik ->> FR : GraphQL query\n activate FR\n\n FR ->> FSF : create()\n activate FSF\n\n FSF ->> CS : getConfiguration(\"MODE\")\n activate CS\n\n CS -->> FSF : return configuration value\n deactivate CS\n\n alt MODE = Local\n FSF ->> FR : return filesService (LFS)\n deactivate FSF\n\n FR ->> IFS : listDirectory(path) or readFile(path)\n activate IFS\n\n IFS ->> LFS : listDirectory(path) or readFile(path)\n activate LFS\n\n LFS ->> CS : getConfiguration(\"LOCAL_PATH\")\n activate CS\n\n CS -->> LFS : return local path\n deactivate CS\n\n LFS ->> FS : Access filesystem\n alt Filesystem error\n FS -->> LFS : Filesystem error\n LFS ->> LFS : Throw new InternalServerErrorException\n LFS -->> IFS : Error\n else Successful file operation\n FS -->> LFS : Return filesystem data\n LFS ->> IFS : return Promise<Project>\n end\n deactivate LFS\n activate IFS\n end\n\n alt Error thrown\n IFS ->> FR : return Error\n deactivate IFS\n FR ->> Traefik : return Error\n Traefik ->> Client : HTTP error response\n else Successful operation\n IFS ->> FR : return Promise<Project>\n deactivate IFS\n FR ->> Traefik : return Promise<Project>\n Traefik ->> Client : HTTP response\n end\n\n deactivate FR\n
"},{"location":"developer/servers/lib/lib-ms.html#dependency-graphs","title":"Dependency Graphs","text":"The figures are the dependency graphs generated from the code.
"},{"location":"developer/servers/lib/lib-ms.html#src-directory","title":"src directory","text":""},{"location":"developer/servers/lib/lib-ms.html#test-directory","title":"test directory","text":""},{"location":"developer/system/architecture.html","title":"System Overview","text":""},{"location":"developer/system/architecture.html#user-requirements","title":"User Requirements","text":"The DTaaS software platform users expect a single platform to support the complete DT lifecycle. To be more precise, the platform users expect the following features:
- Author \u2013 create different assets of the DT on the platform itself. This step requires use of some software frameworks and tools whose sole purpose is to author DT assets.
- Consolidate \u2013 consolidate the list of available DT assets and authoring tools so that users can navigate the library of reusable assets. This functionality requires support for discovery of available assets.
- Configure \u2013 support selection and configuration of DTs. This functionality also requires support for validation of a given configuration.
- Execute \u2013 provision computing infrastructure on demand to support execution of a DT.
- Explore \u2013 interact with a DT and explore the results stored both inside and outside the platform. Exploration may lead to analytical insights.
- Save \u2013 save the state of a DT that\u2019s already in the execution phase. This functionality is required for on demand saving and re-spawning of DTs.
- Services \u2013 integrate DTs with on-platform or external services with which users can interact.
- Share \u2013 share a DT with other users of their organisation.
"},{"location":"developer/system/architecture.html#system-architecture","title":"System Architecture","text":"The figure shows the system architecture of the the DTaaS software platform.
"},{"location":"developer/system/architecture.html#system-components","title":"System Components","text":"The users interact with the software platform using a webapp. The service router is a single point of entry for direct access to the platform services. The service router is responsible for controlling user access to the microservice components. The service mesh enables discovery of microservices, load balancing and authorization functionalities.
In addition, there are microservices for managing DT reusable assets and platform services, a DT lifecycle manager, a DT execution manager, and accounting and security microservices. The microservices are complementary and composable; they fulfil the core requirements of the system.
The microservices responsible for satisfying the user requirements are:
- The security microservice implements role-based access control (RBAC) in the platform.
- The accounting microservice is responsible for keeping track of the live status of platform, DT asset and infrastructure usage. Any licensing, usage restrictions need to be enforced by the accounting microservice. Accounting is a pre-requisite to commercialisation of the platform. Due to significant use of external infrastructure and resources via the platform, the accounting microservice needs to interface with accounting systems of the external services.
- User Workspaces are virtual environments in which users can perform lifecycle operations on DTs. These virtual environments are either docker containers or virtual machines which provide desktop interface to users.
- Reusable Assets are assets / parts from which DTs are created. Further explanation is available on the assets page
- Services are dedicated services available to all the DTs and users of the DTaaS platform. Services build upon DTs and provide user interfaces to users.
- DT Execution Manager provides virtual and isolated execution environments for DTs. The execution manager is also responsible for dynamic resource provisioning of cloud resources.
- DT Lifecycle Manager manages the lifecycle operations on all DTs. It also directs DT Execution Manager to perform execute, save and terminate operations on DTs.
If you are interested, please take a look at the C4 architectural diagram.
A mapping of the architectural components to the implementation components is available in the table.
| System Component | Doc Page(s) |
|---|---|
| Service Router | Traefik Gateway |
| Web Application | React Webapplication |
| Reusable Assets | Library Microservice |
| Digital Twins and DevOps | Integrated Gitlab |
| Services | Third-party Services (MQTT, InfluxDB, RabbitMQ, Grafana and MongoDB) |
| DT Lifecycle | Not available yet |
| Security | Gitlab client OAuth and server OAuth |
| Digital Twins as Services | DT Runner |
| Accounting | Not available yet |
| Execution Manager | Not available yet |
"},{"location":"developer/system/architecture.html#references","title":"References","text":"Font sources: fileformat
"},{"location":"developer/system/current-status.html","title":"Current Status","text":"The DTaaS software platform is currently under development. Crucial system components are in place with ongoing development work focusing on increased automation and feature enhancement. The figure below shows the current status of the development work.
A C4 representation of the same diagram is also available.
"},{"location":"developer/system/current-status.html#user-security","title":"User Security","text":"There is a two-level authorization mechanisms in place for the react website and the Traefik gateway.
The react website component uses Gitlab for user authorization via the OAuth protocol.
"},{"location":"developer/system/current-status.html#gateway-authorization","title":"Gateway Authorization","text":"The Traefik gateway has OAuth2 web server authorization provided by Traefik-forward-auth microservice. This authorization protects all the microservices and workspaces running in the backend.
"},{"location":"developer/system/current-status.html#user-workspaces","title":"User Workspaces","text":"All users have dedicated dockerized-workspaces. These docker-images are based on container images published by mltooling group.
Thus DT experts can develop DTs from existing DT components and share them with other users. A file server has been set up to act as a DT asset repository. Each user gets space to store private DT assets and also gets access to shared DT assets. Users can synchronize their private DT assets with external git repositories. In addition, the asset repository transparently gets mapped to user workspaces within which users can perform DT lifecycle operations. There is also a library microservice which in the long run will replace the file server.
Users can run DTs in their workspaces and also permit remote access to other users. There is already shared access to internal and external services. With these two provisions, users can treat live DTs as service components in their own software systems.
"},{"location":"developer/system/current-status.html#platform-services","title":"Platform Services","text":"There are four external services integrated with the DTaaS software platform. They are: InfluxDB, Grafana, RabbitMQ, MQTT, and MongoDB.
These services can be used by DTs and PTs for communication, storing and visualization of data. There can also be monitoring services setup based on these services.
"},{"location":"developer/system/current-status.html#development-priorities","title":"Development Priorities","text":"The development priorities for the DTaaS software development team are:
- DevOps from React frontend website
- Upgrade software stack of user workspaces
- Increased automation of installation procedures
- DT Runner (API Interface to DT)
- DT Configuration DSL in the form of a YAML schema
Your contributions are highly welcome.
"},{"location":"developer/system/current-status.html#references","title":"References","text":"Font sources: fileformat
"},{"location":"developer/testing/intro.html","title":"Testing","text":""},{"location":"developer/testing/intro.html#common-questions-on-testing","title":"Common Questions on Testing","text":""},{"location":"developer/testing/intro.html#what-is-software-testing","title":"What is Software Testing","text":"Software testing is a procedure to investigate the quality of a software product in different scenarios. It can also be stated as the process of verifying and validating that a software program or application works as expected and meets the business and technical requirements that guided design and development.
"},{"location":"developer/testing/intro.html#why-software-testing","title":"Why Software Testing","text":"Software testing is required to point out the defects and errors that were made during different development phases. Software testing also ensures that the product under test works as expected in all different cases \u2013 stronger the test suite, stronger is our confidence in the product that we have built. One important benefit of software testing is that it facilitates the developers to make incremental changes to source code and make sure that the current changes are not breaking the functionality of the previously existing code.
"},{"location":"developer/testing/intro.html#what-is-tdd","title":"What is TDD","text":"TDD stands for Test Driven Development. It is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards. The goal of TDD can be viewed as specification and not validation. In other words, it\u2019s one way to think through your requirements or design before your write your functional code.
"},{"location":"developer/testing/intro.html#what-is-bdd","title":"What is BDD","text":"BDD stands for \u201cBehaviour Driven Development\u201d. It is a software development process that emerged from TDD. It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. This provides software development and management teams with shared tools and a shared process to collaborate on software development. BDD is largely facilitated through the use of a simple domain-specific language (DSL) using natural language constructs (e.g., English-like sentences) that can express the behavior and the expected outcomes. Mocha and Cucumber testing libraries are built around the concepts of BDD.
"},{"location":"developer/testing/intro.html#testing-workflow","title":"Testing workflow","text":"(Ref: Ham Vocke, The Practical Test Pyramid)
We follow a testing workflow in accordance with the test pyramid diagram given above, starting with isolated tests and moving towards complete integration for any new feature changes. The different types of tests (in the order that they should be performed) are explained below:
"},{"location":"developer/testing/intro.html#unit-tests","title":"Unit Tests","text":"Unit testing is a level of software testing where individual units/ components of a software are tested. The objective of Unit Testing is to isolate a section of code and verify its correctness.
Ideally, each test case is independent from the others. Substitutes such as method stubs, mock objects, and spies can be used to assist testing a module in isolation.
"},{"location":"developer/testing/intro.html#benefits-of-unit-testing","title":"Benefits of Unit Testing","text":" - Unit testing increases confidence in changing/ maintaining code. If good unit tests are written and if they are run every time any code is changed, we will be able to promptly catch any defects introduced due to the change.
- If codes are already made less interdependent to make unit testing possible, the unintended impact of changes to any code is less.
- The cost, in terms of time, effort and money, of fixing a defect detected during unit testing is lesser in comparison to that of defects detected at higher levels.
"},{"location":"developer/testing/intro.html#unit-tests-in-dtaas","title":"Unit Tests in DTaaS","text":"Each component DTaaS project uses unique technology stack. Thus the packages used for unit tests are different. Please check the test/
directory of a component to figure out the unit test packages used.
"},{"location":"developer/testing/intro.html#integration-tests","title":"Integration tests","text":"Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. In DTaaS, we use an integration server for software development as well as such tests.
The existing integration tests are done at the component level. There are no integration tests between the components. This task has been postponed to future.
"},{"location":"developer/testing/intro.html#end-to-end-tests","title":"End-to-End tests","text":"Testing any code changes through the end user interface of your software is essential to verify if your code has the desired effect for the user. End-to-End tests in DTaaS a functional setup.
There are end-to-end tests in the DTaaS. This task has been postponed to future.
"},{"location":"developer/testing/intro.html#feature-tests","title":"Feature Tests","text":"A Software feature can be defined as the changes made in the system to add new functionality or modify the existing functionality. Each feature is said to have a characteristics that is designed to be useful, intuitive and effective. It is important to test a new feature when it has been added. We also need to make sure that it does not break the functionality of already existing features. Hence feature tests prove to be useful.
The DTaaS project does not have any feature tests yet. Cucumber shall be used in future to implement feature tests.
"},{"location":"developer/testing/intro.html#references","title":"References","text":" - Arthur Hicken, Shift left approach to software testing
- Justin Searls and Kevin Buchanan, Contributing Tests wiki.
- This wiki has good explanation of TDD and test doubles.
"},{"location":"user/features.html","title":"Overview","text":""},{"location":"user/features.html#advantages","title":"Advantages","text":"The DTaaS software platform provides certain advantages to users:
- Support for different kinds of Digital Twins
- CFD, Simulink, co-simulation, FEM, ROM, ML etc.
- Integrates with other Digital Twin frameworks
- Facilitate availability of Digital Twin as a Service
- Collaboration and reuse
- Private workspaces for verification of reusable assets, trial run DTs
- Cost effectiveness
"},{"location":"user/features.html#software-features","title":"Software Features","text":"Each installation of DTaaS platform comes with the features highlighted in the following picture.
All the users have dedicated workspaces. These workspaces are dockerized versions of Linux Desktops. The user desktops are isolated so the installations and customizations done in one user workspace do not effect the other user workspaces.
Each user workspace comes with some development tools pre-installed. These tools are directly accessible from web browser. The following tools are available at present:
Tool Advantage Jupyter Lab Provides flexible creation and use of digital twins and their components from web browser. All the native Jupyterlab usecases are supported here. Jupyter Notebook Useful for web-based management of their files (library assets) VS Code in the browser A popular IDE for software development. Users can develop their digital twin-related assets here. ungit An interactive git client. Users can work with git repositories from web browser In addition, users have access to xfce-based remote desktop via VNC client. The VNC client is available right in the web browser. The xfce supported desktop software can also be run in their workspace.
The DTaaS software platform has some pre-installed services available. The currently available services are:
Service Advantage InfluxDB Time-series database primarly for storing time-series data from physical twins. The digital twins can use an already existing data. Users can also create visualization dashboards for their digital twins. RabbitMQ Communication broker for communication between physical and digital twins Grafana Visualization dashboards for their digital twins. MQTT Lightweight data transfer broker for IoT devices / physical twins feeding data into digital twins. MongoDB NoSQL document database for storing metadata of data from physical twins In addition, the workspaces are connected to the Internet so all the Digital Twins running in the workspace can interact with both the internal and external services.
The users can publish and reuse the digital twin assets available on the platform. In addition, users can run their digital twins and make these live digital twins available as services to their clients. The clients need not be users of the DTaaS software installation.
"},{"location":"user/motivation.html","title":"DTaaS for Users","text":""},{"location":"user/motivation.html#user-guide","title":"User Guide","text":"This guide is for users of the DTaaS platform. Please make sure that you have access to a live installation of the DTaaS platform. The easiest is localhost installation scenario.
Please see user-specific Slides and Video to get the conceptual framework behind composable digital twins in the DTaaS platform.
"},{"location":"user/motivation.html#motivation","title":"Motivation","text":"How can DT software platforms enable users collaborate to:
- Build digital twins (DTs)
- Use DTs themselves
- Share DTs with other users
- Provide the existing DTs as Service to other users
In addition, how can the DT software platforms:
- Support DT lifecycle
- Scale up rather than scale down (flexible convention over configuration)
"},{"location":"user/motivation.html#existing-approaches","title":"Existing Approaches","text":"There are quite a few solutions proposed in the recent past to solve this problem. Some of them are:
- Focus on data from Physical Twins (PTs) to perform analysis, diagnosis, planning etc\u2026
- Share DT assets across the upstream, downstream etc\u2026.
- Evaluate different models of PT
- DevOps for Cyber Physical Systems (CPS)
- Scale DT / execution of DT / ensemble of related DTs
- Support for PT product lifecycle
"},{"location":"user/motivation.html#our-approach","title":"Our Approach","text":" - Support for transition from existing workflows to DT frameworks
- Create DTs from reusable assets
- Enable users to share DT assets
- Offer DTs as a Service
- Integrate the DTs with external software systems
- Separate configurations of independent DT components
"},{"location":"user/digital-twins/create.html","title":"Create a Digital Twin","text":"The first step in digital twin creation is to use the available assets in your workspace. If you have assets / files in your computer that need to be available in the DTaaS workspace, then please follow the instructions provided in library assets.
There are dependencies among the library assets. These dependencies are shown below.
A digital twin can only be created by linking the assets in a meaningful way. This relationship can be expressed using a mathematical equation:
where D denotes data, M denotes models, F denotes functions, T denotes tools, denotes DT configuration and is a symbolic notation for a digital twin itself. The expression denotes composition of DT from D,M,T and F assets. The indicates zero or one more instances of an asset and indicates one or more instances of an asset.
The DT configuration specifies the relevant assets to use, the potential parameters to be set for these assets. If a DT needs to use RabbitMQ, InfluxDB like services supported by the platform, the DT configuration needs to have access credentials for these services.
This kind of generic DT definition is based on the DT examples seen in the wild. You are at liberty to deviate from this definition of DT. The only requirement is the ability to run the DT from either commandline or desktop.
Tip
If you are stepping into the world of Digital Twins, you might not have distinct digital twin assets. You are likely to have one directory of everything in which you run your digital twin. In such a case we recommend that you upload this monolithic digital twin into digital_twin/your_digital_twin_name directory.
"},{"location":"user/digital-twins/create.html#example","title":"Example","text":"The Examples repository contains a co-simulation setup for mass spring damper. This example illustrates the potential of using co-simulation for digital twins.
The file system contents for this example are:
workspace/\n data/\n mass-spring-damper\n input/\n output/\n\n digital_twins/\n mass-spring-damper/\n cosim.json\n time.json\n lifecycle/\n analyze\n clean\n evolve\n execute\n save\n terminate\n README.md\n\n functions/\n models/\n MassSpringDamper1.fmu\n MassSpringDamper2.fmu\n\n tools/\n common/\n data/\n functions/\n models/\n tools/\n maestro-2.3.0-jar-with-dependencies.jar\n
The workspace/data/mass-spring-damper/
contains input
and output
data for the mass-spring-damper digital twin.
The two FMU models needed for this digital twin are in models/
directory.
The co-simulation digital twin needs Maestro co-simulation orchestrator. Since this is a reusable asset for all the co-simulation based DTs, the tool has been placed in common/tools/
directory.
The actual digital twin configuration is specified in digital twins/mass-spring-damper
directory. The co-simulation configuration is specified in two json files, namely cosim.json
and time.json
. A small explanation of digital twin for its users can be placed in digital twins/mass-spring-damper/README.md
.
The launch program for this digital twin is in digital twins/mass-spring-damper/lifecycle/execute
. This launch program runs the co-simulation digital twin. The co-simulation runs till completion and then ends. The programs in digital twins/mass-spring-damper/lifecycle
are responsible for lifecycle management of this digital twin. The lifecycle page provides more explanation on these programs.
Execution of a Digital Twin
A frequent question arises on the run time characteristics of a digital twin. The natural intuition is to say that a digital twin must operate as long as its physical twin is in operation. If a digital twin runs for a finite time and then ends, can it be called a digital twin? The answer is a resounding YES. The Industry 4.0 usecases seen among SMEs have digital twins that run for a finite time. These digital twins are often run at the discretion of the user.
You can run this digital twin by,
- Go to Workbench tools page of the DTaaS website and open VNC Desktop. This opens a new tab in your browser
- A page with VNC Desktop and a connect button comes up. Click on Connect. You are now connected to the Linux Desktop of your workspace.
- Open a Terminal (black rectangular icon in the top left region of your tab) and type the following commands.
-
Download the example files by following the instructions given on examples overview.
-
Go to the digital twin directory and run
cd /workspace/examples/digital_twins/mass-spring-damper\nlifecycle/execute\n
The last command executes the mass-spring-damper digital twin and stores the co-simulation output in data/mass-spring-damper/output
.
"},{"location":"user/digital-twins/devops.html","title":"Digital Twins and DevOps","text":"DevOps has been a well established software development practice. We are bringing out an experimental feature of integration DevOps in the DTaaS.
Warning
This feature is under documented. Please expect some instability in this release. However, we are working to mature the feature and improve the documentation.
This feature requires specific installation setup.
- Integrated gitlab installation
- A valid gitlab repository for the logged in user. Please see an example repository. You can clone this repository and customize to your needs.
- A linked Gitlab Runner to the user gitlab repository.
"},{"location":"user/digital-twins/devops.html#manage","title":"Manage","text":"The manage tab allows for different operations on a digital twin:
- Checking the details (Details button)
- Delete (Delete button)
- Modify / Reconfigure (Reconfigure button)
A digital twin placed in the DTaaS has a certain recommended structure. Please see the assets pag for an explanation and this example.
The information page shown using the Details button, shows the README.md information stored inside the digital twin directory.
A reconfigure button opens an editor and shows all the files corresponding to a digital twin. All of these files can be updated. These files correspond to three categories.
- Description
- Configuration
- Lifecycle
"},{"location":"user/digital-twins/devops.html#execute","title":"Execute","text":"The execute tabs shows the possibility of executing multiple digital twins. Once an execution of digital twin is complete, you can see the execution log as well.
"},{"location":"user/digital-twins/lifecycle.html","title":"Digital Twin Lifecycle","text":"The physical products in the real world have product lifecycle. A simplified four-stage product life is illustrated here.
A digital twin tracking the physical products (twins) need to track and evolve in conjunction with the corresponding physical twin.
The possible activities undertaken in each lifecycle phases are illustrated in the figure.
(Ref: Minerva, R, Lee, GM and Crespi, N (2020) Digital Twin in the IoT context: a survey on technical features, scenarios and architectural models. Proceedings of the IEEE, 108 (10). pp. 1785-1824. ISSN 0018-9219.)
"},{"location":"user/digital-twins/lifecycle.html#lifecycle-phases","title":"Lifecycle Phases","text":"The four phase lifecycle has been extended to a lifecycle with eight phases. The new phase names and the typical activities undertaken in each phase are outlined in this section.
A DT lifecycle consists of explore, create, execute, save, analyse, evolve and terminate phases.
Phase Main Activities explore selection of suitable assets based on the user needs and checking their compatibility for the purposes of creating a DT. create specification of DT configuration. If DT already exists, there is no creation phase at the time of reuse. execute automated / manual execution of a DT based on its configuration. The DT configuration must checked before starting the execution phase. analyse checking the outputs of a DT and making a decision. The outputs can be text files, or visual dashboards. evolve reconfigure DT primarily based on analysis. save involves saving the state of DT to enable future recovery. terminate stop the execution of DT. A digital twin faithfully tracking the physical twin lifecycle will have to support all the phases. It is also possible for digital twin engineers to add more phases to digital they are developing. Thus it is important for the DTaaS software platform needs to accommodate needs of different DTs.
A potential linear representation of the tasks undertaken in a digital twin lifecycle are shown here.
Again this is only a one possible pathway. Users are at liberty to alter the sequence of steps.
It is possible to map the lifecycle phases identified so far with the Build-Use-Share approach of the DTaaS software platform.
Even though not mandatory, having a matching coding structure makes it easy to for users to create and manage their DTs within the DTaaS. It is recommended to have the following structure:
workspace/\n digital_twins/\n digital-twin-1/\n lifecycle/\n analyze\n clean\n evolve\n execute\n save\n terminate\n
A dedicated program exists for each phase of DT lifecycle. Each program can be as simple as a script that launches other programs or sends messages to a live digital twin.
The recommended way to implement lifecycle phases within DTaaS is to create scripts. These scripts can be as simple as shell scripts.
"},{"location":"user/digital-twins/lifecycle.html#example-lifecycle-scripts","title":"Example Lifecycle Scripts","text":"Here are the example programs / scripts to manage three phases in the lifecycle of mass-spring-damper DT.
lifecycle/execute#!/bin/bash\nmkdir -p /workspace/data/mass-spring-damper/output\n#cd ..\njava -jar /workspace/common/tools/maestro-2.3.0-jar-with-dependencies.jar \\\n import -output /workspace/data/mass-spring-damper/output \\\n --dump-intermediate sg1 cosim.json time.json -i -vi FMI2 \\\n output-dir>debug.log 2>&1\n
The execute phases uses the DT configuration, FMU models and Maestro tool to execute the digital twin. The script also stores the output of cosimulation in /workspace/data/mass-spring-damper/output
.
It is possible for a DT not to support a specific lifecycle phase. This intention can be specified with an empty script and a helpful message if deemed necessary.
lifecycle/analyze#!/bin/bash\nprintf \"operation is not supported on this digital twin\"\n
The lifecycle programs can call other programs in the code base. In the case of lifecycle/terminate
program, it is calling another script to do the necessary job.
lifecycle/terminate#!/bin/bash\nlifecycle/clean\n
"},{"location":"user/examples/index.html","title":"DTaaS Examples","text":"There are some example digital twins created for the DTaaS software. You can peruse these examples and follow the steps given in this Examples section to experience features of the DTaaS software platform and understand best practices for managing digital twins within the platform. Please see these slides and video to get an overview of these examples.
There are two demo vides available: CP-SENS project (slides and video) and Incubator (video). These two videos have been recorded using DTaaS v0.5.0.
"},{"location":"user/examples/index.html#copy-examples","title":"Copy Examples","text":"The first step is to copy all the example code into your user workspace within the DTaaS. Use the given shell script to copy all the examples into /workspace/examples
directory.
wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS-examples/main/getExamples.sh\nbash getExamples.sh\n
"},{"location":"user/examples/index.html#example-list","title":"Example List","text":"The digital twins provided in examples vary in their complexity. It is best to use the examples in the following order.
- Mass Spring Damper
- Water Tank Fault Injection
- Water Tank Model Swap
- Desktop Robotti and RabbitMQ
- Water Treatment Plant and OPC-UA
- Three Water Tanks with DT Manager Framework
- Flex Cell with Two Industrial Robots
- Incubator
- Firefighters in Emergency Environments
- Mass Spring Damper with NuRV Runtime Monitor FMU
- Water Tank Fault Injection with NuRV Runtime Monitor FMU
- Incubator Co-Simulation with NuRV Runtime Monitor FMU
- Incubator with NuRV Runtime Monitor as Service
- Incubator with NuRV Runtime Monitor FMU as Service
DTaaS examples
"},{"location":"user/examples/drobotti-rmqfmu/index.html","title":"Desktop Robotti with RabbitMQ","text":""},{"location":"user/examples/drobotti-rmqfmu/index.html#overview","title":"Overview","text":"This example demonstrates bidirectional communication between a mock physical twin and a digital twin of a mobile robot (Desktop Robotti). The communication is enabled by RabbitMQ Broker.
"},{"location":"user/examples/drobotti-rmqfmu/index.html#example-structure","title":"Example Structure","text":"The mock physical twin of mobile robot is created using two python scripts
- data/drobotti_rmqfmu/rmq-publisher.py
- data/drobotti_rmqfmu/consume.py
The mock physical twin sends its physical location in (x,y) coordinates and expects a cartesian distance calculated from digital twin.
The rmq-publisher.py reads the recorded (x,y) physical coordinates of mobile robot. The recorded values are stored in a data file. These (x,y) values are published to RabbitMQ Broker. The published (x,y) values are consumed by the digital twin.
The consume.py subscribes to RabbitMQ Broker and waits for the calculated distance value from the digital twin.
The digital twin consists of a FMI-based co-simulation, where Maestro is used as co-orchestration engine. In this case, the co-simulation is created by using two FMUs - RMQ FMU (rabbitmq-vhost.fmu) and distance FMU (distance-from-zero.fmu). The RMQ FMU receives the (x,y) coordinates from rmq-publisher.py and sends calculated distance value to consume.py. The RMQ FMU uses RabbitMQ broker for communication with the mock mobile robot, i.e., rmq-publisher.py and consume.py. The distance FMU is responsible for calculating the distance between (0,0) and (x,y). The RMQ FMU and distance FMU exchange values during co-simulation.
"},{"location":"user/examples/drobotti-rmqfmu/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models, one tool, one data, and two scripts to create mock physical twin. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models distance-from-zero.fmu Private No rmq-vhost.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes Data drobotti_playback_data.csv private No Mock PT rmq-publisher.py Private No consume.py Private No This DT has many configuration files. The coe.json
and multimodel.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
The RabbitMQ access credentials need to be provided in multimodel.json
. The rabbitMQ-credentials.json
provides RabbitMQ access credentials for mock PT python scripts. Please add your credentials in both these files.
"},{"location":"user/examples/drobotti-rmqfmu/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool and pip packages for python scripts Execute Runs both DT and mock PT Clean Clears run logs and outputs"},{"location":"user/examples/drobotti-rmqfmu/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/drobotti_rmqfmu\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/drobotti-rmqfmu/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 in the workspace. Also install the required python pip packages for rmq-publisher.py and consume.py scripts.
lifecycle/create\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#execute","title":"Execute","text":"Run the python scripts to start mock physical twin. Also run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the two FMU models.
lifecycle/execute\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/digital_twins/drobotti_rmqfmu directory.
Executing the DT will generate and launch a co-simulation (RMQFMU and distance FMU), and two python scripts. One to publish data that is read from a file. And one to consume what is sent by the distance FMU.
In this examples the DT will run for 10 seconds, with a stepsize of 100ms. Thereafter it is possible to examine the logs produce in /workspace/examples/digital_twins/drobotti_rmqfmu/target
. The outputs for each FMU, xpos and ypos for the RMQFMU, and the distance for the distance FMU are recorded in the outputs.csv
file. Other logs can be examined for each FMU and the publisher scripts. Note that, the RMQFMU only sends data, if the current input is different to the previous one.
"},{"location":"user/examples/drobotti-rmqfmu/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#references","title":"References","text":"The RabbitMQ FMU github repository contains complete documentation and source code of the rmq-vhost.fmu.
More information about the case study is available in:
Frasheri, Mirgita, et al. \"Addressing time discrepancy between digital\nand physical twins.\" Robotics and Autonomous Systems 161 (2023): 104347.\n
"},{"location":"user/examples/flex-cell/index.html","title":"Flex Cell Digital Twin with Two Industrial Robots","text":""},{"location":"user/examples/flex-cell/index.html#overview","title":"Overview","text":"The flex-cell Digital Twin is a case study with two industrial robotic arms, a UR5e and a Kuka LBR iiwa 7, working in a cooperative setting on a manufacturing cell.
The case study focuses on the robot positioning in the discrete cartesian space of the flex-cell working space. Therefore, it is possible to send (X,Y,Z) commands to both robots, which refer to the target hole and height they want should move to.
The flex-cell case study is managed using the TwinManager
(formerly DT Manager
), which is packed as a jar library in the tools, and run from a java main file.
The TwinManager
uses Maestro as a slave for co-simulation, so it generates the output of the co-simulation and can interact with the real robots at the same time (with the proper configuration and setup). The mainfile can be changed according to the application scope, i.e., the /workspace/examples/tools/flex-cell/FlexCellDTaaS.java
can be manipulated to get a different result.
The /workspace/examples/models/flex-cell/
folder contains the .fmu
files for the kinematic models of the robotic arms, the .urdf
files for visualization (including the grippers), and the .aasx
files for the schema representation with Asset Administration Shell.
The case study also uses RabbitMQFMU to inject values into the co-simulation, therefore, there is the rabbitmqfmu in the models folder as well. Right now, RabbitMQFMU is only used for injecting values into the co-simulation, but not the other way around. The TwinManager
is in charge of reading the values from the co-simulation output and the current state of the physical twins.
"},{"location":"user/examples/flex-cell/index.html#example-structure","title":"Example Structure","text":"The example structure represents the components of the flex-cell DT implementation using the TwinManager
architecture.
The TwinManager orchestrates the flex-cell DT via the Flex-cell DT System, which is composed of 2 smaller Digital Twins, namely, the DT UR5e and the DT Kuka lbr iiwa 7. The TwinManager also provides the interface for the Physical Twins, namely, PT UR5e and PT Kuka lbr iiwa 7. Each Physical Twin and Digital Twin System has a particular endpoint (with a different specialization), which is initialized from configuration files and data model (twin schema).
The current endpoints used in this implementation are:
Digital or Physical Twin Endpoint Flex-cell DT System MaestroEndpoint DT UR5e FMIEndpoint DT Kuka lbr iiwa 7 FMIEndpoint PT UR5e MQTTEndpoint and RabbitMQEndpoint PT Kuka lbr iiwa 7 MQTTEndpoint and RabbitMQEndpoint The Flex-cell DT System uses another configuration to be integrated with the Maestro co-simulation engine.
In the lower part, the Flex-cell System represents the composed physical twin, including the two robotic arms and controller and the Flex-cell Simulation is the mock-up representation for the real system, which is implemented by FMU blocks and their connections.
"},{"location":"user/examples/flex-cell/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses seven models, five tools, six data files, two functions, and one script. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Model kukalbriiwa_model.fmu Private No kuka_irw_gripper_rg6.urdf Private No kuka.aasx Private No ur5e_model.fmu Private No ur5e_gripper_2fg7.urdf Private No ur5e.aasx Private No rmq-vhost.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes TwinManagerFramework-0.0.2.jar Private Yes urinterface (installed with pip) Private No kukalbrinterface Private No robots_flexcell Private No FlexCellDTaaS.java (main script) Private No Data publisher-flexcell-physical.py Private No ur5e_mqtt_publisher.py Private No connections.conf Private No outputs.csv Private No kukalbriiwa7_actual.csv Private No ur5e_actual.csv Private No Function plots.py Private No prepare.py Private No"},{"location":"user/examples/flex-cell/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
- Installation of dependencies in the create phase.
- Preparing the credentials for connections in the prepare phase.
- Execution of the experiment in the execution phase.
- Saving experiments in the save phase.
- Plotting the results of the co-simulation and the real data coming from the robots in the analyze phase.
- Terminating the background processes and cleaning up the outputs in the termination phase.
Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool, Compiles source code of TwinManager to create a usable jar package (used as tool) Prepare Takes the RabbitMQ and MQTT credentials in connections.conf file and configures different assets of DT. Execute The TwinManager executes the flex-cell DT and produces output in data/flex-cell/output
directory Save Save the experimental results Analyze Uses plotting functions to generate plots of co-simulation results Terminate Terminating the background processes Clean Cleans up the output data"},{"location":"user/examples/flex-cell/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/flex-cell\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
This example requires Java 11. The create script installs Java 11; however if you have already installed other Java versions, your default java might be pointing to another version. You can check and modify the default version using the following commands.
java -version\nupdate-alternatives --config java\n
Now, run the following scripts:
"},{"location":"user/examples/flex-cell/index.html#create","title":"Create","text":"Installs Open Java Development Kit 11 and a python virtual environment with pip dependencies. Also builds the TwinManager
tool (TwinManagerFramework-0.0.2.jar) from source code.
lifecycle/create\n
"},{"location":"user/examples/flex-cell/index.html#prepare","title":"Prepare","text":"Configure different assets of DT with these credentials. The functions/flex-cell/prepare.py
script is used for this purpose. The only thing needed to set up the connection is to update the file /workspace/examples/data/flex-cell/input/connections.conf
with the connection parameters for MQTT and RabbitMQ and then execute the prepare
script.
lifecycle/prepare\n
The following files are updated with the configuration information:
/workspace/examples/digital_twins/flex-cell/kuka_actual.conf
/workspace/examples/digital_twins/flex-cell/ur5e_actual.conf
/workspace/examples/data/flex-cell/input/publisher-flexcell-physical.py
modelDescription.xml
for the RabbitMQFMU require special credentials to connect to the RabbitMQ and the MQTT brokers.
"},{"location":"user/examples/flex-cell/index.html#execute","title":"Execute","text":"Execute the flex-cell digital twin using TwinManager. TwinManager in-turn runs the co-simulation using Maestro. Generates the co-simulation output.csv file at /workspace/examples/data/flex-cell/output
. The execution needs to be stopped with control + c
since the TwinManager
runs the application in a non-stopping loop.
lifecycle/execute\n
"},{"location":"user/examples/flex-cell/index.html#save","title":"Save","text":"Each execution of the DT is treated as a single run. The results of one execution are saved as time-stamped co-simulation output file in The TwinManager executes the flex-cell digital twin and produces output in data/flex-cell/output/saved_experiments
directory.
lifecycle/save\n
The execute
and save
scripts can be executed in that order any number of times. A new file data/flex-cell/output/saved_experiments
directory with each iteration.
"},{"location":"user/examples/flex-cell/index.html#analyze","title":"Analyze","text":"There are dedicated plotting functions in functions/flex-cell/plots.py
. This script plots the co-simulation results against the recorded values from the two robots.
lifecycle/analyze\n
"},{"location":"user/examples/flex-cell/index.html#terminate","title":"Terminate","text":"Stops the Maestro running in the background. Also stops any other jvm process started during execute phase.
lifecycle/terminate\n
"},{"location":"user/examples/flex-cell/index.html#clean","title":"Clean","text":"Removes the output generated during execute phase.
lifecycle/clean\n
"},{"location":"user/examples/flex-cell/index.html#examining-the-results","title":"Examining the results","text":"Executing this Digital Twin will generate a co-simulation output, but the results can also be monitored from updating the /workspace/examples/tools/flex-cell/FlexCellDTaaS.java
with a specific set of getAttributeValue
commands, such as shown in the code. That main file enables the online execution and comparison on Digital Twin and Physical Twin at the same time and at the same abstraction level.
The output is generated to the /workspace/examples/data/flex-cell/output
folder. In case a specific experiments is to be saved, the save lifecycle script stores the co-simulation results into the /workspace/examples/data/flex-cell/output/saved_experiments
folder.
In the default example, the co-simulation is run for 11 seconds in steps of 0.2 seconds. This can be modified for a longer period and different step size. The output stored in outputs.csv
contains the joint position of both robotic arms and the current discrete (X,Y,Z) position of the TCP of the robot. Additional variables can be added, such as the discrete (X,Y,Z) position of the other joints.
When connected to the real robots, the tools urinterface
and kukalbrinterface
log their data at a higher sampling rate.
"},{"location":"user/examples/flex-cell/index.html#references","title":"References","text":"The RabbitMQ FMU github repository contains complete documentation and source code of the rmq-vhost.fmu.
More information about the TwinManager (formerly DT Manager) and the case study is available in:
- D. Lehner, S. Gil, P. H. Mikkelsen, P. G. Larsen and M. Wimmer, \"An Architectural Extension for Digital Twin Platforms to Leverage Behavioral Models,\" 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, 2023, pp. 1-8, doi: 10.1109/CASE56687.2023.10260417.
- S. Gil, P. H. Mikkelsen, D. Tola, C. Schou and P. G. Larsen, \"A Modeling Approach for Composed Digital Twins in Cooperative Systems,\" 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), Sinaia, Romania, 2023, pp. 1-8, doi: 10.1109/ETFA54631.2023.10275601.
- S. Gil, C. Schou, P. H. Mikkelsen, and P. G. Larsen, \u201cIntegrating Skills into Digital Twins in Cooperative Systems,\u201d in 2024 IEEE/SICE International Symposium on System Integration (SII), 2024, pp. 1124\u20131131, doi: 10.1109/SII58957.2024.10417610.
"},{"location":"user/examples/incubator/index.html","title":"Incubator Digital Twin","text":""},{"location":"user/examples/incubator/index.html#overview","title":"Overview","text":"This is a case study of an Incubator with the purpose of understanding the steps and processes involved in developing a digital twin system. This incubator is an insulated container with the ability to keep a temperature and heat, but not cool. A picture of the incubator is given below.
The overall purpose of the system is to reach a certain temperature within a box and keep the temperature regardless of content. An overview of the system can be seen below:
The system consists of:
- 1x styrofoam box in order to have an insulated container
- 1x heat source to heat up the content within the Styrofoam box.
- 1x fan to distribute the heating within the box
- 2x temperature sensor to monitor the temperature within the box
- 1x temperature Sensor to monitor the temperature outside the box
- 1x controller to actuate the heat source and the fan and read sensory information from the temperature sensors, and communicate with the digital twin.
The original repository for the example can be found: Original repository. This trimmed version of the codebase does not have the following:
- docker support
- tests
- datasets
The original repository contains the complete documentation of the example, including the full system architecture, instructions for running with a physical twin, and instructions for running a 3D visualization of the incubator.
"},{"location":"user/examples/incubator/index.html#digital-twin-structure","title":"Digital Twin Structure","text":"This diagrams shows the main components and the interfaces they use to communicate. All components communicate via the RabbitMQ message exchange, and the data is stored in a time series database. The RabbitMQ and InfluxDB are platform services hosted by the DTaaS.
The Incubator digital twin is a pre-packaged digital twin. It can be used as is or integrated with other digital twins.
The mock physical twin is executed from incubator/mock_plant/real_time_model_solver.py
script.
"},{"location":"user/examples/incubator/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses a plethora of Python scripts to run the digital twin. By default it is configured to run with a mock physical twin. Furthermore, it depends on a RabbitMQ and an InfluxDB instances.
There is one configuration file: simulation.conf
. The RabbitMQ and InfluxDB configuration parameters need to be updated.
"},{"location":"user/examples/incubator/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Create Potentially updates the system and installs Python dependencies Execute Executes the Incubator digital twin and produces output in the terminal and in incubator/log.log. Clean Removes the log file."},{"location":"user/examples/incubator/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/incubator\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/incubator/index.html#create","title":"Create","text":"Potentially updates the system and installs Python dependencies.
lifecycle/create\n
"},{"location":"user/examples/incubator/index.html#execute","title":"Execute","text":"Executes the Incubator digital twin with a mock physical twin. Pushes the results in the terminal, incubator/log.log, and in InfluxDB.
lifecycle/execute\n
"},{"location":"user/examples/incubator/index.html#clean","title":"Clean","text":"Removes the output log file.
lifecycle/clean\n
"},{"location":"user/examples/incubator/index.html#examining-the-results","title":"Examining the results","text":"After starting all services successfully, the controller service will start producing output that looks like the following:
time execution_interval elapsed heater_on fan_on room box_air_temperature state\n19/11 16:17:59 3.00 0.01 True False 10.70 19.68 Heating\n19/11 16:18:02 3.00 0.03 True True 10.70 19.57 Heating\n19/11 16:18:05 3.00 0.01 True True 10.70 19.57 Heating\n19/11 16:18:08 3.00 0.01 True True 10.69 19.47 Heating\n19/11 16:18:11 3.00 0.01 True True 10.69 19.41 Heating\n
An InfluxDB dashboard can be setup based on incubator/digital_twin/data_access/influxdbserver/dashboards/incubator_data.json
. If the dashboard on the InfluxDB is setup properly, the following visualization can be seen:
"},{"location":"user/examples/incubator/index.html#references","title":"References","text":" Forked from: Incubator repository with commit ID: 989ccf5909a684ad26a9c3ec16be2390667643aa
To understand what a digital twin is, we recommend you read/watch one or more of the following resources:
- Feng, Hao, Cl\u00e1udio Gomes, Casper Thule, Kenneth Lausdahl, Alexandros Iosifidis, and Peter Gorm Larsen. \u201cIntroduction to Digital Twin Engineering.\u201d In 2021 Annual Modeling and Simulation Conference (ANNSIM), 1\u201312. Fairfax, VA, USA: IEEE, 2021. https://doi.org/10.23919/ANNSIM52504.2021.9552135.
- Video recording of presentation by Claudio Gomes
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html","title":"Incubator Digital Twin with NuRV FMU Monitoring Service","text":""},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#overview","title":"Overview","text":"This example demonstrates how an FMU can be used as a runtime monitoring service (in this example NuRV[1]) and connected with the Incubator digital twin to verify runtime behavior of the Incubator.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#simulated-scenario","title":"Simulated scenario","text":"This example simulates a scenario where the lid of the Incubator is removed and later put back on. The Incubator is equipped with anomaly detection capabilities, which can detect anomalous behavior (i.e. the removal of the lid). When an anomaly is detected, the Incubator triggers an energy saving mode where the heater is turned off.
From a monitoring perspective, we wish to verify that within 3 messages of an anomaly detection, the energy saving mode is turned on. To verify this behavior, we construct the property:
.
The monitor will output the unknown state as long as the property is satisfied and will transition to the false state once a violation is detected.
The simulated scenario progresses as follows:
- Initialization: The services are initialized and the Kalman filter in the Incubator is given 2 minutes to stabilize. Sometimes, the anomaly detection algorithm will detect an anomaly at startup even though the lid is still on. It will disappear after approx 15 seconds.
- After 2 minutes: The lid is lifted and an anomaly is detected. The energy saver is turned on shortly after.
- After another 30 seconds: The energy saver is manually disabled producing a false verdict.
- After another 30 seconds: The lid is put back on and the anomaly detection is given time to detect that the lid is back on. The monitor is then reset producing an Unknown verdict again. The simulation then ends.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#example-structure","title":"Example structure","text":"A diagram depicting the logical software structure of the example can be seen below.
The execute script is responsible for starting the NuRV service and running the Python script that controls the scenario (execute.py).
The execute.py script starts the Incubator services and runs the example scenario. Once the Incubator DT is started, a RabbitMQ client is created that subscribes to changes in the anomaly and energy_saving states of the DT, as well as the verdicts produced by the NuRV service. Each time an update is received, the full state and verdict is printed to the console.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#digital-twin-configuration","title":"Digital Twin configuration","text":"Before running the example, please configure the simulation.conf file with your RabbitMQ credentials.
The example uses the following assets:
Asset Type Names of Assets Visibility Reuse in other Examples Tools common/tool/NuRV/NuRV Common Yes Other common/fmi2_headers Common Yes DT common/digital_twins/incubator Common Yes Specification safe-operation.smv Private No Script execute.py Private No The safe-operation.smv file contains the default monitored specification as described in the Simulated scenario section. These can be configured as desired.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#lifecycle-phases","title":"Lifecycle phases","text":"The lifecycle phases for this example include:
Lifecycle phase Completed tasks create Downloads the necessary tools and creates a virtual python environment with the necessary dependencies execute Runs a python script that starts up the necessary services as well as the Incubator simulation. Various status messages are printed to the console, including the monitored system states and monitor verdict. If required, change the execute permissions of lifecycle scripts you need to execute. This can be done using the following command.
chmod +x lifecycle/{script}\n
where {script} is the name of the script, e.g. create, execute etc.
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#running-the-example","title":"Running the example","text":"To run the example, first run the following command in a terminal:
cd /workspace/examples/digital_twins/incubator-NuRV-fmu-monitor-service/\n
Then execute the create script followed by the execute script, using the following command:
lifecycle/{script}\n
The execute script will then start outputting system states and the monitor verdict approximately every 3 seconds. The output is printed as follows:
\"State: {anomaly state} & {energy_saving state}\"
where \"anomaly\" indicates that an anomaly is detected and \"!anomaly\" indicates that an anomaly is not currently detected. The same format is used for the energy_saving state. NuRV verdicts are printed as follows
\"Verdict from NuRV: {verdict}\".
The monitor verdict can be false or unknown, where the latter indicates that the monitor does not yet have sufficient information to determine the satisfaction of the property. The monitor will never produce a true verdict as the entire trace must be verified to ensure satisfaction due to the G operator. Thus the unknown state can be viewed as a tentative true verdict.
An example output trace is provided below:
....\nUsing LIFECYCLE_PATH: /workspace/examples/digital_twins/incubator-NuRV-fmu-monitor-service/lifecycle\nUsing INCUBATOR_PATH: /workspace/examples/digital_twins/incubator-NuRV-fmu-monitor-service/lifecycle/../../../common/digital_twins/incubator\nStarting NuRV FMU Monitor Service, see output at /tmp/nurv-fmu-service.log\nNuRVService.py PID: 13496\nStarting incubator\nConnected to rabbitmq server.\nRunning scenario with initial state: lid closed and energy saver on\nSetting energy saver mode: enable\nSetting G_box to: 0.5763498\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\nState: !anomaly & !energy_saving\nState: !anomaly & !energy_saving\nVerdict from NuRV: unknown\n
"},{"location":"user/examples/incubator-NuRV-fmu-monitor-service/index.html#references","title":"References","text":" - Information on the NuRV monitor can be found on FBK website.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html","title":"Incubator Digital Twin with NuRV monitoring service","text":""},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#overview","title":"Overview","text":"This example demonstrates how a runtime monitoring service (in this example NuRV[1]) can be connected with the Incubator digital twin to verify runtime behavior of the Incubator.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#simulated-scenario","title":"Simulated scenario","text":"This example simulates a scenario where the lid of the Incubator is removed and later put back on. The Incubator is equipped with anomaly detection capabilities, which can detect anomalous behavior (i.e. the removal of the lid). When an anomaly is detected, the Incubator triggers an energy saving mode where the heater is turned off.
From a monitoring perspective, we wish to verify that the energy saving mode is turned on within 3 simulation steps of an anomaly being detected. To verify this behavior, we construct an LTL property capturing this requirement (the monitored specification is provided in the safe-operation.smv file). Whenever a True or False verdict is produced by the monitor, it is reset, allowing repeated satisfactions or violations of the property to be detected.
The simulated scenario progresses as follows:
- Initialization: The services are initialized and the Kalman filter in the Incubator is given 2 minutes to stabilize. Sometimes, the anomaly detection algorithm will detect an anomaly at startup even though the lid is still on; this false positive disappears after approximately 15 seconds.
- After 2 minutes: The lid is lifted and an anomaly is detected. The energy saver is turned on shortly after.
- After another 30 seconds: The energy saver is manually disabled producing a False verdict.
- After another 30 seconds: The lid is put back on and the anomaly detection is given time to detect that the lid is back on. The simulation then ends.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#example-structure","title":"Example structure","text":"A diagram depicting the logical software structure of the example can be seen below.
The execute.py script is responsible for orchestrating and starting all the relevant services in this example. This includes the Incubator DT, CORBA naming service (omniNames) and the NuRV monitor server as well as implementing the Monitor connector component that connects the DT output to the NuRV monitor server.
The NuRV monitor server utilizes a CORBA naming service where it registers under a specific name. A user can then query the naming service for the specific name, to obtain a reference to the monitor server. For more information on how the NuRV monitor server works, please refer to [1].
After establishing a connection with the NuRV monitor server, the Incubator DT is started and a RabbitMQ client is created that subscribes to changes in the anomaly and energy_saving states of the DT. Each time an update of either state is received, the full state (the newly updated value together with the last known value of the other state) is pushed to the NuRV monitor server, after which the returned verdict is printed to the console.
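Stripped of the CORBA plumbing, the connector logic amounts to the sketch below; NuRVMonitorStub is a placeholder for the real NuRV client interface, which is not reproduced here.
class NuRVMonitorStub:\n    def heartbeat(self, state):\n        # the real implementation forwards the state to the NuRV server\n        # over CORBA and returns its verdict; here we always answer 'Unknown'\n        return 'Unknown'\n\nclass MonitorConnector:\n    def __init__(self, monitor):\n        self.monitor = monitor\n        self.state = {'anomaly': False, 'energy_saving': False}\n\n    def on_update(self, name, value):\n        self.state[name] = value  # merge the partial update with the last known state\n        verdict = self.monitor.heartbeat(dict(self.state))\n        print('State: {}, verdict: {}'.format(self.state, verdict))\n\nconnector = MonitorConnector(NuRVMonitorStub())\nconnector.on_update('anomaly', True)\nconnector.on_update('energy_saving', True)\n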
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#digital-twin-configuration","title":"Digital Twin configuration","text":"Before running the example, please configure the simulation.conf file with your RabbitMQ credentials.
The example uses the following assets:
Asset Type Names of Assets Visibility Reuse in other Examples Service common/services/NuRV_orbit Common Yes DT common/digital_twins/incubator Common Yes Specification safe-operation.smv Private No Script execute.py Private No The safe-operation.smv file contains the default monitored specification as described in the Simulated scenario section. These can be configured as desired.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#lifecycle-phases","title":"Lifecycle phases","text":"The lifecycle phases for this example include:
Lifecycle phase Completed tasks create Downloads the necessary tools and creates a virtual python environment with the necessary dependencies execute Runs a python script that starts up the necessary services as well as the Incubator simulation. Various status messages are printed to the console, including the monitored system states and monitor verdict. clean Removes created data directory and incubator log files. If required, change the execute permissions of lifecycle scripts you need to execute. This can be done using the following command
chmod +x lifecycle/{script}\n
where {script} is the name of the script, e.g. create, execute etc.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#running-the-example","title":"Running the example","text":"To run the example, first run the following command in a terminal:
cd /workspace/examples/digital_twins/incubator-monitor-server/\n
Then execute the create script (this can take a few minutes depending on your network connection) followed by the execute script, using the following command:
lifecycle/{script}\n
The execute script will then start outputting system states and the monitor verdict approximately every 3 seconds. The output is printed as follows: \"State: {anomaly state} & {energy_saving state}, verdict: {Verdict}\", where \"anomaly\" indicates that an anomaly is detected and \"!anomaly\" indicates that an anomaly is not currently detected. The same format is used for the energy_saving state.
The monitor verdict can be True, False or Unknown, where the latter indicates that the monitor does not yet have sufficient information to determine the satisfaction of the property.
An example output trace is provided below:
....\nRunning scenario with initial state: lid closed and energy saver on\nSetting energy saver mode: enable\nSetting G_box to: 0.5763498\nState: !anomaly & !energy_saving, verdict: True\nState: !anomaly & !energy_saving, verdict: True\n....\nState: anomaly & !energy_saving, verdict: Unknown\nState: anomaly & energy_saving, verdict: True\nState: anomaly & energy_saving, verdict: True\n
There are currently some startup issues when connecting to the NuRV server, and it may take a few tries before the connection is established. This is, however, handled automatically.
"},{"location":"user/examples/incubator-NuRV-monitor-service/index.html#references","title":"References","text":" - Information on the NuRV monitor can be found on FBK website.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html","title":"Incubator Co-Simulation Digital Twin validation with NuRV Monitor","text":""},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#overview","title":"Overview","text":"This example demonstrates how to validate some digital twin components using FMU monitors (in this example, the monitors are generated with NuRV[1]).
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#simulated-scenario","title":"Simulated scenario","text":"This example validates some components of the Incubator digital twin, by performing a simulation in which the commponents are wrapped inside FMUs, and are then inspected at runtime by some a FMU monitor generated by NuRV. Please note that the link to Incubator digital twin is only provided to know the details of the incubator physical twin. The digital twin (DT) presented here is a co-simulation DT of the Incubator.
The input data for the simulation is generated by a purpose-built FMU component named source, which supplies testing data to the anomaly detector, simulating an anomaly occurring at time t=60s. An additional component, watcher, is employed to verify whether the energy saver activates in response to an anomaly reported by the anomaly detector.
The output of the watcher is then passed to the monitor, which ensures that when an anomaly is detected, the energy saver activates within a maximum of three simulation cycles.
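The check performed by the watcher can be summarised by the following sketch; the real check is implemented inside Watcher.fmu, so this Python function only illustrates the underlying logic.
def energy_saver_reacts_in_time(anomaly_trace, saver_trace, max_cycles=3):\n    # return False if energy saving is not active within max_cycles steps\n    # of an anomaly being reported\n    for i, anomaly in enumerate(anomaly_trace):\n        if anomaly and not any(saver_trace[i:i + max_cycles + 1]):\n            return False\n    return True\n\n# anomaly raised at step 4, energy saver reacts two steps later: property holds\nprint(energy_saver_reacts_in_time([0, 0, 0, 0, 1, 1, 1, 1],\n                                  [0, 0, 0, 0, 0, 0, 1, 1]))  # True\n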
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#example-structure","title":"Example structure","text":"A diagram depicting the logical software structure of the example can be seen below.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#digital-twin-configuration","title":"Digital Twin configuration","text":"The example uses the following assets:
Asset Type Names of Assets Visibility Reuse in other Examples Models anomaly_detection.fmu Private No energy_saver.fmu Private No Source.fmu Private No Watcher.fmu Private No Specification safe-operation.smv Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The safe-operation.smv file contains the default monitored specification as described in the Simulated scenario section.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#lifecycle-phases","title":"Lifecycle phases","text":"The lifecycle phases for this example include:
Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool, generates and compiles the monitor FMU Execute Produces and stores output in data/incubator-NuRV-monitor-validation/output directory Clean Clears run logs and outputs If required, change the execute permissions of lifecycle scripts you need to execute. This can be done using the following command.
chmod +x lifecycle/{script}\n
where {script} is the name of the script, e.g. create, execute etc.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/incubator-NuRV-monitor-validation\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#create","title":"Create","text":" - Installs Open Java Development Kit 17 in the workspace.
- Generates and compiles the monitor FMU from the NuRV specification
lifecycle/create\n
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#execute","title":"Execute","text":"Run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the FMU models.
lifecycle/execute\n
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/incubator-NuRV-monitor-validation/output/plots
.
lifecycle/analyze\n
A sample plot is given here.
In the plot, four color-coded indicators provide a visual representation of distinct values. The blue line depicts the simulated temperature, while the orange one represents the temperature estimate. A red dashed line indicates the target temperature set by the energy saver component. The green line shows the monitor's output verdict. As observed, when there is a disparity between the estimated and actual temperatures, the energy saver adjusts the target temperature downward, indicating that the component is working properly. Thus, the output of the monitor is fixed at unknown (0), signifying that the monitoring property is not violated.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/incubator-NuRV-monitor-validation/output directory where the logs are also included.
Figures of the output results can be found in the /workspace/examples/data/incubator-NuRV-monitor-validation/output directory.
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/incubator-NuRV-monitor-validation/index.html#references","title":"References","text":" - More information about NuRV is available here.
"},{"location":"user/examples/mass-spring-damper/index.html","title":"Mass Spring Damper","text":""},{"location":"user/examples/mass-spring-damper/index.html#overview","title":"Overview","text":"The mass spring damper digital twin (DT) comprises two mass spring dampers and demonstrates how a co-simulation based DT can be used within DTaaS.
"},{"location":"user/examples/mass-spring-damper/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/mass-spring-damper/index.html#example-structure","title":"Example Structure","text":"There are two simulators included in the study, each representing a mass spring damper system. The first simulator calculates the mass displacement and speed of for a given force acting on mass . The second simulator calculates force given a displacement and speed of mass . By coupling these simulators, the evolution of the position of the two masses is computed.
"},{"location":"user/examples/mass-spring-damper/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models MassSpringDamper1.fmu Private Yes MassSpringDamper2.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The co-sim.json
and time.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
"},{"location":"user/examples/mass-spring-damper/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/mass-spring-damper/output directory Clean Clears run logs and outputs"},{"location":"user/examples/mass-spring-damper/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/mass-spring-damper\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/mass-spring-damper/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 in the workspace.
lifecycle/create\n
"},{"location":"user/examples/mass-spring-damper/index.html#execute","title":"Execute","text":"Run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the two FMU models.
lifecycle/execute\n
"},{"location":"user/examples/mass-spring-damper/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/mass-spring-damper/output directory.
You can also view run logs in the /workspace/examples/digital_twins/mass-spring-damper.
"},{"location":"user/examples/mass-spring-damper/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/mass-spring-damper/index.html#references","title":"References","text":"More information about co-simulation techniques and mass spring damper case study are available in:
Gomes, Cl\u00e1udio, et al. \"Co-simulation: State of the art.\"\narXiv preprint arXiv:1702.00686 (2017).\n
The source code for the models used in this DT are available in mass spring damper github repository.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html","title":"Mass Spring Damper with NuRV Runtime Monitor","text":""},{"location":"user/examples/mass-spring-damper-monitor/index.html#overview","title":"Overview","text":"This digital twin is derives from te Mass String Damper digital twin.
The mass spring damper digital twin (DT) comprises two mass spring dampers and demonstrates how a co-simulation based DT can be used within DTaaS. This version of the example is expanded with a monitor generated by NuRV. More information about NuRV is available here.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/mass-spring-damper-monitor/index.html#example-structure","title":"Example Structure","text":"There are two simulators included in the study, each representing a mass spring damper system. The first simulator calculates the mass displacement and speed of for a given force acting on mass . The second simulator calculates force given a displacement and speed of mass . By coupling these simulators, the evolution of the position of the two masses is computed.
Additionally, a monitor is inserted in the simulation to check at runtime whether the displacement of the two masses stays below a maximum threshold.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models MassSpringDamper1.fmu Private Yes MassSpringDamper2.fmu Private Yes m2.fmu Private No RtI.fmu Private Yes Specification m2.smv Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The co-sim.json
and time.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro toolGenerates and compiles the monitor FMU Execute Produces and stores output in data/mass-spring-damper-monitor/output directory Clean Clears run logs and outputs"},{"location":"user/examples/mass-spring-damper-monitor/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/mass-spring-damper-monitor\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#create","title":"Create","text":" - Installs Open Java Development Kit 17 in the workspace.
- Generates and compiles the monitor FMU from the NuRV specification
lifecycle/create\n
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#execute","title":"Execute","text":"Run the the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes co-simulation using the two FMU models.
lifecycle/execute\n
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/mass-spring-damper-monitor/output/plots
.
lifecycle/analyze\n
A sample plot is given here.
In the plot, three color-coded indicators are used to represent different values. The blue line shows the distance between the two masses, while the green indicates the monitor's verdict. A red dashed line serves as a reference point, marking the distance checked by the monitor. Since the distance of the masses is always below the threshold, the output of the monitor is fixed to unknown
(0
).
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/mass-spring-damper-monitor/output directory.
You can also view run logs in the /workspace/examples/digital_twins/mass-spring-damper-monitor.
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#terminate-phase","title":"Terminate phase","text":"Terminate to clean up the debug files and co-simulation output files.
lifecycle/terminate\n
"},{"location":"user/examples/mass-spring-damper-monitor/index.html#references","title":"References","text":"More information about co-simulation techniques and mass spring damper case study are available in:
Gomes, Cl\u00e1udio, et al. \"Co-simulation: State of the art.\"\narXiv preprint arXiv:1702.00686 (2017).\n
The source code for the models used in this DT are available in mass spring damper github repository.
"},{"location":"user/examples/o5g/index.html","title":"Firefighter Mission in a Burning Building","text":"In an firefighter mission it is important to monitor the oxygen levels of each firefighters Self Contained Breating Aparatus (SCBA) in context of their mission.
"},{"location":"user/examples/o5g/index.html#physical-twin-overview","title":"Physical Twin Overview","text":"Image: Schematic overview of a firefighter mission. Note the mission commander on the lower left documenting the air supply pressure levels provided by radio communication from the firefighters inside and around the burning building. This image was created with the assistance of DALL\u00b7E.
We assume the following scenario:
- a set of firefighters work to extinguish a burning building
- they each use an SCBA with pressurised oxygen to breath
- a mission commander on the outside coordinates the efforts and surveills the oxygen levels
"},{"location":"user/examples/o5g/index.html#digital-twin-overview","title":"Digital Twin Overview","text":"In this example a monitor is implemented, that calculates how much time the firefighers have left, until they need to leave the building. To that end, the inputs used are:
- 3D-model of the building in which the mission takes place,
- pressure data of a firefighters SCBA and
- firefighters location inside of the building
are used to estimate:
- the shortest way out,
- how much time this will need and
- how much time is left until all oxygen from the SCBA is used up.
The remaining mission time is monitored and the firefighter receive a warning if it drops under a certain threshold.
This example is an implementation of the the paper Digital Twin for Rescue Missions--a Case Study by Leucker et al.
"},{"location":"user/examples/o5g/index.html#quick-check","title":"Quick Check","text":"Before runnnig this example please make sure the following files are at the correct locations:
/workspace/examples/\n data/o5g/input/\n runTessla.sh\n sensorSimulation.py\n telegraf.conf\n\n models/\n lab.ifc\n makefmu.mos\n PathOxygenEstimate.mo\n\n tools/\n graphToPath.py\n ifc_to_graph\n pathToTime.py\n tessla-telegraf-connector/\n tessla-telegraf-connector/\n tessla.jar\n specification.tessla (run-time specification)\n\n digital_twins/o5g/\n main.py\n config\n lifecycle/ (scripts)\n
"},{"location":"user/examples/o5g/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"All configuration for this example is contained in digital_twins/o5g/config
.
To use the MQTT-Server, account information needs to be provided. The topics are set to their default values, which allow the DT to access the mock physical twins sensor metrics and to send back alerts.
export O5G_MQTT_SERVER=\nexport O5G_MQTT_PORT=\nexport O5G_MQTT_USER=\nexport O5G_MQTT_PASS=\n\nexport O5G_MQTT_TOPIC_SENSOR='vgiot/ue/metric'\nexport O5G_MQTT_TOPIC_AIR_PREDICTION='vgiot/dt/prediction'\nexport O5g_MQTT_TOPIC_ALERT='vgiot/dt/alerts'\n
This example uses InfuxDB as a data storage, which will need to be configured to use your Access data. The following configuration steps are needed:
- Log into the InfluxDB Web UI
- Obtain org name (is below your username in the sidebar)
- Create a data bucket if you don't have one already in
Load Data -> Buckets
- Create an API access token in
Load Data -> API Tokens
, Copy and save this token somewhere immediately, you can not access it later!
export O5G_INFLUX_SERVER=\nexport O5G_INFLUX_PORT=\nexport O5G_INFLUX_TOKEN=\nexport O5G_INFLUX_ORG=\nexport O5G_INFLUX_BUCKET=\n
"},{"location":"user/examples/o5g/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Install Installs Open Modelica, Rust, Telegraf and the required pip dependencies Create Create FMU from Open Modelica file Execute Execute the example in the background tmux terminal session Terminate Terminate the tmux terminal session running in the background Clean Delete the temporary files"},{"location":"user/examples/o5g/index.html#run-the-example","title":"Run the example","text":""},{"location":"user/examples/o5g/index.html#install","title":"Install","text":"Run the install script by executing
lifecycle/install\n
This will install all the required dependencies from apt and pip, as well as Open Modelica, Rust, Telegraf and the required pip dependencies from their respective repos.
"},{"location":"user/examples/o5g/index.html#create","title":"Create","text":"Run the create script by executing
lifecycle/create\n
This will compile the modelica model to an Functional Mockup Unit (FMU) for the correct platform.
"},{"location":"user/examples/o5g/index.html#exceute","title":"Exceute","text":"To run the Digital Twin execute
lifecycle/execute\n
This will start all the required components in a single tmux session called o5g
in the background. To view the running Digital Twin attatch to this tmux session by executing
tmux a -t o5g\n
To detatch press Ctrl-b
followed by d
.
The tmux session contains 4 components of the digital twin:
Panel location Purpose Top Left Sensor simulator generating random location and O2-level data Top Right Main Digital Twin receives the sensor data and calculates an estimate of how many minutes of air remain Bottom Left Telegraf to convert between different message formats, also displays all messages between components Bottom Right TeSSLa monitor raises an alarm, if the remaining time is to low."},{"location":"user/examples/o5g/index.html#examine-the-results","title":"Examine the Results","text":"For additional mission awareness, we recommend utilising the Influx data visualisation. We provide a dashboard configuration in the file influx-dashoard.json. Log in to your Influx Server to import (usually port 8086). A screenshot of the dashboard is given here.
The data gets stored in o5g->prediction->air-remaining->37ae3e4fb3ea->true->vgiot/dt/prediction
variable of the InfluxDB. In addition to importing dashboard configuration given above, it is possible to create your custom dashboards using the stored data.
"},{"location":"user/examples/o5g/index.html#terminate","title":"Terminate","text":"To stop the all components and close the tmux session execute
lifecycle/terminate\n
"},{"location":"user/examples/o5g/index.html#clean","title":"Clean","text":"To remove temoporary files created during execution
lifecycle/clean\n
"},{"location":"user/examples/opc-ua-waterplant/index.html","title":"Waste Water Plant with OPC-UA","text":""},{"location":"user/examples/opc-ua-waterplant/index.html#introduction","title":"Introduction","text":"Waste water treatment (WWT) plants must comply with substance and species concentration limits established by regulation in order to ensure the good quality of the water. This is usually done taking periodic samples that are analyzed in the laboratory. This means that plant operators do not have continuous information for making decisions and, therefore, operation setpoints are set to higher values than needed to guarantee water quality. Some of the processes involved in WWT plants consume a lot of power, thus adjusting setpoints could significantly reduce energy consumption.
"},{"location":"user/examples/opc-ua-waterplant/index.html#physical-twin-overview","title":"Physical Twin Overview","text":"This example demonstrates the communication between a physical ultraviolet (UV) disinfection process (the tertiary treatment of a WWT plant) and its digital twin, which is based on Computational Fluid Dynamics (CFD) and compartment models. The aim of this digital twin is to develop \"virtual sensors\" that provide continuous information that facilitates the decision making process for the plant operator.
The physical twin of the waste water plant is composed of an ultraviolet channel controlled by a PLC that controls the power of the UV lamps needed to kill all the pathogens of the flow. The channel has 3 groups of UV lamps, therefore the real channel (and is mathematical model) is subdivided into 7 zones: 4 correspond to zones without UV lamps (2 for the entrance and exit of the channel + 2 zones between UV lamps) and the 3 reamaining for the UV lamps.
The dose to be applied (related with the power) changes according to the residence time (computed from the measure of the volume flow) and the UV intensity (measured by the intensity sensor).
The information of the volumetric flow and power (in the three parts of the channel) is transmitted to the PLC of the plant. Furthermore, the PLC is working as OPC UA Server to send and receive data to and from an OPC UA Client. Additionally, some sizing parameters and initial values are read from a spreadsheet filled in by the plant operator. In this case, the spreadsheet is an Open Office file (.ods) due to the software installed in the SCADA PC. Some of the variables like initial concentration of disinfectant and pathogens are included, among others. Some values defined by the plant operator correspond to input signals that are not currently being measured, but are expected to be measured in the future.
"},{"location":"user/examples/opc-ua-waterplant/index.html#digital-twin-overview","title":"Digital Twin Overview","text":"The digital twin is a reduced model (developed in C)\u00a0that solves physical conservation laws (mass, energy and momentum), but simplifies details (geometry, mainly) to ensure real-time calculations and accurate results. The results are compared to the ones obtained by the CFD. C solver developed is used by the OpenModelica\u00a0model. OpenModelica converts\u00a0it into the FMI standard, to be integrated in the OPC UA Client (client-opcua.py).
"},{"location":"user/examples/opc-ua-waterplant/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"Asset Type Name of Asset Visibility Reuse in Other Examples Model Test_DTCONEDAR.mo private No Data configuration_freopcua.ods private No model_description.csv (generated by client-asyncua.py) private No Mock OPC UA Server: opcua-mock-server.py private No Tool OPC UA Client: client-opcua.py private No FMU builder: create-fmu.mos private No In this example, a dummy model representation of the plant is used, instead of the real model. The simplified model (with not the real equations) is developed in Open Modelica (Test_DTCONEDAR.mo). The FMU is generated from the Open Modelica interface to obtain the needed binaries to run the FMU. It is possible to run an FMU previously generated, however, to ensure that we are using the right binaries it is recommended to install Open Modelica Compiler and run script.mos
to build the FMU from the Modelica file Test_DTCONEDAR.mo
.
The FMU model description file (modelDescription.xml
file inside Test_DTCONEDAR.fmu
) has the information of the value references configuration_freeopcua.ods has the information of the OPC-UA node IDs And both have in common the variable name
The client python script (client-opcua.py
) does the following actions:
"},{"location":"user/examples/opc-ua-waterplant/index.html#input-data-variables","title":"Input Data Variables","text":"The configuration_freeopcua.ods date file is used for customizing the initial input data values used by the server.
"},{"location":"user/examples/opc-ua-waterplant/index.html#dt-config","title":"DT Config","text":"The config.json specifies the configuration parameters for the OPC UA client.
Optional parameters can be modified:
- stop_time
- step_size
- record = True, if we want to save the results of the simulation
- record_interval. Sometimes the simulation step_size is small and a the size of the results file can be too big. For instance, if the simulation step_size is 0.01 seconds, we can increase the record_interval so as to reduce the result file size.
- record_variables: we can specify the list of variables that we want to record.
- enable_send = True, if we want to send results to the OPC UA Server.
"},{"location":"user/examples/opc-ua-waterplant/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Install Installs Open Modelica, Python 3.10 and the required pip dependencies Create Create FMU from Open Modelica file Execute Run OPC UA mock server and normal OPC-UA client Clean Delete the temporary files"},{"location":"user/examples/opc-ua-waterplant/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital twins/opc-ua-waterplant\n
If required, change the execute permission of lifecycle scripts.
chmod +x lifecycle/*\n
Now, run the following scripts:
"},{"location":"user/examples/opc-ua-waterplant/index.html#install","title":"Install","text":"Installs Open Modelica, Python 3.10 and the required pip dependencies
lifecycle/install\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#create","title":"Create","text":"Create Test_DTCONEDAR.fmu
co-simulation model from `Test_DTCONEDAR.mo open modelica file.
lifecycle/create\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#execute","title":"Execute","text":"Start the mock OPC UA server in the background. Run the OPC UA client.
lifecycle/execute\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#clean","title":"Clean","text":"Remove the temporary files created by Open Modelica and output files generated by OPC UA client.
lifecycle/clean\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#references","title":"References","text":"More explanation about this example is available at:
Royo, L., Labar\u00edas, A., Arnau, R., G\u00f3mez, A., Ezquerra, A., Cilla, I., &\nD\u00edez-Anto\u00f1azas, L. (2023). Improving energy efficiency in the tertiary\ntreatment of Alguazas WWTP (Spain) by means of digital twin development\nbased on Physics modelling . Cambridge Open Engage.\n[doi:10.33774/coe-2023-1vjcw](https://doi.org/10.33774/coe-2023-1vjcw)\n
"},{"location":"user/examples/opc-ua-waterplant/index.html#acknowledgements","title":"Acknowledgements","text":"The work on this example was done in a project subsidised thanks to the support grants for Agrupaci\u00f3n Empresarial Innovadora (AEI) of the Ministry of Industry, Trade and Tourism (MINCOTUR) with the aim of improving the competitiveness of small and medium-sized enterprises within the Recovery, Transformation and Resilience Plan (PRTR) financed by the Next Generation EU (NGEU) funds, with Grant Number: AEI-010500-2022b-196.
"},{"location":"user/examples/three-tank/index.html","title":"Three-Tank System Digital Twin","text":""},{"location":"user/examples/three-tank/index.html#overview","title":"Overview","text":"The three-tank system is a simple case study allows us to represent a system that is composed of three individual components that are coupled in a cascade as follows: The first tank is connected to the input of the second tank, and the output of the second tank is connected to the input of the third tank.
This example contains only the simulated components for demonstration purposes; therefore, there is no configuration for the connection with the physical system.
The three-tank system case study is managed using the DTManager
, which is packed as a jar library in the tools, and run from a java main file. The DTManager
uses Maestro as a slave for co-simulation, so it generates the output of the co-simulation.
The main file can be changed according to the application scope, i.e., the /workspace/examples/tools/three-tank/TankMain.java
can be manipulated to get a different result.
The /workspace/examples/models/three-tank/
folder contains the Linear.fmu
file, which is a non-realistic model for a tank with input and output and the TankSystem.aasx
file for the schema representation with Asset Administration Shell. The three instances use the same .fmu
file and the same schema due to being of the same object class. The DTManager
is in charge of reading the values from the co-simulation output.
"},{"location":"user/examples/three-tank/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/three-tank/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models, two tools, one data, and one script. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Model Linear.fmu Private No TankSystem.aasx Private No Tool DTManager-0.0.1-Maestro.jar (wraps Maestro) Common Yes maestro-2.3.0-jar-with-dependencies.jar (used by DTManager) Common Yes TankMain.java (main script) Private No Data outputs.csv Private No This DT has multiple configuration files. The coe.json and multimodel.json are used by Maestro tool. The tank1.conf, tank2.conf and tank3.conf are the config files for three different instances of one model (Linear.fmu).
"},{"location":"user/examples/three-tank/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"The lifecycles that are covered include:
Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute The DT Manager executes the three-tank digital twin and produces output in data/three-tank/output
directory Terminate Terminating the background processes and cleaning up the output"},{"location":"user/examples/three-tank/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital twins/three-tank\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/three-tank/index.html#create","title":"Create","text":"Installs Open Java Development Kit 11 and pip dependencies. Also creates DTManager
tool (DTManager-0.0.1-Maestro.jar) from source code.
lifecycle/create\n
"},{"location":"user/examples/three-tank/index.html#execute","title":"Execute","text":"Execute the three-tank digital twin using DTManager. DTManager in-turn runs the co-simulation using Maestro. Generates the co-simulation output.csv file at /workspace/examples/data/three-tank/output
.
lifecycle/execute\n
"},{"location":"user/examples/three-tank/index.html#terminate","title":"Terminate","text":"Stops the Maestro running in the background. Also stops any other jvm process started during execute phase.
lifecycle/terminate\n
"},{"location":"user/examples/three-tank/index.html#clean","title":"Clean","text":"Removes the output generated during execute phase.
lifecycle/terminate\n
"},{"location":"user/examples/three-tank/index.html#examining-the-results","title":"Examining the results","text":"Executing this Digital Twin will generate a co-simulation output, but the results can also be monitored from updating the/workspace/examples/tools/three-tank/TankMain.java
with a specific set of getAttributeValue
commands, such as shown in the code.
That main file enables the online execution of the Digital Twin and its internal components.
The output of the co-simulation is generated to the /workspace/examples/data/three-tank/output
folder.
In the default example, the co-simulation is run for 10 seconds in steps of 0.5 seconds. This can be modified for a longer period and different step size. The output stored in outputs.csv
contains the level, in/out flow, and leak values.
No data from the physical twin are generated/used.
"},{"location":"user/examples/three-tank/index.html#references","title":"References","text":"More information about the DT Manager is available at:
D. Lehner, S. Gil, P. H. Mikkelsen, P. G. Larsen and M. Wimmer,\n\"An Architectural Extension for Digital Twin Platforms to Leverage\nBehavioral Models,\" 2023 IEEE 19th International Conference on\nAutomation Science and Engineering (CASE), Auckland, New Zealand,\n2023, pp. 1-8, doi: 10.1109/CASE56687.2023.10260417.\n
"},{"location":"user/examples/water_tank_FI/index.html","title":"Water Tank Fault Injection","text":""},{"location":"user/examples/water_tank_FI/index.html#overview","title":"Overview","text":"This example shows a fault injection (FI) enabled digital twin (DT). A live DT is subjected to simulated faults received from the environment. The simulated faults is specified as part of DT configuration and can be changed for new instances of DTs.
In this co-simulation based DT, a watertank case-study is used; co-simulation consists of a tank and controller. The goal of which is to keep the level of water in the tank between Level-1
and Level-2
. The faults are injected into output of the water tank controller (Watertankcontroller-c.fmu) from 12 to 20 time units, such that the tank output is closed for a period of time, leading to the water level increasing in the tank beyond the desired level (Level-2
).
"},{"location":"user/examples/water_tank_FI/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/water_tank_FI/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_FI/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models watertankcontroller-c.fmu Private Yes singlewatertank-20sim.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The multimodelFI.json
and simulation-config.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
The faults are defined in wt_fault.xml.
"},{"location":"user/examples/water_tank_FI/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_FI/output directory Clean Clears run logs and outputs"},{"location":"user/examples/water_tank_FI/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/water_tank_FI\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/water_tank_FI/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 and pip dependencies. The pandas and matplotlib are the pip dependencies installated.
lifecycle/create\n
"},{"location":"user/examples/water_tank_FI/index.html#execute","title":"Execute","text":"Run the co-simulation. Generates the co-simulation output.csv file at /workspace/examples/data/water_tank_FI/output
.
lifecycle/execute\n
"},{"location":"user/examples/water_tank_FI/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_FI/output/plots/
.
lifecycle/analyze\n
"},{"location":"user/examples/water_tank_FI/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/water_tank_FI/output directory.
You can also view run logs in the /workspace/examples/digital_twins/water_tank_FI.
"},{"location":"user/examples/water_tank_FI/index.html#terminate-phase","title":"Terminate phase","text":"Clean up the temporary files and delete output plot
lifecycle/terminate\n
"},{"location":"user/examples/water_tank_FI/index.html#references","title":"References","text":"More details on this case-study can be found in the paper:
M. Frasheri, C. Thule, H. D. Macedo, K. Lausdahl, P. G. Larsen and\nL. Esterle, \"Fault Injecting Co-simulations for Safety,\"\n2021 5th International Conference on System Reliability and Safety (ICSRS),\nPalermo, Italy, 2021.\n
The fault-injection plugin is an extension to the Maestro co-orchestration engine that enables injecting inputs and outputs of FMUs in an FMI-based co-simulation with tampered values. More details on the plugin can be found in fault injection git repository. The source code for this example is also in the same github repository in a example directory.
"},{"location":"user/examples/water_tank_FI_monitor/index.html","title":"Water Tank Fault Injection with NuRV Runtime Monitor","text":""},{"location":"user/examples/water_tank_FI_monitor/index.html#overview","title":"Overview","text":"This example shows a fault injection (FI) enabled digital twin (DT). A live DT is subjected to simulated faults received from the environment. The simulated faults is specified as part of DT configuration and can be changed for new instances of DTs. This version of the example is expanded with a monitor generated by NuRV. More information about NuRV is available here.
In this co-simulation based DT, a watertank case-study is used; co-simulation consists of a tank and controller. The goal of which is to keep the level of water in the tank between Level-1
and Level-2
. The faults are injected into output of the water tank controller (Watertankcontroller-c.fmu) from 12 to 20 time units, such that the tank output is closed for a period of time, leading to the water level increasing in the tank beyond the desired level (Level-2
). Additionally, a monitor is inserted in the simulation to check at runtime whether the level of the water stays below a maximum threshold.
"},{"location":"user/examples/water_tank_FI_monitor/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/water_tank_FI_monitor/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_FI_monitor/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"This example uses two models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models watertankcontroller-c.fmu Private Yes singlewatertank-20sim.fmu Private Yes m1.fmu Private No RtI.fmu Private Yes Specification m1.smv Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes The multimodelFI.json
and simulation-config.json
are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.
The faults are defined in wt_fault.xml.
"},{"location":"user/examples/water_tank_FI_monitor/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro toolGenerates and compiles the monitor FMU Execute Produces and stores output in data/water_tank_FI_monitor/output directory Clean Clears run logs and outputs"},{"location":"user/examples/water_tank_FI_monitor/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/water_tank_FI_monitor\n
If required, change the execute permission of lifecycle scripts you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/water_tank_FI_monitor/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 and pip dependencies. The pandas and matplotlib are the pip dependencies installated. The monitor FMU from the NuRV specification is generated and compiled.
lifecycle/create\n
"},{"location":"user/examples/water_tank_FI_monitor/index.html#execute","title":"Execute","text":"Run the co-simulation. Generates the co-simulation output.csv file at /workspace/examples/data/water_tank_FI_monitor/output
.
lifecycle/execute\n
"},{"location":"user/examples/water_tank_FI_monitor/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_FI_monitor/output/plots/
.
lifecycle/analyze\n
A sample plot is given here.
In the plot, four color-coded indicators are used to represent different values. The blue line shows the water tank level, while orange represents the control output and green indicates the monitor's verdict. A red dashed line serves as a reference point, marking the level checked by the monitor. As the water level exceeds this threshold, the monitor's verdict changes from unknown
(0
) to false
(2
).
"},{"location":"user/examples/water_tank_FI_monitor/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the /workspace/examples/data/water_tank_FI_monitor/output directory.
You can also view run logs in the /workspace/examples/digital_twins/water_tank_FI_monitor.
"},{"location":"user/examples/water_tank_FI_monitor/index.html#terminate-phase","title":"Terminate phase","text":"Clean up the temporary files and delete output plot
lifecycle/terminate\n
"},{"location":"user/examples/water_tank_FI_monitor/index.html#references","title":"References","text":"More details on this case-study can be found in the paper:
M. Frasheri, C. Thule, H. D. Macedo, K. Lausdahl, P. G. Larsen and\nL. Esterle, \"Fault Injecting Co-simulations for Safety,\"\n2021 5th International Conference on System Reliability and Safety (ICSRS),\nPalermo, Italy, 2021.\n
The fault-injection plugin is an extension to the Maestro co-orchestration engine that enables injecting inputs and outputs of FMUs in an FMI-based co-simulation with tampered values. More details on the plugin can be found in fault injection git repository. The source code for this example is also in the same github repository in a example directory.
"},{"location":"user/examples/water_tank_swap/index.html","title":"Water Tank Model Swap","text":""},{"location":"user/examples/water_tank_swap/index.html#overview","title":"Overview","text":"This example shows multi-stage execution and dynamic reconfiguration of a digital twin (DT). Two features of DTs are demonstrated here:
- Fault injection into live DT
- Dynamic auto-reconfiguration of live DT
The co-simulation methodology is used to construct this DT.
"},{"location":"user/examples/water_tank_swap/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_swap/index.html#configuration-of-assets","title":"Configuration of assets","text":"This example uses four models and one tool. The specific assets used are:
Asset Type Names of Assets Visibility Reuse in Other Examples Models Watertankcontroller-c.fmu Private Yes Singlewatertank-20sim.fmu Private Yes Leak_detector.fmu Private No Leak_controller.fmu Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes This DT has many configuration files. The DT is executed in two stages. There exist separate DT configuration files for each stage. The following table shows the configuration files and their purpose.
Configuration file name Execution Stage Purpose mm1. json stage-1 DT configuration wt_fault.xml, FaultInject.mabl stage-1 faults injected into DT during stage-1 mm2.json stage-2 DT configuration simulation-config.json Both stages Configuration for specifying DT execution time and output logs"},{"location":"user/examples/water_tank_swap/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_swap/output directory Analyze Process the co-simulation output and produce plots Clean Clears run logs, outputs and plots"},{"location":"user/examples/water_tank_swap/index.html#run-the-example","title":"Run the example","text":"To run the example, change your present directory.
cd /workspace/examples/digital_twins/water_tank_swap\n
If required, change the permission of files you need to execute, for example:
chmod +x lifecycle/create\n
Now, run the following scripts:
"},{"location":"user/examples/water_tank_swap/index.html#create","title":"Create","text":"Installs Open Java Development Kit 17 and pip dependencies. The matplotlib pip package is also installated.
lifecycle/create\n
"},{"location":"user/examples/water_tank_swap/index.html#execute","title":"Execute","text":"This DT has two-stage execution. In the first-stage, a co-simulation is executed. The Watertankcontroller-c.fmu and Singlewatertank-20sim.fmu models are used to execute the DT. During this stage, faults are injected into one of the models (Watertankcontroller-c.fmu) and the system performance is checked.
In the second-stage, another co-simulation is run in which three FMUs are used. The FMUs used are: watertankcontroller, singlewatertank-20sim, and leak_detector. There is an in-built monitor in the Maestro tool. This monitor is enabled during the stage and a swap condition is set at the beginning of the second-stage. When the swap condition is satisfied, the Maestro swaps out Watertankcontroller-c.fmu model and swaps in Leakcontroller.fmu model. This swapping of FMU models demonstrates the dynamic reconfiguration of a DT.
The end of execution phase generates the co-simulation output.csv file at /workspace/examples/data/water_tank_swap/output
.
lifecycle/execute\n
"},{"location":"user/examples/water_tank_swap/index.html#analyze-phase","title":"Analyze phase","text":"Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_FI/output/plots/
.
lifecycle/analyze\n
"},{"location":"user/examples/water_tank_swap/index.html#examine-the-results","title":"Examine the results","text":"The results can be found in the workspace/examples/data/water_tank_swap/output directory.
You can also view run logs in the workspace/examples/digital_twins/water_tank_swap.
"},{"location":"user/examples/water_tank_swap/index.html#terminate-phase","title":"Terminate phase","text":"Clean up the temporary files and delete output plot
lifecycle/terminate\n
"},{"location":"user/examples/water_tank_swap/index.html#references","title":"References","text":"The complete source of this example is available on model swap github repository.
The runtime model (FMU) swap mechanism demonstrated by the experiment is detailed in the paper:
Ejersbo, Henrik, et al. \"fmiSwap: Run-time Swapping of Models for\nCo-simulation and Digital Twins.\" arXiv preprint arXiv:2304.07328 (2023).\n
The runtime reconfiguration of co-simulation by modifying the Functional Mockup Units (FMUs) used is further detailed in the paper:
Ejersbo, Henrik, et al. \"Dynamic Runtime Integration of\nNew Models in Digital Twins.\" 2023 IEEE/ACM 18th Symposium on\nSoftware Engineering for Adaptive and Self-Managing Systems\n(SEAMS). IEEE, 2023.\n
"},{"location":"user/servers/execution/runner/index.html","title":"Runner","text":"A utility service to manage safe execution of remote scripts / commands. User launches this from commandline and let the utility manage the commands to be executed.
The runner utility runs as a service and provides REST API interface to safely execute remote commands. Multiple runners can be active simultaneously on one computer. The commands are sent via the REST API and are executed on the computer with active runner.
"},{"location":"user/servers/execution/runner/index.html#install","title":"Install","text":""},{"location":"user/servers/execution/runner/index.html#npm-registry","title":"NPM Registry","text":"The package is available on npmjs.
Install the package with the following command:
sudo npm install -g @into-cps-association/runner\n
"},{"location":"user/servers/execution/runner/index.html#github-registry","title":"Github Registry","text":"The package is available in Github packages registry.
Set the registry and install the package with the following commands
sudo npm config set @into-cps-association:registry https://npm.pkg.github.com\nsudo npm install -g @into-cps-association/runner\n
The npm install command asks for a username and password. The username is your Github username and the password is your Github personal access token. In order for npm to download the package, your personal access token needs to have the read:packages scope.
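If you prefer not to enter the credentials interactively, a common npm alternative (not specific to this package) is to store the personal access token in your ~/.npmrc; <YOUR_GITHUB_PAT> below is a placeholder for your token. Afterwards, run the install command shown above.
echo \"@into-cps-association:registry=https://npm.pkg.github.com\" >> ~/.npmrc\necho \"//npm.pkg.github.com/:_authToken=<YOUR_GITHUB_PAT>\" >> ~/.npmrc\n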
"},{"location":"user/servers/execution/runner/index.html#configure","title":"Configure","text":"The utility requires config specified in YAML format. The template configuration file is:
port: 5000\nlocation: 'script' #directory location of scripts\ncommands: #list of permitted scripts\n - create\n - execute\n - terminate\n
It is suggested that the configuration file be named runner.yaml and placed in the directory in which the runner microservice is run. The location refers to the relative location of the scripts directory with respect to the location of the runner.yaml file. However, there is no limitation on either the configuration filename or the location. The path to runner.yaml can be either relative or absolute. However, the location path is always relative to the path of the runner.yaml file.
The commands must be executable. Please make sure that the commands have execute permission on Linux platforms.
"},{"location":"user/servers/execution/runner/index.html#create-commands","title":"Create Commands","text":"The runner requires commands / scripts to be run. These need to be placed in the location
specified in runner.yaml file. The location must be relative to the directory in which the runner microservice is being run.
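As a sketch, assuming the template configuration shown above (location: 'script' with permitted commands create, execute and terminate), each command is a small executable script placed in the script directory next to runner.yaml. The echo body below is only a placeholder:
# run from the directory in which the runner microservice will be started\nmkdir -p script\nprintf '#!/bin/bash\\necho \"create command executed\"\\n' > script/create\nchmod +x script/create\n# repeat for script/execute and script/terminate\n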
"},{"location":"user/servers/execution/runner/index.html#use","title":"Use","text":"Display help.
$runner -h\nUsage: runner [options]\n\nRemote code execution for humans\n\nOptions:\n -v --version package version\n -c --config <string> runner config file specified in yaml format (default: \"runner.yaml\")\n -h --help display help\n
The config option is not mandatory. If it is not used, the runner looks for runner.yaml in the directory from which it is run. Once launched, the utility runs at the port specified in the runner.yaml file.
runner #use runner.yaml of the present working directory\nrunner -c FILE-PATH #absolute or relative path to config file\nrunner --config FILE-PATH #absolute or relative path to config file\n
If launched on one computer, you can access the service at http://localhost:<port>. Access to the service over the network is available at http://<ip or hostname>:<port>/.
"},{"location":"user/servers/execution/runner/index.html#application-programming-interface-api","title":"Application Programming Interface (API)","text":"Three REST API methods are active. The route paths and the responses given for these two sources are:
REST API Route HTTP Method Return Value Comment localhost:port POST Returns the execution status of the command Executes the command provided. Each invocation appends to the array of commands executed so far. localhost:port GET Returns the execution status of the last command sent via POST request. localhost:port/history GET Returns the array of POST requests received so far."},{"location":"user/servers/execution/runner/index.html#post-request-to","title":"POST Request to /","text":"Executes a command. The command name given here must exist in the location directory.
Valid HTTP RequestHTTP Response - Valid CommandHTTP Response - Invalid Command POST / HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n \"name\": \"<command-name>\"\n}\n
Connection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"status\": \"success\"\n}\n
Connection: close\nContent-Length: 28\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"status\": \"invalid command\"\n}\n
"},{"location":"user/servers/execution/runner/index.html#get-request-to","title":"GET Request to /","text":"Shows the status of the command last executed.
Valid HTTP RequestHTTP Response - Valid CommandHTTP Response - Invalid Command GET / HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n \"name\": \"<command-name>\"\n}\n
Connection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"name\": \"<command-name>\",\n \"status\": \"valid\",\n \"logs\": {\n \"stdout\": \"<output log of command>\",\n \"stderr\": \"<error log of command>\"\n }\n}\n
Connection: close\nContent-Length: 70\nContent-Type: application/json; charset=utf-8\nDate: Tue, 09 Apr 2024 08:51:11 GMT\nEtag: W/\"86-ja15r8P5HJu72JcROfBTV4sAn2I\"\nX-Powered-By: Express\n\n{\n \"name\": \"<command-name>\",\n \"status\": \"invalid\",\n \"logs\": {\n \"stdout\": \"\",\n \"stderr\": \"\"\n }\n}\n
"},{"location":"user/servers/execution/runner/index.html#get-request-to-history","title":"GET Request to /history","text":"Returns the array of POST requests received so far. Both valid and invalid commands are recorded in the history.
[\n {\n \"name\": \"valid command\"\n },\n {\n \"name\": \"valid command\"\n },\n {\n \"name\": \"invalid command\"\n }\n]\n
"},{"location":"user/servers/lib/LIB-MS.html","title":"Library Microservice","text":"The lib microservice is responsible for handling and serving the contents of library assets of the DTaaS platform. It provides API endpoints for clients to query, and fetch these assets.
This document provides instructions for using the library microservice.
Please see assets for the suggested storage conventions for your library assets.
Once the assets are stored in the library, they become available in the user workspace.
"},{"location":"user/servers/lib/LIB-MS.html#application-programming-interface-api","title":"Application Programming Interface (API)","text":"The lib microservice application provides services at two end points:
GraphQL API Endpoint: http://foo.com/lib
HTTP Endpoint: http://foo.com/lib/files
"},{"location":"user/servers/lib/LIB-MS.html#http-protocol","title":"HTTP protocol","text":"Endpoint: localhost:PORT/lib/files
This option needs to be enabled with the -H http.json flag. The regular file upload and download options then become available.
Here are sample screenshots.
"},{"location":"user/servers/lib/LIB-MS.html#graphql-protocol","title":"GraphQL protocol","text":"Endpoint: localhost:PORT/lib
The http://foo.com/lib URL opens a GraphQL playground.
You can check the query schema and try sample queries here. The GraphQL queries need to be sent as HTTP POST requests, and the responses are received over HTTP as well.
The library microservice serves two API calls:
- Provide a list of contents for a directory
- Fetch a file from the available files
The API calls are accepted over the GraphQL and HTTP API endpoints. The format of the accepted queries is:
"},{"location":"user/servers/lib/LIB-MS.html#provide-list-of-contents-for-a-directory","title":"Provide list of contents for a directory","text":"To retrieve a list of files in a directory, use the following GraphQL query.
Replace path with the desired directory path. Send requests to: https://foo.com/lib
"},{"location":"user/servers/lib/LIB-MS.html#graphql-query-for-list-of-contents","title":"GraphQL query for list of contents","text":"query {\n listDirectory(path: \"user1\") {\n repository {\n tree {\n blobs {\n edges {\n node {\n name\n type\n }\n }\n }\n trees {\n edges {\n node {\n name\n type\n }\n }\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#graphql-response-for-list-of-contents","title":"GraphQL response for list of contents","text":"{\n \"data\": {\n \"listDirectory\": {\n \"repository\": {\n \"tree\": {\n \"blobs\": {\n \"edges\": []\n },\n \"trees\": {\n \"edges\": [\n {\n \"node\": {\n \"name\": \"common\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"data\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"digital twins\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"functions\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"models\",\n \"type\": \"tree\"\n }\n },\n {\n \"node\": {\n \"name\": \"tools\",\n \"type\": \"tree\"\n }\n }\n ]\n }\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-request-for-list-of-contents","title":"HTTP request for list of contents","text":"POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n \"query\":\"query {\\n listDirectory(path: \\\"user1\\\") {\\n repository {\\n tree {\\n blobs {\\n edges {\\n node {\\n name\\n type\\n }\\n }\\n }\\n trees {\\n edges {\\n node {\\n name\\n type\\n }\\n }\\n }\\n }\\n }\\n }\\n}\"\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-response-for-list-of-contents","title":"HTTP response for list of contents","text":"HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 306\nContent-Type: application/json; charset=utf-8\nDate: Tue, 26 Sep 2023 20:26:49 GMT\nX-Powered-By: Express\n{\"data\":{\"listDirectory\":{\"repository\":{\"tree\":{\"blobs\":{\"edges\":[]},\"trees\":{\"edges\":[{\"node\":{\"name\":\"data\",\"type\":\"tree\"}},{\"node\":{\"name\":\"digital twins\",\"type\":\"tree\"}},{\"node\":{\"name\":\"functions\",\"type\":\"tree\"}},{\"node\":{\"name\":\"models\",\"type\":\"tree\"}},{\"node\":{\"name\":\"tools\",\"type\":\"tree\"}}]}}}}}}\n
"},{"location":"user/servers/lib/LIB-MS.html#fetch-a-file-from-the-available-files","title":"Fetch a file from the available files","text":"This query receives directory path and send the file contents to user in response.
To check this query, create a file files/user2/data/welcome.txt with the content hello world.
"},{"location":"user/servers/lib/LIB-MS.html#graphql-query-for-fetch-a-file","title":"GraphQL query for fetch a file","text":"query {\n readFile(path: \"user2/data/sample.txt\") {\n repository {\n blobs {\n nodes {\n name\n rawBlob\n rawTextBlob\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#graphql-response-for-fetch-a-file","title":"GraphQL response for fetch a file","text":"{\n \"data\": {\n \"readFile\": {\n \"repository\": {\n \"blobs\": {\n \"nodes\": [\n {\n \"name\": \"sample.txt\",\n \"rawBlob\": \"hello world\",\n \"rawTextBlob\": \"hello world\"\n }\n ]\n }\n }\n }\n }\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-request-for-fetch-a-file","title":"HTTP request for fetch a file","text":"POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 217\n{\n \"query\":\"query {\\n readFile(path: \\\"user2/data/welcome.txt\\\") {\\n repository {\\n blobs {\\n nodes {\\n name\\n rawBlob\\n rawTextBlob\\n }\\n }\\n }\\n }\\n}\"\n}\n
"},{"location":"user/servers/lib/LIB-MS.html#http-response-for-fetch-a-file","title":"HTTP response for fetch a file","text":"HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Wed, 27 Sep 2023 09:17:18 GMT\nX-Powered-By: Express\n{\"data\":{\"readFile\":{\"repository\":{\"blobs\":{\"nodes\":[{\"name\":\"welcome.txt\",\"rawBlob\":\"hello world\",\"rawTextBlob\":\"hello world\"}]}}}}}\n
The path refers to the file path to look at. For example, user1 looks at the files of user1; user1/functions looks at the contents of the functions/ directory of user1.
"},{"location":"user/servers/lib/assets.html","title":"Reusable Assets","text":"The reusability of digital twin assets makes it easy for users to work with the digital twins. The reusability of assets is a fundamental feature of the platform.
"},{"location":"user/servers/lib/assets.html#kinds-of-reusable-assets","title":"Kinds of Reusable Assets","text":"The DTaaS software categorizes all the reusable library assets into six categories:
"},{"location":"user/servers/lib/assets.html#data","title":"Data","text":"The data sources and sinks available to a digital twins. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events, and commands which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.
"},{"location":"user/servers/lib/assets.html#models","title":"Models","text":"The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.
"},{"location":"user/servers/lib/assets.html#tools","title":"Tools","text":"The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platforms, i.e., an operating system, or virtual machines like Java virtual machine, or inside docker containers. The tools tend to be platform specific, making them less reusable than models. A tool can be packaged to run on a local or distributed virtual machine environments thus allowing selection of most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages are a pre-packaged combination of models and tools put together to create a ready to use Digital Twins.
"},{"location":"user/servers/lib/assets.html#functions","title":"Functions","text":"The functions responsible for pre- and post-processing of: data inputs, data outputs, control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.
"},{"location":"user/servers/lib/assets.html#digital-twins","title":"Digital Twins","text":"These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.
"},{"location":"user/servers/lib/assets.html#file-system-structure","title":"File System Structure","text":"Each user has their assets put into five different directories named above. In addition, there will also be common library assets that all users have access to. A simplified example of the structure is as follows:
workspace/\n data/\n data1/ (ex: sensor)\n filename (ex: sensor.csv)\n README.md\n data2/ (ex: turbine)\n README.md (remote source; no local file)\n ...\n digital_twins/\n digital_twin-1/ (ex: incubator)\n config (yaml and json)\n README.md (usage instructions)\n description.md (short summary of digital twin)\n lifecycle/ (directory containing lifecycle scripts)\n digital_twin-2/ (ex: mass spring damper)\n config (yaml and json)\n README.md (usage instructions)\n description.md (short summary of digital twin)\n lifecycle/ (directory containing lifecycle scripts)\n digital_twin-3/ (ex: model swap)\n config (yaml and json)\n README.md (usage instructions)\n description.md (short summary of digital twin)\n lifecycle/ (directory containing lifecycle scripts)\n ...\n functions/\n function1/ (ex: graphs)\n filename (ex: graphs.py)\n README.md\n function2/ (ex: statistics)\n filename (ex: statistics.py)\n README.md\n ...\n models/\n model1/ (ex: spring)\n filename (ex: spring.fmu)\n README.md\n model2/ (ex: building)\n filename (ex: building.skp)\n README.md\n model3/ (ex: rabbitmq)\n filename (ex: rabbitmq.fmu)\n README.md\n ...\n tools/\n tool1/ (ex: maestro)\n filename (ex: maestro.jar)\n README.md\n ...\n common/\n data/\n functions/\n models/\n tools/\n
Tip
The DTaaS is agnostic to the format of your assets. The only requirement is that they are files which can be uploaded on the Library page. Any directories can be compressed as one file and uploaded. You can decompress the file into a directory from a Terminal or xfce Desktop available on the Workbench page.
A recommended file system structure for storing assets is also available in DTaaS examples.
"},{"location":"user/servers/lib/assets.html#upload-assets","title":"Upload Assets","text":"Users can upload assets into their workspace using Library page of the website.
You can go into a directory and click on the upload button to upload a file or a directory into your workspace. This asset is then available in all the workbench tools you can use. You can also create new assets on the page by clicking on new drop down menu. This is a simple web interface which allows you to create text-based files. You need to upload other files using upload button.
The user workbench has the following services:
- Jupyter Notebook and Lab
- VS Code
- XFCE Desktop Environment available via VNC
- Terminal
Users can also bring their DT assets into user workspaces from outside using any of the above-mentioned services. The developers using git repositories can clone from and push to remote git servers. Users can also use widely used file transfer protocols such as FTP and SCP to bring the required DT assets into their workspaces.
"},{"location":"user/website/index.html","title":"DTaaS Website Screenshots","text":"This page contains a screenshot driven preview of the website serving the DTaaS software platform.
"},{"location":"user/website/index.html#visit-the-dtaas-application","title":"Visit the DTaaS Application","text":"Start off by simply visiting the website of the DTaaS instance for which you are a user.
"},{"location":"user/website/index.html#redirected-to-authorization-provider","title":"Redirected to Authorization Provider","text":"You will be redirected to the Gitlab Authorization for DTaaS.
Enter your email/username and password. If the email ID registered with DTaaS is the same as your Google login email ID, you can also opt to sign in using Google.
You will be redirected to the OAuth Application page.
"},{"location":"user/website/index.html#permit-dtaas-server-to-use-gitlab","title":"Permit DTaaS Server to Use Gitlab","text":"Click on Authorize to allow the OAuth application to access the information connected to your Gitlab account. This is a necessary step.
You are now logged into the DTaaS server. You will be redirected to the login page of the DTaaS website.
The DTaaS website uses an additional layer of security - the third-party authorization protocol known as OAuth. This protocol provides secure access to a DTaaS installation if users have a working, active account at the selected OAuth service provider. This installation also uses Gitlab as the OAuth provider.
You can see the Gitlab signin button. A click on this button takes you to the Gitlab instance providing authorization for DTaaS. You will not need to sign in to Gitlab again, unless you have explicitly logged out of your Gitlab account.
"},{"location":"user/website/index.html#permit-dtaas-website-to-use-gitlab","title":"Permit DTaaS Website to Use Gitlab","text":"The DTaaS website needs your permission to use your Gitlab account for authorization. Click on Authorize button.
After successful authorization, you will be redirected to the Library page of the DTaaS website.
There are two icons on the top-right of the webpage. The hyperlink on the question mark icon redirects to the help page, while the hyperlink on the github icon redirects to the github code repository.
"},{"location":"user/website/index.html#overview-of-menu-items","title":"Overview of menu items","text":"The menu is hidden by default. Only the icons of menu items are visible. You can click on the icon in the top-left corner of the page to see the menu.
There are three menu items:
Library: for management of reusable library assets. You can upload, download, create and modify new files on this page.
Digital Twins: for management of digital twins. You are presented with the Jupyter Lab page from which you can run the digital twins.
Workbench: Not all digital twins can be managed within Jupyter Lab. You have more tools at your disposal on this page.
"},{"location":"user/website/index.html#library-tabs-and-their-help-text","title":"Library tabs and their help text","text":"You can see five tabs each corresponding to one type of digital twin assets. Each tab provides help text to guide users on the type of an asset.
Functions The functions responsible for pre- and post-processing of: data inputs, data outputs, control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with the right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.
Data The data sources and sinks available to a digital twin. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events and commands, which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.
Models The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.
Tools The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platform, i.e., an operating system, a virtual machine like the Java virtual machine, or a docker container. The tools tend to be platform-specific, making them less reusable than models. A tool can be packaged to run on local or distributed virtual machine environments, thus allowing selection of the most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages is a pre-packaged combination of models and tools put together to create a ready-to-use Digital Twin.
Digital Twins These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.
There are two sub-tabs, namely private and common. The library assets in the private category are visible only to the logged-in user, while the library assets in the common category are available to all users.
Further explanation on the placement of reusable assets within each type and the underlying directory structure on the server is in the assets page.
Note
You can upload assets (files) using the upload button.
The file manager is based on Jupyter notebook and all the tasks you can perform in the Jupyter Notebook can be undertaken here.
"},{"location":"user/website/index.html#digital-twins-page","title":"Digital Twins page","text":"The digital twins page has three tabs and the central pane opens Jupyter lab. There are three tabs with helpful instructions on the suggested tasks you can undertake in the Create - Execute - Analyze life cycle phases of digital twin. You can see more explanation on the life cycle phases of digital twin.
Create Create digital twins from tools provided within user workspaces. Each digital twin will have one directory. It is suggested that users provide one bash shell script to run their digital twin. Users can create the required scripts and other files using the tools provided on the Workbench page.
Execute Digital twins are executed from within user workspaces. The given bash script gets executed from the digital twin directory. Terminal-based digital twins can be executed from VSCode and graphical digital twins can be executed from the VNC GUI. The results of execution can be placed in the data directory.
Analyze The analysis of digital twins requires running the digital twin script from the user workspace. The execution results placed within the data directory are processed by analysis scripts, and the results are placed back in the data directory. These scripts can be executed from VSCode, and graphical results can be viewed from the VNC GUI.
The reusable assets (files) seen in the file manager are available in the Jupyter Lab. In addition, there is a git plugin installed in the Jupyter Lab using which you can link your files with the external git repositories.
"},{"location":"user/website/index.html#workbench","title":"Workbench","text":"The workbench page provides links to four integrated tools.
- Desktop
- VS Code
- Jupyter Lab
- Jupyter Notebook
The hyperlinks open in new browser tab. The screenshots of pages opened in new browser are:
Terminal
The Terminal hyperlink does not exist on the Workbench page. If you want a terminal, please use the tools dropdown in the Jupyter Notebook.
"},{"location":"user/website/index.html#digital-twins-preview-page","title":"Digital Twins Preview Page","text":"There is a fifth link on the Workbench page. It is Digital Twins Preview Page. It is an experimental feature at the moment. Clicking on the link opens a new page that is similar to the Digital Twins page.
There are two tabs, namely Manage and Execute.
Manage Read the complete description of digital twins. If necessary, users can delete a digital twin, removing it from the workspace with all its associated data. Users can also reconfigure the digital twin.
Execute Execute the Digital Twins using Gitlab CI/CD workflows.
"},{"location":"user/website/index.html#finally-logout","title":"Finally logout","text":"You have to close the browser in order to completely exit the DTaaS software platform.
"}]}