diff --git a/DTaaS-development.pdf b/DTaaS-development.pdf
index 17efa7002..6e6e3c57c 100644
Binary files a/DTaaS-development.pdf and b/DTaaS-development.pdf differ
diff --git a/development/admin/guides/add_service.html b/development/admin/guides/add_service.html
index 2713a0eb8..b7f67d421 100644
--- a/development/admin/guides/add_service.html
+++ b/development/admin/guides/add_service.html
@@ -1477,7 +1477,8 @@

Add other services

#!/usr/bin/node
 /* Install the optional platform services for DTaaS */
 import { $ } from "execa";
 import chalk from "chalk";
@@ -1516,12 +1517,13 @@ 

Add other services

 log(chalk.green("Start new Mongodb server docker container"));
 await $$`docker run -d -p ${mongodbConfig.port}:27017 \
---name mongodb \
--v ${mongodbConfig.datapath}:/data/db \
--e MONGO_INITDB_ROOT_USERNAME=${mongodbConfig.username} \
--e MONGO_INITDB_ROOT_PASSWORD=${mongodbConfig.password} \
-mongo:7.0.3`;
-log(chalk.green("MongoDB server docker container started successfully"));
+  --name mongodb \
+  -v ${mongodbConfig.datapath}:/data/db \
+  -e MONGO_INITDB_ROOT_USERNAME=${mongodbConfig.username} \
+  -e MONGO_INITDB_ROOT_PASSWORD=${mongodbConfig.password} \
+  --restart always \
+  mongo:7.0.3`;
+log(chalk.green("MongoDB server docker container started successfully"));

3. Run the script:

Go to the directory /deploy/services/
diff --git a/development/admin/host.html b/development/admin/host.html
index 62bcc4592..9dc68c5b5 100644
--- a/development/admin/host.html
+++ b/development/admin/host.html
@@ -1631,7 +1631,9 @@

Traefik gateway server

Authentication

This step requires the htpasswd command-line utility. If it is not available on your system, please install it using

-sudo apt-get install -y apache2-utils
+sudo apt-get update
+sudo apt-get install -y apache2-utils
 

You can now proceed with update of the gateway authentication setup. The dummy username is foo and the password is bar.
diff --git a/development/admin/services.html b/development/admin/services.html
index d27bd87f6..f352474f8 100644
--- a/development/admin/services.html
+++ b/development/admin/services.html
@@ -1499,6 +1499,10 @@

Use

MQTT Broker services.foo.com:1883
MongoDB database services.foo.com:27017

The firewall and network access settings of corporate / cloud network need to be
diff --git a/development/admin/vagrant/basebox.png b/development/admin/vagrant/basebox.png
index 360b9f189..78d5f4f9b 100644
Binary files a/development/admin/vagrant/basebox.png and b/development/admin/vagrant/basebox.png differ
diff --git a/development/admin/vagrant/single-machine.png b/development/admin/vagrant/single-machine.png
index 6de845acd..469cfa63b 100644
Binary files a/development/admin/vagrant/single-machine.png and b/development/admin/vagrant/single-machine.png differ
diff --git a/development/admin/vagrant/two-machine-use-legend.png b/development/admin/vagrant/two-machine-use-legend.png
index e59534b16..06482ec0a 100644
Binary files a/development/admin/vagrant/two-machine-use-legend.png and b/development/admin/vagrant/two-machine-use-legend.png differ
diff --git a/development/admin/vagrant/two-machine.png b/development/admin/vagrant/two-machine.png
index 21f716153..8d3c51f34 100644
Binary files a/development/admin/vagrant/two-machine.png and b/development/admin/vagrant/two-machine.png differ
diff --git a/development/admin/vagrant/two-machines.html b/development/admin/vagrant/two-machines.html
index b29abd045..7748f88be 100644
--- a/development/admin/vagrant/two-machines.html
+++ b/development/admin/vagrant/two-machines.html
@@ -1580,7 +1580,7 @@

Launch DTaaS Platform Default Se
-InfluxDB and visualization service
+InfluxDB database
 services.foo.com
@@ -1588,20 +1588,20 @@

Launch DTaaS Platform Default Se
 services.foo.com:3000
-MQTT communication service
+MQTT Broker
 services.foo.com:1883
-RabbitMQ communication service
+RabbitMQ Broker
 services.foo.com:5672
-RabbitMQ management service
+RabbitMQ Broker management website
 services.foo.com:15672
+MongoDB database
+services.foo.com:27017
diff --git a/development/developer/system/DTaaS.drawio b/development/developer/system/DTaaS.drawio
index fbc1a6f5a..cda921ec6 100644
--- a/development/developer/system/DTaaS.drawio
+++ b/development/developer/system/DTaaS.drawio
(drawio XML element changes; the diagram markup was stripped from this view, leaving only changed-line markers)
diff --git a/development/search/search_index.json b/development/search/search_index.json
index 4f7087aa4..ac75dbf27 100644
--- a/development/search/search_index.json
+++ b/development/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"What is DTaaS?","text":"

The Digital Twin as a Service (DTaaS) software platform is useful to Build, Use and Share digital twins (DTs).

Build: The DTs are built on the software platform using the reusable DT components available on the platform.

Use: Use the DTs on the software platform.

Share: Share ready-to-use DTs with other users. It is also possible to share the services offered by one DT with other users.

There is an overview of the software available in the form of slides, video, and feature walkthrough.

"},{"location":"index.html#license","title":"License","text":"

This software is owned by The INTO-CPS Association and is available under the INTO-CPS License.

The DTaaS software platform uses Tr\u00e6fik, ML Workspace, Grafana, InfluxDB, MQTT and RabbitMQ open-source components. These software components have their own licenses.

"},{"location":"FAQ.html","title":"Frequently Asked Questions","text":""},{"location":"FAQ.html#abreviations","title":"Abreviations","text":"Term Full Form DT Digital Twin DTaaS Digital Twin as a Service PT Physical Twin"},{"location":"FAQ.html#general-questions","title":"General Questions","text":"What is DTaaS?

DTaaS is a software platform on which you can create and run digital twins. Please see the features page to get a sense of the things you can do in DTaaS.

Are there any Key Performance / Capability Indicators for DTaaS? Key Performance Indicator Value Processor Two AMD EPYC 7443 24-Core Processors Maximum Storage Capacity 4TB SSD, RAID 0 configuration Storage Type File System Maximum file size 10 GB Data transfer speed 100 Mbps Data Security Yes Data Privacy Yes Redundancy None Availability It is a matter of human resources. If you have human resources to maintain DTaaS round the clock, upwards 95% is easily possible. Do you provide licensed software like Matlab?

Licensed software is not available on the software platform. But users have private workspaces which are based on a Linux xfce Desktop environment. Users can install software in their workspaces. The licensed software installed by one user is not available to another user.

"},{"location":"FAQ.html#digital-twin-models","title":"Digital Twin Models","text":"Can DTaaS create new DT models?

DTaaS is not a model creation tool. You can put a model creation tool inside DTaaS and create new models. The DTaaS itself does not create digital twin models but it can help users create digital twin models. You can run Linux desktop / terminal tools inside the DTaaS. So you can create models inside DTaaS and run them using tools that can run in Linux. Windows-only tools cannot run in DTaaS.

How can DTaaS help to design geometric model? Does it support 3D modeling and simulation?

Well, DTaaS by itself does not produce any models. DTaaS only provides a platform and an ecosystem of services to facilitate digital twins to be run as services. Since each user has a Linux OS at their disposal, they can also run digital twins that have a graphical interface. In summary, DTaaS is neither a modeling nor a simulation tool. If you need these kinds of tools, you need to bring them onto the platform. For example, if you need Matlab for your work, you need to bring the licensed Matlab software.

Commercial DT platforms in the market provide modelling and simulation alongside integration and UI. DTaaS is not able to do any modelling or simulation on its own like other commercial platforms. Is this a correct understanding?

Yes, you are right

Can DTaaS support only the information models (or behavioral models) or some other kind of models?

The DTaaS as such is agnostic to the kind of models you use. DTaaS can run all kinds of models. This includes behavioral and data models. As long as you have models and the matching solvers that can run in Linux OS, you are good to go in DTaaS. In some cases, models and solvers (tools) are bundled together to form monolithic DTs. The DTaaS does not limit you from running such DTs as well. DTaaS does not provide dedicated solvers. But if you can install a solver in your workspace, then you don't need the platform to provide one.

Does it support XML-based representation and ontology representation?

Currently No. We are looking for users needing this capability. If you have concrete requirements and an example, we can discuss a way of realizing it in DTaaS.

"},{"location":"FAQ.html#communication-between-physical-twin-and-digital-twin","title":"Communication Between Physical Twin and Digital Twin","text":"How would you measure a physical entity like shape, size, weight, structure, chemical attributes etc. using DTaaS? Any specific technology used in this case?

The real measurements are done at the physical twin and are then communicated to the digital twin. Any digital twin platform like DTaaS can only facilitate the communication of these measurements from the physical twin. The DTaaS provides InfluxDB, RabbitMQ and Mosquitto services for this purpose. These three are probably the most widely used services for digital twin communication. Having said that, DTaaS allows you to utilize other communication technologies and services hosted elsewhere on the Internet.
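As an illustration (a sketch only; the broker address comes from the services table elsewhere in this documentation, and the topic name is hypothetical), the standard Mosquitto command-line clients can exchange a measurement between the physical and digital twin:

# Physical twin side: publish one temperature sample to the MQTT broker
mosquitto_pub -h services.foo.com -p 1883 -t "pt/incubator/temperature" -m "23.7"
# Digital twin side: subscribe to the same topic from a DTaaS user workspace
# (add -u/-P options if the broker is configured to require credentials)
mosquitto_sub -h services.foo.com -p 1883 -t "pt/incubator/temperature"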

How can real-time data be distinguished from static data, and what is the procedure to identify dynamic data? Is there any UI or specific tool used here?

DTaaS cannot understand the static or dynamic nature of data. It can facilitate storing names, units and any other text description of interesting quantities (weight of batter, voltage output etc). It can also store the data being sent by the physical twin. The distinction between static and dynamic data needs to be made by the user. Only the metadata can reveal more information about the nature of the data. A tool can probably help in very specific cases, but you need metadata. If there is a human being making this distinction, then the need for metadata goes down but does not completely go away. In some of the DT platforms supported by manufacturers, there is a tight integration between data and model. In this case, the tool itself takes care of the metadata. The DTaaS is a generic platform which can support execution of digital twins. If a tool can be executed on a Linux desktop / commandline, the tool can be supported within DTaaS. The tool (ex. Matlab) itself can take care of the metadata requirements.

How can DTaaS control the physical entity? Which technologies it uses for controlling the physical world?

At a very abstract level, there is a communication from physical entity to digital entity and back to physical entity. How this communication should happen is decided by the person designing the digital entity. The DTaaS can provide communication services that can help you do this communication with relative ease. You can use InfluxDB, RabbitMQ and Mosquitto services hosted on DTaaS for two-way communication between digital and physical entities.

"},{"location":"FAQ.html#data-management","title":"Data Management","text":"Does DTaaS support data collection from different sources like hardware, software and network? Is there any user interface or any tracking instruments used for data collection?

The DTaaS provides InfluxDB, RabbitMQ and MQTT services. Both the physical twin and digital twin can utilize these protocols for communication. The IoT (time-series) data can be collected using InfluxDB and MQTT broker services. There is a user interface for InfluxDB which can be used to analyze the data collected. Users can also manually upload their data files into DTaaS.

Which transmission protocol does DTaaS allow?

InfluxDB, RabbitMQ, MQTT and anything else that can be used from Cloud service providers.

Does DTaaS support multisource information and combined multi sensor input data? Can it provide analysis and decision-supporting inferences?

You can store information from multiple sources. The existing InfluxDB services hosted on DTaaS already have a dedicated Influx / Flux query language for doing sensor fusion, analysis and inference.
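For instance (a minimal sketch; the bucket name and measurement are hypothetical and need to match your own InfluxDB setup), a Flux query run through the influx CLI in a workspace can summarize collected sensor data:

# Average the last hour of a hypothetical "temperature" measurement stored in InfluxDB
influx query 'from(bucket: "pt-data")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> mean()'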

Which kinds of visualization technologies DTaaS can support (e.g. graphical, geometry, image, VR/AR representation)?

Graphical, geometric and images. If you need specific licensed software for the visualization, you will have to bring the license for it. DTaaS does not support AR/VR.

Can DTaaS collect data directly from sensors?

Yes

Is DTaaS able to transmit data to cloud in real time?

Yes

"},{"location":"FAQ.html#platform-native-services-on-dtaas-platform","title":"Platform Native Services on DTaaS Platform","text":"Is DTaaS able to detect the anomalies about-to-fail components and prescribe solutions?

This is the job of a digital twin. If you have a ready to use digital twin that does the job, DTaaS allows others to use your solution.

"},{"location":"FAQ.html#comparison-with-other-dt-platforms","title":"Comparison with other DT Platforms","text":"All the DT platforms seem to provide different features. Is there a comparison chart?

Here is a qualitative comparison of different DT integration platforms:

Legend: high performance (H), mid performance (M) and low performance (L)

DT Platforms License DT Development Process Connectivity Security Processing power, performance and Scalability Data Storage Visualization Modeling and Simulation Microsoft Azure DT Commercial Cloud H H H M H H H AWS IOT Greengrass Open source commercial H H H M H H H Eclipse Ditto Open source M H M H H L L Asset Administration Shell Open source H H L H M L M PTC Thingworx Commercial H H H H H M M GE Predix Commercial M H H M L M L AU's DTaaS Open source H H L L M M M

Adapted by Tanusree Roy from Tables 4 and 5 of the following paper.

Ref: Naseri, F., Gil, S., Barbu, C., Cetkin, E., Yarimca, G., Jensen, A. C., ... & Gomes, C. (2023). Digital twin of electric vehicle battery systems: Comprehensive review of the use cases, requirements, and platforms. Renewable and Sustainable Energy Reviews, 179, 113280.

All the comparisons between DT platforms seem so confusing. Why?

The fundamental confusion comes from the fact that different DT platforms (Azure DT, GE Predix) provide different kinds of DT capabilities. You can run all kinds of models natively in GE Predix. In fact, you can run models even next to (on) PTs using GE Predix. But you cannot natively do that in the Azure DT service. You have to do the legwork of integrating with other Azure services or third-party services to get the kind of capabilities that GE Predix natively provides in one interface. The takeaway is that we pick horses for courses.

"},{"location":"FAQ.html#create-assets","title":"Create Assets","text":"Can DTaaS be used to create new DT assets?

The core feature of DTaaS software is to help users create DTs from assets already available in the library. However, it is possible for users to take advantage of services available in their workspace to install asset authoring tools in their own workspace. These authoring tools can then be used to create and publish new assets. User workspaces are private and are not shared with other users. Thus any licensed software tools installed in their workspace are only available to them.

"},{"location":"FAQ.html#gdpr-concerns","title":"GDPR Concerns","text":"Does your platform adhere to GDPR compliance standards? If so, how?

The DTaaS software platform does not store any personal information of users. It only stores usernames to identify users, and these usernames do not contain enough information to deduce the true identity of users.

Which security measures are deployed? How is data encrypted (if exists)?

The default installation requires an HTTPS-terminating reverse proxy server between the user and the DTaaS software installation. The administrators of the DTaaS software can also install HTTPS certificates into the application. The codebase can serve the application over HTTPS, and users also have the option of installing their own certificates obtained from certificate authorities such as LetsEncrypt.
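As one possible route (an assumption for illustration, not part of the default DTaaS installation), such a certificate can be obtained with the certbot client on the host that serves foo.com:

# Obtain a Let's Encrypt certificate for the domain; certbot must be able to bind port 80
sudo certbot certonly --standalone -d foo.com
# The certificate and key are then available under /etc/letsencrypt/live/foo.com/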

What security measures does your cloud provider offer?

The current installation of DTaaS software runs on Aarhus University servers. The university network offers firewall access control to servers so that only permitted user groups have access to the network and physical access to the server.

How is user access controlled and authenticated?

There is a two-level authentication mechanism in place in each default installation of DTaaS. The first level is HTTP basic authentication over a secure HTTPS connection. The second level is the OAuth PKCE authentication flow for each user. The OAuth authentication is provided by a Gitlab instance. The DTaaS does not store the account and authentication information of users.

Does your platform manage personal data? How is data classified and tagged based on sensitivity? Who has access to the critical data?

The platform does not store personal data of users.

How are identities and roles managed within the platform?

There are two roles for users on the platform. One is the administrator and the other is the user. The user roles are managed by the administrator.

"},{"location":"LICENSE.html","title":"License","text":"

--- Start of Definition of INTO-CPS Association Public License ---

/*

  • This file is part of the INTO-CPS Association.

  • Copyright (c) 2017-CurrentYear, INTO-CPS Association (ICA),

  • c/o Peter Gorm Larsen, Aarhus University, Department of Engineering,
  • Finlandsgade 22, 8200 Aarhus N, Denmark.

  • All rights reserved.

  • THIS PROGRAM IS PROVIDED UNDER THE TERMS OF GPL VERSION 3 LICENSE OR

  • THIS INTO-CPS ASSOCIATION PUBLIC LICENSE (ICAPL) VERSION 1.0.
  • ANY USE, REPRODUCTION OR DISTRIBUTION OF THIS PROGRAM CONSTITUTES
  • RECIPIENT'S ACCEPTANCE OF THE INTO-CPS ASSOCIATION PUBLIC LICENSE OR
  • THE GPL VERSION 3, ACCORDING TO RECIPIENTS CHOICE.

  • The INTO-CPS tool suite software and the INTO-CPS Association

  • Public License (ICAPL) are obtained from the INTO-CPS Association, either
  • from the above address, from the URLs: http://www.into-cps.org or
  • in the INTO-CPS tool suite distribution.
  • GNU version 3 is obtained from: http://www.gnu.org/copyleft/gpl.html.

  • This program is distributed WITHOUT ANY WARRANTY; without

  • even the implied warranty of MERCHANTABILITY or FITNESS
  • FOR A PARTICULAR PURPOSE, EXCEPT AS EXPRESSLY SET FORTH
  • IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF
  • THE INTO-CPS ASSOCIATION PUBLIC LICENSE.

  • See the full ICAPL conditions for more details.

*/

--- End of INTO-CPS Association Public License Header ---

The ICAPL is a public license for the INTO-CPS tool suite with three modes/alternatives (GPL, ICA-Internal-EPL, ICA-External-EPL) for use and redistribution, in source and/or binary/object-code form:

  • GPL. Any party (member or non-member of the INTO-CPS Association) may use and redistribute INTO-CPS tool suite under GPL version 3.

  • Silver Level members of the INTO-CPS Association may also use and redistribute the INTO-CPS tool suite under ICA-Internal-EPL conditions.

  • Gold Level members of the INTO-CPS Association may also use and redistribute The INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL conditions.

Definitions of the INTO-CPS Association Public license modes:

  • GPL = GPL version 3.

  • ICA-Internal-EPL = These INTO-CPA Association Public license conditions together with Internally restricted EPL, i.e., EPL version 1.0 with the Additional Condition that use and redistribution by a member of the INTO-CPS Association is only allowed within the INTO-CPS Association member's own organization (i.e., its own legal entity), or for a member of the INTO-CPS Association paying a membership fee corresponding to the size of the organization including all its affiliates, use and redistribution is allowed within/between its affiliates.

  • ICA-External-EPL = These INTO-CPA Association Public license conditions together with Externally restricted EPL, i.e., EPL version 1.0 with the Additional Condition that use and redistribution by a member of the INTO-CPS Association, or by a Licensed Third Party Distributor having a redistribution agreement with that member, to parties external to the INTO-CPS Association member\u2019s own organization (i.e., its own legal entity) is only allowed in binary/object-code form, except the case of redistribution to other members the INTO-CPS Association to which source is also allowed to be distributed.

[This has the consequence that an external party who wishes to use the INTO-CPS Association in source form together with its own proprietary software in all cases must be a member of the INTO-CPS Association].

In all cases of usage and redistribution by recipients, the following conditions also apply:

a) Redistributions of source code must retain the above copyright notice, all definitions, and conditions. It is sufficient if the ICAPL Header is present in each source file, if the full ICAPL is available in a prominent and easily located place in the redistribution.

b) Redistributions in binary/object-code form must reproduce the above copyright notice, all definitions, and conditions. It is sufficient if the ICAPL Header and the location in the redistribution of the full ICAPL are present in the documentation and/or other materials provided with the redistribution, if the full ICAPL is available in a prominent and easily located place in the redistribution.

c) A recipient must clearly indicate its chosen usage mode of ICAPL, in accompanying documentation and in a text file ICA-USAGE-MODE.txt, provided with the distribution.

d) Contributor(s) making a Contribution to the INTO-CPS Association thereby also makes a Transfer of Contribution Copyright. In return, upon the effective date of the transfer, ICA grants the Contributor(s) a Contribution License of the Contribution. ICA has the right to accept or refuse Contributions.

Definitions:

\"Subsidiary license conditions\" means:

The additional license conditions depending on the by the recipient chosen mode of ICAPL, defined by GPL version 3.0 for GPL, and by EPL for ICA-Internal-EPL and ICA-External-EPL.

\"ICAPL\" means:

INTO-CPS Association Public License version 1.0, i.e., the license defined here (the text between \"--- Start of Definition of INTO-CPS Association Public License ---\" and \"--- End of Definition of INTO-CPS Association Public License ---\", or later versions thereof.

\"ICAPL Header\" means:

INTO-CPS Association Public License Header version 1.2, i.e., the text between \"--- Start of Definition of INTO-CPS Association Public License ---\" and \"--- End of INTO-CPS Association Public License Header ---, or later versions thereof.

\"Contribution\" means:

a) in the case of the initial Contributor, the initial code and documentation distributed under ICAPL, and

b) in the case of each subsequent Contributor: i) changes to the INTO-CPS tool suite, and ii) additions to the INTO-CPS tool suite;

where such changes and/or additions to the INTO-CPS tool suite originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the INTO-CPS tool suite by such Contributor itself or anyone acting on such Contributor's behalf.

For Contributors licensing the INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL conditions, the following conditions also hold:

Contributions do not include additions to the distributed Program which: (i) are separate modules of software distributed in conjunction with the INTO-CPS tool suite under their own license agreement, (ii) are separate modules which are not derivative works of the INTO-CPS tool suite, and (iii) are separate modules of software distributed in conjunction with the INTO-CPS tool suite under their own license agreement where these separate modules are merged with (weaved together with) modules of The INTO-CPS tool suite to form new modules that are distributed as object code or source code under their own license agreement, as allowed under the Additional Condition of internal distribution according to ICA-Internal-EPL and/or Additional Condition for external distribution according to ICA-External-EPL.

\"Transfer of Contribution Copyright\" means that the Contributors of a Contribution transfer the ownership and the copyright of the Contribution to the INTO-CPS Association, the INTO-CPS Association Copyright owner, for inclusion in the INTO-CPS tool suite. The transfer takes place upon the effective date when the Contribution is made available on the INTO-CPS Association web site under ICAPL, by such Contributors themselves or anyone acting on such Contributors' behalf. The transfer is free of charge. If the Contributors or the INTO-CPS Association so wish, an optional Copyright transfer agreement can be signed between the INTO-CPS Association and the Contributors.

\"Contribution License\" means a license from the INTO-CPS Association to the Contributors of the Contribution, effective on the date of the Transfer of Contribution Copyright, where the INTO-CPS Association grants the Contributors a non-exclusive, world-wide, transferable, free of charge, perpetual license, including sublicensing rights, to use, have used, modify, have modified, reproduce and or have reproduced the contributed material, for business and other purposes, including but not limited to evaluation, development, testing, integration and merging with other software and distribution. The warranty and liability disclaimers of ICAPL apply to this license.

\"Contributor\" means any person or entity that distributes (part of) the INTO-CPS tool chain.

\"The Program\" means the Contributions distributed in accordance with ICAPL.

\"The INTO-CPS tool chain\" means the Contributions distributed in accordance with ICAPL.

\"Recipient\" means anyone who receives the INTO-CPS tool chain under ICAPL, including all Contributors.

\"Licensed Third Party Distributor\" means a reseller/distributor having signed a redistribution/resale agreement in accordance with ICAPL and the INTO-CPS Association Bylaws, with a Gold Level organizational member which is not an Affiliate of the reseller/distributor, for distributing a product containing part(s) of the INTO-CPS tool suite. The Licensed Third Party Distributor shall only be allowed further redistribution to other resellers if the Gold Level member is granting such a right to it in the redistribution/resale agreement between the Gold Level member and the Licensed Third Party Distributor.

\"Affiliate\" shall mean any legal entity, directly or indirectly, through one or more intermediaries, controlling or controlled by or under common control with any other legal entity, as the case may be. For purposes of this definition, the term \"control\" (including the terms \"controlling,\" \"controlled by\" and \"under common control with\") means the possession, direct or indirect, of the power to direct or cause the direction of the management and policies of a legal entity, whether through the ownership of voting securities, by contract or otherwise.

NO WARRANTY

EXCEPT AS EXPRESSLY SET FORTH IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF ICAPL, THE INTO-CPS ASSOCIATION IS PROVIDED ON AN \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the INTO-CPS tool suite and assumes all risks associated with its exercise of rights under ICAPL , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.

DISCLAIMER OF LIABILITY

EXCEPT AS EXPRESSLY SET FORTH IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF ICAPL, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE INTO-CPS TOOL SUITE OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

A Contributor licensing the INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL may choose to distribute (parts of) the INTO-CPS tool suite in object code form under its own license agreement, provided that:

a) it complies with the terms and conditions of ICAPL; or for the case of redistribution of the INTO-CPS tool suite together with proprietary code it is a dual license where the INTO-CPS tool suite parts are distributed under ICAPL compatible conditions and the proprietary code is distributed under proprietary license conditions; and

b) its license agreement: i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; iii) states that any provisions which differ from ICAPL are offered by that Contributor alone and not by any other party; and iv) states from where the source code for the INTO-CPS tool suite is available, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.

When the INTO-CPS tool suite is made available in source code form:

a) it must be made available under ICAPL; and

b) a copy of ICAPL must be included with each copy of the INTO-CPS tool suite.

c) a copy of the subsidiary license associated with the selected mode of ICAPL must be included with each copy of the INTO-CPS tool suite.

Contributors may not remove or alter any copyright notices contained within The INTO-CPS tool suite.

If there is a conflict between ICAPL and the subsidiary license conditions, ICAPL has priority.

This Agreement is governed by the laws of Denmark. The place of jurisdiction for all disagreements related to this Agreement, is Aarhus, Denmark.

The EPL 1.0 license definition has been obtained from: http://www.eclipse.org/legal/epl-v10.html. It is also reproduced in the INTO-CPS distribution.

The GPL Version 3 license definition has been obtained from http://www.gnu.org/copyleft/gpl.html. It is also reproduced in the INTO-CPS distribution.

--- End of Definition of INTO-CPS Association Public License ---

"},{"location":"PUBLISH.html","title":"Project Documentation","text":"

This file contains instructions for creation, compilation and publication of project documentation.

The documentation system is based on Material for Mkdocs. The documentation is generated based on the configuration files:

  • mkdocs.yml: used for generating online documentation which is hosted on the web
  • mkdocs-github.yml: used for generating documentation in github actions

Install Mkdocs using the following command.

pip install -r docs/requirements.txt\n
"},{"location":"PUBLISH.html#fix-linting-errors","title":"Fix Linting Errors","text":"

This project uses the markdownlint linter tool for identifying formatting issues in markdown files. Run

mdl docs\n

from the top directory of the project and fix any identified issues. This needs to be done before committing changes to the documentation.

"},{"location":"PUBLISH.html#create-documentation","title":"Create documentation","text":"

The document generation pipeline can generate both html and pdf versions of documentation.

The generation of pdf version of documentation is controlled via a shell variable.

export MKDOCS_ENABLE_PDF_EXPORT=0 #disables generation of pdf document\nexport MKDOCS_ENABLE_PDF_EXPORT=1 #enables generation of pdf document\n

The mkdocs utility allows for live editing of documentation on the developer computer.

You can add, and edit the markdown files in docs/ directory to update the documentation. There is a facility to check the status of your documentation by using:

mkdocs serve --config-file mkdocs.yml\n
"},{"location":"PUBLISH.html#publish-documentation","title":"Publish documentation","text":"

You can compile and place the html version of documentation on the webpage-docs branch of the codebase.

export MKDOCS_ENABLE_PDF_EXPORT=1 #enable generation of pdf document\nsource script/docs.sh [version]\n

The command takes an optional version parameter. This version parameter is needed for making a release. Otherwise, the documentation gets published with the latest version tag. This command makes a new commit on webpage-docs branch. You need to push the branch to upstream.

git push webpage-docs\n

The github pages system serves the project documentation from this branch.

"},{"location":"bugs.html","title":"Few issues in the Software","text":""},{"location":"bugs.html#third-party-software","title":"Third-Party Software","text":"
  • We use third-party software which have certain known issues. Some of the issues are listed below.
"},{"location":"bugs.html#ml-workspace","title":"ML Workspace","text":"
  • the docker container loses network connectivity after three days. The only known solution is to restart the docker container. You don't need to restart the complete DTaaS platform; restarting the docker container of ml-workspace is sufficient (see the restart example after this list).
  • the terminal tool doesn't seem to have the ability to refresh itself. If there is an issue, the only solution is to close and reopen the terminal from the \"open tools\" drop-down of the notebook
  • the terminal app does not show at all after some time: the terminal always appears if it is opened from the drop-down menu of Jupyter Notebook, but not as a direct link.
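A minimal sketch of such a restart, assuming the workspace container was named ml-workspace-user1 when it was created:

# Restart only the affected workspace container; the rest of the DTaaS platform keeps running
docker restart ml-workspace-user1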
"},{"location":"bugs.html#gitlab","title":"Gitlab","text":"
  • The gitlab oauth authentication service does not have a way to sign out of a third-party application. Even if you sign out of DTaaS, gitlab still shows the user as signed in. The next time you click on the sign in button on the DTaaS page, the user is not shown the login page. Instead, the user is directly taken to the Library page. So close the browser window after you are done. Another way to overcome this limitation is to open your gitlab instance (https://gitlab.foo.com) and sign out from there. Thus the user needs to sign out of two places, namely DTaaS and gitlab, in order to completely exit the DTaaS application.
"},{"location":"thanks.html","title":"Contributors","text":"

code contributors

"},{"location":"thanks.html#users","title":"Users","text":"

Cl\u00e1udio \u00c2ngelo Gon\u00e7alves Gomes, Dmitri Tcherniak, Elif Ecem Bas, Giuseppe Abbiati, Hao Feng, Henrik Ejersbo, Tanusree Roy, Farshid Naseri

"},{"location":"thanks.html#documentation","title":"Documentation","text":"
  1. Talasila, P., Gomes, C., Mikkelsen, P. H., Arboleda, S. G., Kamburjan, E., & Larsen, P. G. (2023). Digital Twin as a Service (DTaaS): A Platform for Digital Twin Developers and Users arXiv preprint arXiv:2305.07244.
  2. Astitva Sehgal for developer and example documentation.
  3. Tanusree Roy and Farshid Naseri for asking interesting questions that ended up in FAQs.
"},{"location":"admin/host.html","title":"DTaaS on Linux Operating System","text":"

These are installation instructions for running DTaaS application on a Ubuntu Server 22.04 Operating System. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space.

A dummy foo.com URL has been used for illustration. Please change this to your unique website URL. It is assumed that you are going to serve the application in only HTTPS mode.

A successful installation will create a setup similar to the one shown in the figure.

Please follow these steps to make this work in your local environment. Download the DTaaS.zip from the releases page. Unzip the same into a directory named DTaaS. The rest of the instructions assume that your working directory is DTaaS.

Note

If you only want to test the application and are not setting up a production instance, you can follow the instructions of trial installation.

"},{"location":"admin/host.html#configuration","title":"Configuration","text":"

You need to configure the Traefik gateway, library microservice and react client website.

The first step is to decide on the number of users and their usernames. The Traefik gateway configuration has a template for two users. You can modify the usernames in the template to the usernames chosen by you.

"},{"location":"admin/host.html#traefik-gateway-server","title":"Traefik gateway server","text":"

You can run the Traefik gateway server in both HTTP and HTTPS mode to experience the DTaaS application. The installation guide assumes that you can run the application in HTTPS mode.

The Traefik gateway configuration is at deploy/config/gateway/fileConfig.yml. Change foo.com to your local hostname and user1/user2 to the usernames chosen by you.

Tip

Do not use http:// or https:// in deploy/config/gateway/fileConfig.yml.
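As a rough sketch (the route, port and file path below are assumptions based on the two-user template, not the exact shipped configuration), the relevant part of fileConfig.yml for one user looks roughly like this:

http:
  routers:
    user1-workspace:
      # Bare hostname in the Host rule, without http:// or https://
      rule: "Host(`foo.com`)"
      middlewares:
        - basic-auth
      service: user1-workspace
  middlewares:
    basic-auth:
      basicAuth:
        # Points at the auth file created with htpasswd in the next step
        usersFile: "/etc/traefik/auth"
  services:
    user1-workspace:
      loadBalancer:
        servers:
          - url: "http://localhost:8090"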

"},{"location":"admin/host.html#authentication","title":"Authentication","text":"

This step requires the htpasswd command-line utility. If it is not available on your system, please install it using

sudo apt-get install -y apache2-utils\n

You can now proceed with update of the gateway authentication setup. The dummy username is foo and the password is bar. Please change this before starting the gateway.

rm deploy/config/gateway/auth\ntouch deploy/config/gateway/auth\nhtpasswd deploy/config/gateway/auth <first_username>\npassword: <your password>\n

The user credentials added in deploy/config/gateway/auth should match the usernames in deploy/config/gateway/fileConfig.yml.
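To add credentials for further users (a usage note: htpasswd appends to an existing file when the -c flag is not given), repeat the command once per username listed in fileConfig.yml:

# Append a second user to the existing auth file (no -c, so the file is not overwritten)
htpasswd deploy/config/gateway/auth <second_username>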

"},{"location":"admin/host.html#lib-microservice","title":"Lib microservice","text":"

The library microservice requires configuration. A template of this configuration file is given in deploy/config/lib file. Please modify this file as per your needs.

The first step in this configuration is to prepare a filesystem for users. An example file system is given in the files/ directory. You can rename the top-level user1/user2 directories to the usernames chosen by you.

Add an environment file named .env in lib for the library microservice. An example .env file is given below. The simplest possibility is to use local mode with the following example. The filepath is the absolute filepath to files/ directory. You can copy this configuration into deploy/config/lib file to get started.

PORT='4001'\nMODE='local'\nLOCAL_PATH='filepath'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib'\nGRAPHQL_PLAYGROUND='true'\n
"},{"location":"admin/host.html#react-client-website","title":"React Client Website","text":""},{"location":"admin/host.html#gitlab-oauth-application","title":"Gitlab OAuth application","text":"

The DTaaS react website requires Gitlab OAuth provider. If you need more help with this step, please see the Authentication page.

You need the following information from the OAuth application registered on Gitlab:

Gitlab Variable Name Variable name in Client env.js Default Value OAuth Provider REACT_APP_AUTH_AUTHORITY https://gitlab.foo.com/ Application ID REACT_APP_CLIENT_ID Callback URL REACT_APP_REDIRECT_URI https://foo.com/Library Scopes REACT_APP_GITLAB_SCOPES openid, profile, read_user, read_repository, api

You can also see the Gitlab help page for getting the Gitlab OAuth application details. Remember to create gitlab accounts for the usernames chosen by you.

"},{"location":"admin/host.html#update-client-config","title":"Update Client Config","text":"

Change the React website configuration in deploy/config/client/env.js.

window.env = {\nREACT_APP_ENVIRONMENT: \"prod\",\nREACT_APP_URL: \"https://foo.com/\",\nREACT_APP_URL_BASENAME: \"dtaas\",\nREACT_APP_URL_DTLINK: \"/lab\",\nREACT_APP_URL_LIBLINK: \"\",\nREACT_APP_WORKBENCHLINK_TERMINAL: \"/terminals/main\",\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: \"/tools/vnc/?password=vncpassword\",\nREACT_APP_WORKBENCHLINK_VSCODE: \"/tools/vscode/\",\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: \"/lab\",\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: \"\",\nREACT_APP_CLIENT_ID:\n\"934b98f03f1b6f743832b2840bf7cccaed93c3bfe579093dd0942a433691ccc0\",\nREACT_APP_AUTH_AUTHORITY: \"https://gitlab.foo.com/\",\nREACT_APP_REDIRECT_URI: \"https://foo.com/Library\",\nREACT_APP_LOGOUT_REDIRECT_URI: \"https://foo.com/\",\nREACT_APP_GITLAB_SCOPES: \"openid profile read_user read_repository api\",\n};\n
"},{"location":"admin/host.html#update-the-installation-script","title":"Update the installation script","text":"

Open deploy/install.sh and update user1/user2 to usernames chosen by you.

"},{"location":"admin/host.html#perform-the-installation","title":"Perform the Installation","text":"

Go to the DTaaS directory and execute

source deploy/install.sh\n

You can run this script multiple times until the installation is successful.

Note

While installing, you might encounter multiple dialogs asking which services should be restarted. Just click OK on all of those.

"},{"location":"admin/host.html#post-install-check","title":"Post-install Check","text":"

Now you should be able to access the DTaaS application at: https://foo.com.

If you can follow all the screenshots from the user website, everything is correctly set up.

"},{"location":"admin/host.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"admin/overview.html","title":"Overview","text":""},{"location":"admin/overview.html#what-is-the-goal","title":"What is the goal?","text":"

The goal is to set up the DTaaS infrastructure in order to enable your users to use the DTaaS. As an admin, you will administer the users and the servers of the system.

"},{"location":"admin/overview.html#what-are-the-requirements","title":"What are the requirements?","text":""},{"location":"admin/overview.html#oauth-provider","title":"OAuth Provider","text":"

You need to have an OAuth Provider running, which the DTaaS can use for authentication. This is described further in the authentication section.

"},{"location":"admin/overview.html#domain-name","title":"Domain name","text":"

The DTaaS software can only be hosted on a server with a domain name like foo.com.

"},{"location":"admin/overview.html#reverse-proxy","title":"Reverse Proxy","text":"

The installation setup assumes that the foo.com server is behind a reverse proxy / load balancer that provides https termination. You can still use the DTaaS software even if you do not have this reverse proxy. If you do not have a reverse proxy, please replace https://foo.com with http://foo.com in the client env.js file and in the OAuth registration. The other installation configuration remains the same.
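For instance (a sketch; the env.js path is the one used later in this guide), the replacement can be scripted with sed:

# Switch the client configuration from HTTPS to HTTP when no TLS-terminating proxy is available
sed -i 's|https://foo.com|http://foo.com|g' deploy/config/client/env.js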

"},{"location":"admin/overview.html#what-to-install","title":"What to install?","text":"

The DTaaS can be installed in different ways. Each version is for different purposes:

  • Trial installation on single host
  • Production installation on single host
  • On one or two Vagrant virtual machines
  • Separate Packages: client website and lib microservice

Follow the installation that fits your use case.

"},{"location":"admin/services.html","title":"Third-party Services","text":"

The DTaaS software platform uses third-party software services to provide enhanced value to users.

InfluxDB, Grafana, RabbitMQ and Mosquitto are default services integrated into the DTaaS software platform.

"},{"location":"admin/services.html#pre-requisites","title":"Pre-requisites","text":"

All these services run on raw TCP/UDP ports. Thus direct network access to these services is required for both the DTs running inside the DTaaS software and the PTs located outside the DTaaS software.

There are two possible choices here:

  • Configure Traefik gateway to permit TCP/UDP traffic
  • Bypass Traefik altogether

Unless you are an informed user of Traefik, we recommend bypassing Traefik and providing raw TCP/UDP access to these services from the Internet.

The InfluxDB service requires a dedicated hostname. The management interface of RabbitMQ service requires a dedicated hostname as well.

The Grafana service can run well behind the Traefik gateway. The default Traefik configuration permits access to Grafana at the URL: http(s)://foo.com/vis.

"},{"location":"admin/services.html#configure-and-install","title":"Configure and Install","text":"

If you have not cloned the DTaaS git repository, cloning would be the first step. In case you already have the codebase, you can skip the cloning step. To clone, do:

git clone https://github.com/into-cps-association/DTaaS.git\ncd DTaaS/deploy/services\n

The next step in the installation is to specify the config of the services. There are two configuration files. The services.yml file contains most of the configuration settings. The mqtt-default.conf file contains the MQTT listening port. Update these two config files before proceeding with the installation of the services.
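A minimal sketch of what mqtt-default.conf may contain (the shipped file can differ; these are standard Mosquitto directives for the listening port):

# Listen for MQTT traffic on the default port
listener 1883
# Allow clients to connect without credentials (tighten this for a production setup)
allow_anonymous true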

Now continue with the installation of services.

yarn install\nnode services.js\n
"},{"location":"admin/services.html#use","title":"Use","text":"

After the installation is complete, you can see the following services active at the following ports / URLs.

service external url Influx services.foo.com Grafana services.foo.com:3000 RabbitMQ Broker services.foo.com:5672 RabbitMQ Broker Management Website services.foo.com:15672 MQTT Broker services.foo.com:1883

The firewall and network access settings of the corporate / cloud network need to be configured to allow external access to the services. Otherwise, the users of DTaaS will not be able to utilize these services from their user workspaces.
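As an illustration (assuming the host firewall is managed with ufw; the ports are the ones listed in the table above, and 8086 is the usual InfluxDB HTTP API port, which is an assumption here), the service ports can be opened like this:

sudo ufw allow 1883/tcp    # MQTT broker
sudo ufw allow 5672/tcp    # RabbitMQ broker
sudo ufw allow 15672/tcp   # RabbitMQ management website
sudo ufw allow 3000/tcp    # Grafana
sudo ufw allow 8086/tcp    # InfluxDB HTTP API (assumed port)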

"},{"location":"admin/trial.html","title":"Trial Installation","text":"

To try out the software, you can install it on Ubuntu Server 22.04 Operating System. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space to the vagrant box. A successful installation will create a setup similar to the one shown in the figure.

A one-step installation script is provided on this page. This script sets up the DTaaS software with default credentials and users. You can use it to check a test installation of DTaaS software.

"},{"location":"admin/trial.html#pre-requisites","title":"Pre-requisites","text":""},{"location":"admin/trial.html#1-domain-name","title":"1. Domain name","text":"

You need a domain name to run the application. The install script assumes foo.com to be your domain name. You will change this after running the script.

"},{"location":"admin/trial.html#2-gitlab-oauth-application","title":"2. Gitlab OAuth application","text":"

The DTaaS react website requires Gitlab OAuth provider. If you need more help with this step, please see the Authentication page.

You need the following information from the OAuth application registered on Gitlab:

Gitlab Variable Name Variable name in Client env.js Default Value OAuth Provider REACT_APP_AUTH_AUTHORITY https://gitlab.foo.com/ Application ID REACT_APP_CLIENT_ID Callback URL REACT_APP_REDIRECT_URI https://foo.com/Library Scopes REACT_APP_GITLAB_SCOPES openid, profile, read_user, read_repository, api

You can also see Gitlab help page for getting the Gitlab OAuth application details.

Remember to create gitlab accounts for user1 and user2.

"},{"location":"admin/trial.html#install","title":"Install","text":"

Note

While installing, you might encounter multiple dialogs asking which services should be restarted. Just click OK on all of those.

Run the following scripts.

wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS/feature/distributed-demo/deploy/single-script-install.sh\nbash single-script-install.sh\n

Warning

This test installation has default credentials and is thus highly insecure.

"},{"location":"admin/trial.html#post-install","title":"Post install","text":"

After running the install script, please change foo.com and the Gitlab OAuth details to your local settings in the following files.

~/DTaaS/client/build/env.js\n~/DTaaS/servers/config/gateway/dynamic/fileConfig.yml\n
"},{"location":"admin/trial.html#post-install-check","title":"Post-install Check","text":"

Now when you visit your domain, you should be able to log in through your OAuth Provider and access the DTaaS web UI.

If you can follow all the screenshots from the user website, everything is correctly set up.

"},{"location":"admin/trial.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"admin/client/CLIENT.html","title":"Host the DTaaS Client Website","text":"

To host DTaaS client website on your server, follow these steps:

  • Download the DTaaS-client.zip from the releases page.
  • Inside the DTaaS-client directory, there is a site directory. The site directory contains all the optimized static files that are ready for deployment.

  • Set up the oauth application on the gitlab instance. See the instructions on the authentication page for completing this task.

  • Locate the file site/env.js and replace the example values to match your infrastructure. The constructed links will be \"REACT_APP_URL/REACT_APP_URL_BASENAME/{username}/{Endpoint}\". See the definitions below:
window.env = {\nREACT_APP_ENVIRONMENT: \"prod | dev\",\nREACT_APP_URL: \"URL for the gateway\",\nREACT_APP_URL_BASENAME: \"Base URL for the client website\"(optional),\nREACT_APP_URL_DTLINK: \"Endpoint for the Digital Twin\",\nREACT_APP_URL_LIBLINK: \"Endpoint for the Library Assets\",\nREACT_APP_WORKBENCHLINK_TERMINAL: \"Endpoint for the terminal link\",\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: \"Endpoint for the VNC Desktop link\",\nREACT_APP_WORKBENCHLINK_VSCODE: \"Endpoint for the VS Code link\",\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: \"Endpoint for the Jupyter Lab link\",\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK:\n\"Endpoint for the Jupyter Notebook link\",\nREACT_APP_CLIENT_ID: 'AppID genereated by the gitlab OAuth provider',\nREACT_APP_AUTH_AUTHORITY: 'URL of the private gitlab instance',\nREACT_APP_REDIRECT_URI: 'URL of the homepage for the logged in users of the website',\nREACT_APP_LOGOUT_REDIRECT_URI: 'URL of the homepage for the anonymous users of the website',\nREACT_APP_GITLAB_SCOPES: 'OAuth scopes. These should match with the scopes set in gitlab OAuth provider',\n};\n// Example values with no base URL. Trailing and ending slashes are optional.\nwindow.env = {\nREACT_APP_ENVIRONMENT: 'prod',\nREACT_APP_URL: 'https://foo.com/',\nREACT_APP_URL_BASENAME: '',\nREACT_APP_URL_DTLINK: '/lab',\nREACT_APP_URL_LIBLINK: '',\nREACT_APP_WORKBENCHLINK_TERMINAL: '/terminals/main',\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: '/tools/vnc/?password=vncpassword',\nREACT_APP_WORKBENCHLINK_VSCODE: '/tools/vscode/',\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: '/lab',\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: '',\nREACT_APP_CLIENT_ID: '934b98f03f1b6f743832b2840bf7cccaed93c3bfe579093dd0942a433691ccc0',\nREACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\nREACT_APP_REDIRECT_URI: 'https://foo.com/Library',\nREACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/',\nREACT_APP_GITLAB_SCOPES: 'openid profile read_user read_repository api',\n};\n// Example values with \"bar\" as basename URL.\n//Trailing and ending slashes are optional.\nwindow.env = {\nREACT_APP_ENVIRONMENT: \"dev\",\nREACT_APP_URL: 'https://foo.com/',\nREACT_APP_URL_BASENAME: 'bar',\nREACT_APP_URL_DTLINK: '/lab',\nREACT_APP_URL_LIBLINK: '',\nREACT_APP_WORKBENCHLINK_TERMINAL: '/terminals/main',\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: '/tools/vnc/?password=vncpassword',\nREACT_APP_WORKBENCHLINK_VSCODE: '/tools/vscode/',\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: '/lab',\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: '',\nREACT_APP_CLIENT_ID: '934b98f03f1b6f743832b2840bf7cccaed93c3bfe579093dd0942a433691ccc0',\nREACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\nREACT_APP_REDIRECT_URI: 'https://foo.com/bar/Library',\nREACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/bar',\nREACT_APP_GITLAB_SCOPES: 'openid profile read_user read_repository api',\n};\n
  • Copy the entire contents of the site directory to the root directory of your server where you want to deploy the app. You can use FTP, SFTP, or any other file transfer protocol to transfer the files.

  • Make sure your server is configured to serve static files. This can vary depending on the server technology you are using, but typically you will need to configure your server to serve files from a specific directory.

  • Once the files are on your server, you should be able to access your app by visiting your server's IP address or domain name in a web browser.

The website depends on Traefik gateway and ML Workspace components to be available. Otherwise, you only get a skeleton non-functional website.

"},{"location":"admin/client/CLIENT.html#complementary-components","title":"Complementary Components","text":"

The website requires background services for providing actual functionality. The minimum background service required is at least one ML Workspace serving the following routes.

https://foo.com/<username>/lab\nhttps://foo.com/<username>/terminals/main\nhttps://foo.com/<username>/tools/vnc/?password=vncpassword\nhttps://foo.com/<username>/tools/vscode/\n

The username is the user workspace created using the ML Workspace docker container. Please follow the instructions in the README. You can create as many user workspaces as you want. If you have two users - alice and bob - on your system, then the following commands will instantiate the required user workspaces.

mkdir -p files/alice files/bob files/common\n\nprintf \"\\n\\n start the user workspaces\"\ndocker run -d \\\n-p 8090:8080 \\\n--name \"ml-workspace-alice\" \\\n-v \"$(pwd)/files/alice:/workspace\" \\\n-v \"$(pwd)/files/common:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"alice\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2\n\ndocker run -d \\\n-p 8091:8080 \\\n--name \"ml-workspace-bob\" \\\n-v \"$(pwd)/files/bob:/workspace\" \\\n-v \"$(pwd)/files/common:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"bob\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2\n

Given that multiple services are running at different routes, a reverse proxy is needed to map the background services to external routes. You can use Apache, NGINX, Traefik or any other software as the reverse proxy.
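
As an illustration, a minimal Traefik dynamic configuration mapping the alice and bob workspaces above to external routes could look like the sketch below. The router and service names are hypothetical, the localhost ports 8090 and 8091 come from the docker commands above, and authentication middleware is omitted for brevity.

http:\n  routers:\n    alice:\n      entryPoints:\n        - http\n      rule: 'Host(`foo.com`) && PathPrefix(`/alice`)'\n      service: alice\n    bob:\n      entryPoints:\n        - http\n      rule: 'Host(`foo.com`) && PathPrefix(`/bob`)'\n      service: bob\n  services:\n    alice:\n      loadBalancer:\n        servers:\n          - url: 'http://localhost:8090'\n    bob:\n      loadBalancer:\n        servers:\n          - url: 'http://localhost:8091'\n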

The website screenshots and usage information are available on the user page.

"},{"location":"admin/client/auth.html","title":"Setting Up OAuth","text":"

To enable user authentication on the DTaaS React client website, you will use the OAuth authentication protocol, specifically the PKCE authentication flow. Here are the steps to get started:

1. Choose Your GitLab Server:

  • You need to set up OAuth authentication on a GitLab server. The commercial gitlab.com is not suitable for multi-user authentication (DTaaS requires this), so you'll need an on-premise GitLab instance.
  • You can use GitLab Omnibus Docker for this purpose.
  • Configure the OAuth application as an instance-wide authentication type.

2. Determine Your Website's Hostname:

  • Before setting up OAuth on GitLab, decide on the hostname for your website. It's recommended to use a self-hosted GitLab instance, which you will use in other parts of the DTaaS application.

3. Define Callback and Logout URLs:

  • For the PKCE authentication flow to function correctly, you need two URLs: a callback URL and a logout URL.
  • The callback URL informs the OAuth provider of the page where signed-in users should be redirected. It's different from the landing homepage of the DTaaS application.
  • The logout URL is where users will be directed after logging out.

4. OAuth Application Creation:

  • During the creation of the OAuth application on GitLab, you need to specify the scope. Choose openid, profile, read_user, read_repository, and api scopes.

5. Application ID:

  • After successfully creating the OAuth application, GitLab generates an application ID. This is a long string of HEX values that you will need for your configuration files.

6. Required Information from OAuth Application:

  • You will need the following information from the OAuth application registered on GitLab:
GitLab Variable Name | Variable Name in Client env.js | Default Value
OAuth Provider | REACT_APP_AUTH_AUTHORITY | https://gitlab.foo.com/
Application ID | REACT_APP_CLIENT_ID | (none)
Callback URL | REACT_APP_REDIRECT_URI | https://foo.com/Library
Scopes | REACT_APP_GITLAB_SCOPES | openid, profile, read_user, read_repository, api

7. Create User Accounts:

Create user accounts in gitlab for all the usernames chosen during installation. The trial installation script comes with two default usernames - user1 and user2. For all other installation scenarios, accounts with specific usernames need to be created on gitlab.

"},{"location":"admin/client/auth.html#development-environment","title":"Development Environment","text":"

Valid callback and logout URLs are needed for development and testing purposes. You can use the same OAuth application ID for development, testing and deployment scenarios; only the callback and logout URLs change. It is possible to register multiple callback URLs in one OAuth application. In order to use OAuth for development and testing on the developer computer (localhost), you need to add the following URLs to the OAuth application.

DTaaS application URL: http://localhost:4000\nCallback URL: http://localhost:4000/Library\nLogout URL: http://localhost:4000\n

The port 4000 is the default port for running the client website.

"},{"location":"admin/client/auth.html#multiple-dtaas-applications","title":"Multiple DTaaS applications","text":"

The DTaaS is a regular web application. It is possible to host multiple DTaaS applications on the same server. The only requirement is to have distinct URLs. You can have three DTaaS applications running at the following URLs.

https://foo.com/au\nhttps://foo.com/acme\nhttps://foo.com/bar\n

All of these instances can use the same gitlab instance for authentication.

DTaaS application URL | Gitlab Instance URL | Callback URL | Logout URL | Application ID
https://foo.com/au | https://foo.gitlab.com | https://foo.com/au/Library | https://foo.com/au | autogenerated by gitlab
https://foo.com/acme | https://foo.gitlab.com | https://foo.com/acme/Library | https://foo.com/acme | autogenerated by gitlab
https://foo.com/bar | https://foo.gitlab.com | https://foo.com/bar/Library | https://foo.com/bar | autogenerated by gitlab

If you are hosting multiple DTaaS instances on the same server, do not install DTaaS with a null basename on the same server. Even though it works well, the setup is confusing and may lead to maintenance issues.

If you choose to host your DTaaS application with a basename (say bar), then the URLs in env.js change to:

DTaaS application URL: https://foo.com/bar\nGitlab instance URL: https://foo.gitlab.com\nCallback URL: https://foo.com/bar/Library\nLogout URL: https://foo.com/bar\n
"},{"location":"admin/guides/add_service.html","title":"Add other services","text":"

Pre-requisite

You should read the documentation about the already available services

This guide will show you how to add more services. In the following example we will be adding MongoDB as a service, but these steps could be modified to install other services as well.

Adding other services requires more RAM and CPU power. Please make sure the host machine meets the hardware requirements for running all the services.

1. Add the configuration:

Select configuration parameters for the MongoDB service.

Configuration Variable Name | Description
username | the username of the root user in the MongoDB
password | the password of the root user in the MongoDB
port | the mapped port on the host machine (default is 27017)
datapath | path on host machine to mount the data from the MongoDB container

Open the file /deploy/services/services.yml and add the configuration for MongoDB:

services:\n    rabbitmq:\n        username: \"dtaas\"\n        password: \"dtaas\"\n        vhost: \"/\"\n        ports:\n            main: 5672\n            management: 15672\n    ...\n    mongodb:\n        username: <username>\n        password: <password>\n        port: <port>\n        datapath: <datapath>\n    ...\n

2. Add the script:

The next step is to add the script that sets up the MongoDB container with the configuration.

Create new file named /deploy/services/mongodb.js and add the following code:

#!/usr/bin/node\n/* Install the optional platform services for DTaaS */\nimport { $ } from \"execa\";\nimport chalk from \"chalk\";\nimport fs from \"fs\";\nimport yaml from \"js-yaml\";\nconst $$ = $({ stdio: \"inherit\" });\nconst log = console.log;\nlet config;\ntry {\nlog(chalk.blue(\"Load services configuration\"));\nconfig = await yaml.load(fs.readFileSync(\"services.yml\", \"utf8\"));\nlog(\nchalk.green(\n\"configuration loading is successful and config is a valid yaml file\"\n)\n);\n} catch (e) {\nlog(chalk.red(\"configuration is invalid. Please rectify services.yml file\"));\nprocess.exit(1);\n}\nlog(chalk.blue(\"Start MongoDB server\"));\nconst mongodbConfig = config.services.mongodb;\ntry {\nlog(\nchalk.green(\n\"Attempt to delete any existing MongoDB server docker container\"\n)\n);\nawait $$`docker stop mongodb`;\nawait $$`docker rm mongodb`;\n} catch (e) {}\nlog(chalk.green(\"Start new Mongodb server docker container\"));\nawait $$`docker run -d -p ${mongodbConfig.port}:27017 \\\n--name mongodb \\\n-v ${mongodbConfig.datapath}:/data/db \\\n-e MONGO_INITDB_ROOT_USERNAME=${mongodbConfig.username} \\\n-e MONGO_INITDB_ROOT_PASSWORD=${mongodbConfig.password} \\\nmongo:7.0.3`;\nlog(chalk.green(\"MongoDB server docker container started successfully\"));\n

3. Run the script:

Go to the directory /deploy/services/ and run the services script with the following commands:

yarn install\nnode mongodb.js\n

The MongoDB server should now be available at services.foo.com:<port>.
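
You can optionally verify the setup from any machine that can reach the service. The check below assumes the mongosh shell is installed; replace the placeholders with the values chosen in services.yml.

docker ps --filter name=mongodb\nmongosh \"mongodb://<username>:<password>@services.foo.com:<port>/?authSource=admin\" --eval 'db.runCommand({ ping: 1 })'\n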

"},{"location":"admin/guides/add_user.html","title":"Add a new user","text":"

This page will guide you on how to add more users to the DTaaS. Please do the following:

Important

Make sure to replace <username> and <port>. Select a port that is not already being used by the system.

1. Add user:

Add the new user on the Gitlab instance.

2. Setup a new workspace:

The following code creates a new workspace for the new user based on user2.

cd DTaaS/files\ncp -R user2 <username>\ncd ..\ndocker run -d \\\n-p <port>:8080 \\\n--name \"ml-workspace-<username>\" \\\n-v \"${TOP_DIR}/files/<username>:/workspace\" \\\n-v \"${TOP_DIR}/files/common:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"<username>\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2\n

3. Add username and password:

The following code adds basic authentication for the new user.

cd DTaaS/servers/config/gateway\nhtpasswd auth <username>\n

4. Add 'route' for new user:

We need to add a new route to the server's ingress.

Open the following file with your preferred editor (e.g. VIM/nano).

vi DTaaS/servers/config/gateway/dynamic/fileConfig.yml\n

Now add the new route and service for the user.

Important

foo.com should be replaced with your own domain.

http:\n  routers:\n    ....\n    <username>:\n      entryPoints:\n        - http\n      rule: 'Host(`foo.com`) && PathPrefix(`/<username>`)'\n      middlewares:\n        - basic-auth\n      service: <username>\n\n  services:\n    ...\n    <username>:\n      loadBalancer:\n        servers:\n          - url: 'http://localhost:<port>'\n

5. Access the new user:

Log into the DTaaS application as the new user.

"},{"location":"admin/guides/common_workspace_readonly.html","title":"Make common asset area read only","text":""},{"location":"admin/guides/common_workspace_readonly.html#why","title":"Why","text":"

In some cases you might want to restrict the access rights of some users to the common assets. In order to make the common area read only, you have to change the install script section performing the creation of user workspaces.

"},{"location":"admin/guides/common_workspace_readonly.html#how","title":"How","text":"

To make the common assets read-only for user2, the following changes need to be made to the install script, which is located in one of the following places:

  • trial installation: single-script-install.sh

  • production installation: DTaas/deploy/install.sh

The line -v \"${TOP_DIR}/files/common:/workspace/common:ro\" needs to be added to make the common workspace read-only for user2.

Here's the updated code:

docker run -d \\\n-p 8091:8080 \\\n--name \"ml-workspace-user2\" \\\n-v \"${TOP_DIR}/files/user2:/workspace\" \\\n-v \"${TOP_DIR}/files/common:/workspace/common:ro\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"user2\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2 || true\n

This ensures that the common area is read-only for user2, while the user's own (private) assets are still writable.
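
A quick sanity check, assuming the container name used above, is to attempt a write in both areas from the host:

docker exec ml-workspace-user2 touch /workspace/common/readonly-check.txt\n# expected to fail with: Read-only file system\ndocker exec ml-workspace-user2 touch /workspace/readonly-check.txt\n# expected to succeed; the private area remains writable\n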

"},{"location":"admin/guides/hosting_site_without_https.html","title":"Hosting site without https","text":"

In the default trial or production installation setup, the https connection is provided by the reverse proxy; the DTaaS application itself runs in http mode. Removing the reverse proxy therefore removes https.

"},{"location":"admin/guides/link_service.html","title":"Link services to local ports","text":"

Requirements

  • The user needs to have an account on server2.
  • An SSH server must be running on server2.

To link a port from the services machine (server2) to a local port on the user workspace, you can use the SSH local port forwarding technique.

1. Step:

Go to the user workspace in which you want to map a local port to the services machine.

  • e.g. foo.com/user1

2. Step:

Open a terminal in your user workspace.

3. Step:

Run the following command to map a port:

ssh -fNT -L <local_port>:<destination>:<destination_port> <user>@<services.server.com>\n

Here's an example mapping the RabbitMQ broker service available on port 5672 of services.foo.com to localhost port 5672.

ssh -fNT -L 5672:localhost:5672 vagrant@services.foo.com\n

Now the programs in the user workspace can treat the RabbitMQ broker service as a local service running within the user workspace.
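
A quick check of the tunnel, assuming the nc (netcat) utility is available in the workspace, is sketched below. The pkill command stops the background tunnel when it is no longer needed.

# confirm that something is listening on the forwarded local port\nnc -zv localhost 5672\n\n# terminate the background ssh tunnel once done\npkill -f 'ssh -fNT -L 5672'\n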

"},{"location":"admin/guides/update_basepath.html","title":"Update basepath/route for the application","text":"

The updates required to make the application work with basepath (say bar):

1. Change the Gitlab OAuth URLs to include basepath:

  REACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\n  REACT_APP_REDIRECT_URI: 'https://foo.com/bar/Library',\n  REACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/bar',\n

2. Update traefik gateway config (deploy/config/gateway/fileConfig.yml):

http:\n  routers:\n    dtaas:\n      entryPoints:\n        - http\n      rule: \"Host(`foo.com`)\" #remember, there is no basepath for this rule\n      middlewares:\n        - basic-auth\n      service: dtaas\n\n    user1:\n      entryPoints:\n        - http\n      rule: \"Host(`foo.com`) && PathPrefix(`/bar/user1`)\"\n      middlewares:\n        - basic-auth\n      service: user1\n\n  # Middleware: Basic authentication\n  middlewares:\n    basic-auth:\n      basicAuth:\n        usersFile: \"/etc/traefik/auth\"\n        removeHeader: true\n\n  services:\n    dtaas:\n      loadBalancer:\n        servers:\n          - url: \"http://localhost:4000\"\n\n    user1:\n      loadBalancer:\n        servers:\n          - url: \"http://localhost:8090\"\n

3. Update deploy/config/client/env.js:

See the client documentation for an example.

4. Update install scripts:

Update deploy/install.sh by adding the basepath. For example, prefix WORKSPACE_BASE_URL with \"bar/\" for all user workspaces.

For user1, the docker command changes to:

docker run -d \\\n-p 8090:8080 \\\n--name \"ml-workspace-user1\" \\\n-v \"${TOP_DIR}/files/user1:/workspace\" \\\n-v \"${TOP_DIR}/files/common:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"bar/user1\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2 || true\n

5. Proceed with the installation using deploy/install.sh.

"},{"location":"admin/servers/lib/LIB-MS.html","title":"Host Library Microservice","text":"

The lib microservice is a simplified file manager providing a GraphQL API. It has three features:

  • Provide a listing of directory contents.
  • Transfer a file to the user.
  • Source files can come either from the local file system or from a gitlab instance.

The library microservice is designed to manage and serve files, functions, and models to users, allowing them to access and interact with various resources.

This document provides instructions for running a standalone library microservice.

"},{"location":"admin/servers/lib/LIB-MS.html#setup-the-file-system","title":"Setup the File System","text":"

The users expect the following file system structure for their reusable assets.

There is a skeleton file structure in DTaaS codebase. You can copy and create file system for your users.

"},{"location":"admin/servers/lib/LIB-MS.html#gitlab-setup-optional","title":"Gitlab setup (optional)","text":"

For this microservice to be functional, a certain directory or gitlab project structure is expected. The microservice expects the gitlab instance to contain one group, DTaaS, and within that group all of the user projects (user1, user2, ...), as well as a commons project. Each project corresponds to the files of one user. A sample file structure can be seen in the gitlab dtaas group. You can visit the gitlab documentation on groups for help on the management of gitlab groups.

You can clone the git repositories from the dtaas group to get a sample file system structure for the lib microservice.

"},{"location":"admin/servers/lib/LIB-MS.html#install","title":"Install","text":"

The package is available in Github packages registry.

Set the registry and install the package with the following commands

sudo npm config set @into-cps-association:registry https://npm.pkg.github.com\nsudo npm install -g @into-cps-association/libms\n

The npm install command asks for username and password. The username is your Github username and the password is your Github personal access token. In order for npm to download the package, your personal access token needs to have the read:packages scope.
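
If you prefer a non-interactive setup, the same registry and token can instead be placed in your ~/.npmrc file. This is a sketch; <personal-access-token> is a placeholder for your own token.

@into-cps-association:registry=https://npm.pkg.github.com\n//npm.pkg.github.com/:_authToken=<personal-access-token>\n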

"},{"location":"admin/servers/lib/LIB-MS.html#configure","title":"Configure","text":"

The microservice requires a config specified in INI format. The template configuration file is:

PORT='4001'\nMODE='local' or 'gitlab'\nLOCAL_PATH='/Users/<Username>/DTaaS/files'\nGITLAB_GROUP='dtaas'\nGITLAB_URL='https://gitlab.com/api/graphql'\nTOKEN='123-sample-token'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib' or ''\nGRAPHQL_PLAYGROUND='false' or 'true'\n

The LOCAL_PATH variable is the absolute filepath to the location of the local directory which will be served to users by the Library microservice.

The GITLAB_URL, GITLAB_GROUP and TOKEN are only relevant for gitlab mode. The TOKEN should be set to your GitLab group access API token. For more information on how to create and use your access token, see the gitlab page.

Once you've generated a token, copy it and replace the value of TOKEN with your token for the gitlab group.

Replace the default values with the appropriate values for your setup. A minimal local-mode example is given after the note below.

NOTE:

  1. When MODE=local, only LOCAL_PATH is used. Other environment variables are unused.
  2. When MODE=gitlab, GITLAB_URL, TOKEN, and GITLAB_GROUP are used; LOCAL_PATH is unused.
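
For reference, a minimal local-mode .env could look like the sketch below; the port, path and log level are example values.

PORT='4001'\nMODE='local'\nLOCAL_PATH='/home/alice/DTaaS/files'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib'\nGRAPHQL_PLAYGROUND='false'\n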
"},{"location":"admin/servers/lib/LIB-MS.html#use","title":"Use","text":"

Display help.

libms -h\n

The config is saved in a .env file by convention. The libms looks for a .env file in the working directory from which it is run. If you want to run libms without explicitly specifying the configuration file, run

libms\n

To run libms with a custom config file,

libms -c FILE-PATH\nlibms --config FILE-PATH\n

If the environment file is named something other than .env, for example as .env.development, you can run

libms -c \".env.development\"\n

You can press Ctrl+C to halt the application. If you wish to run the microservice in the background, use

nohup libms [-c FILE-PATH] & disown\n

The lib microservice is now running and ready to serve files, functions, and models.

"},{"location":"admin/servers/lib/LIB-MS.html#service-endpoint","title":"Service Endpoint","text":"

The URL endpoint for this microservice is located at: localhost:PORT/lib

The service API documentation is available on the user page.

"},{"location":"admin/vagrant/base-box.html","title":"DTaaS Vagrant Box","text":"

This README provides instructions on creating a custom Operating System virtual disk for running the DTaaS software. The virtual disk is managed by vagrant. The purpose is twofold:

  • Provide cross-platform installation of the DTaaS application. Any operating system supporting use of vagrant software utility can support installation of the DTaaS software.
  • Create a ready to use development environment for code contributors.

There are two scripts in this directory:

Script name | Purpose | Default
user.sh | user installation | yes
developer.sh | developer installation | no

If you are installing the DTaaS for regular use, the default installation caters to your needs. You can skip the next step and continue with the creation of the vagrant box.

If you are a developer and would like additional software installed, you need to modify Vagrantfile. The existing Vagrantfile has two lines:

    config.vm.provision \"shell\", path: \"user.sh\"\n#config.vm.provision \"shell\", path: \"developer.sh\"\n

Uncomment the second line to have more software components installed. If you are not a developer, no changes are required to the Vagrantfile.

The vagrant box installed for users will have the following items:

  1. docker v24.0
  2. nodejs v18.8
  3. yarn v1.22
  4. npm v10.2
  5. containers - ml-workspace-minimal v0.13, traefik v2.10, gitlab-ce v16.4, influxdb v2.7, grafana v10.1, rabbitmq v3-management, eclipse-mosquitto (mqtt) v2, mongodb v7.0

The vagrant box installed for developers will have the following additional items:

  • docker-compose v2.20
  • microk8s v1.27
  • jupyterlab
  • mkdocs
  • container - telegraf v1.28

At the end of installation, the software stack created in vagrant box can be visualised as shown in the following figure.

The upcoming instructions will help with the creation of base vagrant box.

#create a key pair\nssh-keygen -b 4096 -t rsa -f key -q -N \"\"\nmv key vagrant\nmv key.pub vagrant.pub\n\nvagrant up\n\n# let the provisioning be complete\n# replace the vagrant ssh key-pair with personal one\nvagrant ssh\n\n# install the oh-my-zsh\nsh -c \"$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\"\n# install plugins: history, autosuggestions,\ngit clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions\n\n# inside ~/.zshrc, modify the following line\nplugins=(git zsh-autosuggestions history cp tmux)\n# remove the vagrant default public key - first line of\n# /home/vagrant/.ssh/authorized_keys\n# exit vagrant guest machine and then\n# copy own private key to vagrant private key location\ncp vagrant .vagrant/machines/default/virtualbox/private_key\n\n# check\nvagrant ssh #should work\nvagrant halt\n\nvagrant package --base dtaas \\\n--info \"info.json\" --output dtaas.vagrant\n\n# Add box to the vagrant cache in ~/.vagrant.d/boxes directory\nvagrant box add --name dtaas ./dtaas.vagrant\n\n# You can use this box in other vagrant boxes using\n#config.vm.box = \"dtaas\"\n
"},{"location":"admin/vagrant/base-box.html#references","title":"References","text":"

Image sources: Ubuntu logo

"},{"location":"admin/vagrant/single-machine.html","title":"DTaaS on Single Vagrant Machine","text":"

These are installation instructions for running DTaaS software inside one vagrant Virtual Machine. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space to the vagrant box.

"},{"location":"admin/vagrant/single-machine.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"

Create the dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant file is the private SSH key and is needed for the next steps. Copy the vagrant SSH private key into the current directory (deploy/vagrant/single-machine). This shall be useful for logging into the vagrant machine created for the single-machine deployment.

"},{"location":"admin/vagrant/single-machine.html#target-installation-setup","title":"Target Installation Setup","text":"

The goal is to use the dtaas Vagrant box to install the DTaaS software on one single vagrant machine. A graphical illustration of a successful installation can be seen here.

There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighted in blue color.

Tip

The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com. The gitlab setup is outside the scope of this installation guide. Please refer to gitlab docker install for gitlab installation.

"},{"location":"admin/vagrant/single-machine.html#configure-server-settings","title":"Configure Server Settings","text":"

A dummy foo.com URL has been used for illustration. Please change this to your unique website URL.

Please follow the next steps to make this installation work in your local environment.

Update the Vagrantfile. The fields to update are listed below; a sketch of the relevant lines follows the list.

  1. Hostname (node.vm.hostname = \"foo.com\")
  2. MAC address (:mac => \"xxxxxxxx\"). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged.
  3. Other adjustments are optional.
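
For orientation, the relevant lines of the Vagrantfile look roughly like the sketch below. Only the hostname and MAC fields need changing; the surrounding structure is indicative and not a verbatim copy of the file in the repository.

config.vm.define \"dtaas\" do |node|\n  node.vm.box = \"dtaas\"\n  node.vm.hostname = \"foo.com\"    # 1. your website URL\n  node.vm.network \"public_network\", :mac => \"xxxxxxxx\"    # 2. only needed if DHCP assigns names by MAC\nend\n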
"},{"location":"admin/vagrant/single-machine.html#installation-steps","title":"Installation Steps","text":"

Execute the following commands from terminal

vagrant up\nvagrant ssh\n

Set a cronjob inside the vagrant virtual machine to remove the conflicting default route.

wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS/feature/distributed-demo/deploy/vagrant/route.sh\nsudo bash route.sh\n

If you only want to test the application and are not setting up a production instance, you can follow the instructions of single script install.

If you are not in a hurry and would rather have a production instance, follow the instructions of regular server installation setup to complete the installation.

"},{"location":"admin/vagrant/single-machine.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"admin/vagrant/two-machines.html","title":"DTaaS on Two Vagrant Machines","text":"

These are installation instructions for running DTaaS application in two vagrant virtual machines (VMs). In this setup, all the user workspaces shall be run on server1 while all the platform services will be run on server2.

The setup requires two server VMs with the following hardware configuration:

server1: 16GB RAM, 8 x64 vCPUs and 50GB Hard Disk space

server2: 6GB RAM, 3 x64 vCPUs and 50GB Hard Disk space

Under the default configuration, two user workspaces are provisioned on server1. The default installation setup also installs InfluxDB, Grafana, RabbitMQ and MQTT services on server2. If you would like to install more services, you can create shell scripts to install the same on server2.

"},{"location":"admin/vagrant/two-machines.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"

Create dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant is the private SSH key and is needed for the next steps. Copy vagrant SSH private key into the current directory (deploy/vagrant/two-machine). This shall be useful for logging into the vagrant machines created for two-machine deployment.

"},{"location":"admin/vagrant/two-machines.html#target-installation-setup","title":"Target Installation Setup","text":"

The goal is to use this dtaas vagrant box to install the DTaaS software on server1 and the default platform services on server2. Both the servers are vagrant machines.

There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighted in blue and red color.

A graphical illustration of a successful installation can be seen here.

In this case, both the vagrant boxes are spawned on one server using two vagrant configuration files, namely boxes.json and Vagrantfile.

Tip

The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com. The gitlab setup is outside the scope of this installation guide. Please refer to gitlab docker install for gitlab installation.

"},{"location":"admin/vagrant/two-machines.html#configure-server-settings","title":"Configure Server Settings","text":"

NOTE: Dummy foo.com and services.foo.com URLs have been used for illustration. Please change these to your unique website URLs.

The first step is to define the network identity of the two VMs. For that, you need server name, hostname and MAC address. The hostname is the network URL at which the server can be accessed on the web. Please follow these steps to make this work in your local environment.

Update the boxes.json. There are entries one for each server. The fields to update are:

  1. name - name of server1 (\"name\" = \"dtaas\")
  2. hostname - hostname of server1 (\"hostname\" = \"foo.com\")
  3. MAC address (:mac => \"xxxxxxxx\"). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged.
  4. name - name of server2 (\"name\" = \"services\")
  5. hostname - hostname of server2 (\"hostname\" = \"services.foo.com\")
  6. MAC address (:mac => \"xxxxxxxx\"). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged.
  7. Other adjustments are optional.
"},{"location":"admin/vagrant/two-machines.html#installation-steps","title":"Installation Steps","text":"

The installation instructions are given separately for each vagrant machine.

"},{"location":"admin/vagrant/two-machines.html#launch-dtaas-platform-default-services","title":"Launch DTaaS Platform Default Services","text":"

Follow the installation guide for services to install the DTaaS platform services.

After the services are up and running, you can see the following services active within server2 (services.foo.com).

service | external url
InfluxDB and visualization service | services.foo.com
Grafana visualization service | services.foo.com:3000
MQTT communication service | services.foo.com:1883
RabbitMQ communication service | services.foo.com:5672
RabbitMQ management service | services.foo.com:15672
"},{"location":"admin/vagrant/two-machines.html#install-dtaas-application","title":"Install DTaaS Application","text":"

Execute the following commands from terminal

vagrant up --provision dtaas\nvagrant ssh dtaas\nwget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS/feature/distributed-demo/deploy/vagrant/route.sh\nsudo bash route.sh\n

If you only want to test the application and are not setting up a production instance, you can follow the instructions of single script install.

If you are not in a hurry and would rather have a production instance, follow the instructions of regular server installation setup to complete the installation.

"},{"location":"admin/vagrant/two-machines.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"developer/index.html","title":"Developers Guide","text":"

This guide is to help developers get familiar with the project. Please see developer-specific Slides, Video, and Research paper.

"},{"location":"developer/index.html#development-environment","title":"Development Environment","text":"

Ideally, developers should work on Ubuntu/Linux. Other operating systems are not supported inherently and may require additional steps.

To start with, install the required software and git-hooks.

bash script/env.sh\nbash script/configure-git-hooks.sh\n

The git-hooks will ensure that your commits are formatted correctly and that the tests pass before you push the commits to remote repositories.

Be aware that the tests may take a long time to run. If you want to skip the tests or formatting, you can use the --no-verify flag on git commit or git push. Please use this option with care.
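
For example, to skip the hooks for a single work-in-progress commit and push:

git commit --no-verify -m \"wip: partial refactor\"\ngit push --no-verify\n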

There is a script to download all the docker containers used in the project. You can download them using

bash script/docker.sh\n

The docker images are large and are likely to consume about 5GB of bandwidth and 15GB of space. You will have to download the docker images on a really good network.

"},{"location":"developer/index.html#development-workflow","title":"Development Workflow","text":"

To manage collaboration by multiple developers on the software, a development workflow is in place. Each developer should follow these steps:

  1. Fork the main repository into your github account.
  2. Setup Code Climate and Codecov for your fork. Codecov does not require a secret token for public repositories.
  3. Install git-hooks for the project.
  4. Use the Fork, Branch, PR workflow; a shell sketch of this workflow follows this list.
  5. Work in your fork and open a PR from your working branch to your feature/distributed-demo branch. The PR will run all the github actions, code climate and codecov checks.
  6. Resolve all the issues identified in the previous step.
  7. If you have access to the integration server, try your working branch on the integration server.
  8. Once changes are verified, a PR should be made to the feature/distributed-demo branch of the upstream DTaaS repository.
  9. The PR will be merged after checks by either the project administrators or the maintainers.
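
A shell sketch of the Fork, Branch, PR workflow is given below. The placeholder <your-account> and the branch name my-feature are examples; the upstream repository and the feature/distributed-demo branch are those referred to above.

# one-time setup: clone your fork and register the upstream repository\ngit clone git@github.com:<your-account>/DTaaS.git\ncd DTaaS\ngit remote add upstream https://github.com/INTO-CPS-Association/DTaaS.git\n\n# start a working branch from the upstream feature branch\ngit fetch upstream\ngit checkout -b my-feature upstream/feature/distributed-demo\n\n# after committing your work, push the branch to your fork and open a PR\ngit push -u origin my-feature\n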

Remember that every PR should be meaningful and should satisfy a well-defined user story or improve the code quality.

"},{"location":"developer/index.html#code-quality","title":"Code Quality","text":"

The project code qualities are measured based on:

  • Linting issues identified by Code Climate
  • Test coverage report collected by Codecov
  • Successful github actions
"},{"location":"developer/index.html#code-climate","title":"Code Climate","text":"

Code Climate performs static analysis, linting and style checks. The quality checks performed by Code Climate ensure the best possible quality of code added to our project.

While any new issues introduced in your code would be shown in the PR page itself, to address any specific issue, you can visit the issues or code section of the codeclimate page.

It is highly recommended that any code you add does not introduce new quality issues. If they are introduced, they should be fixed immediately using the appropriate suggestions from Code Climate, or in the worst case, by adding an ignore flag (to be used with caution).

"},{"location":"developer/index.html#codecov","title":"Codecov","text":"

Codecov keeps track of the test coverage for the entire project. For information about testing and workflow related to that, please see the testing page.

"},{"location":"developer/index.html#github-actions","title":"Github Actions","text":"

The project has multiple github actions defined. All PRs and direct code commits must have successful status on github actions.

"},{"location":"developer/npm-packages.html","title":"Publish NPM packages","text":"

The DTaaS software is developed as a monorepo with multiple npm packages. Since publishing to npmjs is irrevocable and public, developers are encouraged to set up their own private npm registry for local development.

A private npm registry will help with local publish and unpublish steps.

"},{"location":"developer/npm-packages.html#setup-private-npm-registry","title":"Setup private npm registry","text":"

We recommend using verdaccio for this task. The following commands help you create a working private npm registry for development.

docker run -d --name verdaccio -p 4873:4873 verdaccio/verdaccio\nnpm adduser --registry http://localhost:4873 #create a user on the verdaccio registry\nnpm set registry http://localhost:4873/\nyarn config set registry \"http://localhost:4873\"\nyarn login --registry \"http://localhost:4873\" #login with the credentials for yarn utility\nnpm login #login with the credentials for npm utility\n

You can open http://localhost:4873 in your browser, login with the user credentials to see the packages published.

"},{"location":"developer/npm-packages.html#publish-to-private-npm-registry","title":"Publish to private npm registry","text":"

To publish a package to your local registry, do:

yarn install\nyarn build #the dist/ directory is needed for publishing step\nyarn publish --no-git-tag-version #increments version in package.json, publishes to registry\nyarn publish #increments version in package.json, publishes to registry and adds a git tag\n

The package version in package.json gets updated as well. You can open http://localhost:4873 in your browser, login with the user credentials to see the packages published. Please see verdaccio docs for more information.

If there is a need to unpublish a package, ex: @dtaas/runner@0.0.2, do:

npm unpublish  --registry http://localhost:4873/ @dtaas/runner@0.0.2\n

To install / uninstall this utility for all users, do:

sudo npm install  --registry http://localhost:4873 -g @dtaas/runner\nsudo npm list -g # should list @dtaas/runner in the packages\nsudo npm remove --global @dtaas/runner\n
"},{"location":"developer/npm-packages.html#use-the-packages","title":"Use the packages","text":"

The packages available in private npm registry can be used like the regular npm packages installed from npmjs.

For example, to use @dtaas/runner@0.0.2 package, do:

sudo npm install  --registry http://localhost:4873 -g @dtaas/runner\nrunner # launch the digital twin runner\n
"},{"location":"developer/client/client.html","title":"React Website","text":"

The Website is how the end-users interact with the software platform. The website is being developed as a React single page web application.

A dependency graph for the entire codebase of the react application is:

"},{"location":"developer/client/client.html#dependency-graphs","title":"Dependency Graphs","text":"

The figures are the dependency graphs generated from the code.

"},{"location":"developer/client/client.html#src-directory","title":"src directory","text":""},{"location":"developer/client/client.html#test-directory","title":"test directory","text":""},{"location":"developer/servers/lib/lib-ms.html","title":"Library Microservice","text":"

The Library Microservice provides users with access to files in user workspaces via an API. This microservice will interface with the local file system and Gitlab to provide uniform Gitlab-compliant API access to files.

Warning

This microservice is still under heavy development. It is still not a good replacement for the file server we are using now.

"},{"location":"developer/servers/lib/lib-ms.html#architecture-and-design","title":"Architecture and Design","text":"

The C4 level 2 diagram of this microservice is:

The GraphQL API provided by the library microservice shall be compliant with the Gitlab GraphQL service.

"},{"location":"developer/servers/lib/lib-ms.html#uml-diagrams","title":"UML Diagrams","text":""},{"location":"developer/servers/lib/lib-ms.html#class-diagram","title":"Class Diagram","text":"
classDiagram\n    class FilesResolver {\n    -filesService: IFilesService\n    +listDirectory(path: string): Promise<Project>\n    +readFile(path: string): Promise<Project>\n    }\n\n    class FilesServiceFactory {\n    -configService: ConfigService\n    -gitlabFilesService: GitlabFilesService\n    -localFilesService: LocalFilesService\n    +create(): IFilesService\n    }\n\n    class GitlabFilesService {\n    -configService: ConfigService\n    -parseArguments(path: string): Promise<domain: string; parsedPath: string>\n    -sendRequest(query: string): Promise<Project>\n    -executeQuery(path: string, getQuery: QueryFunction): Promise<Project>\n    +listDirectory(path: string): Promise<Project>\n    +readFile(path: string): Promise<Project>\n    }\n\n    class LocalFilesService {\n    -configService: ConfigService\n    -getFileStats(fullPath: string, file: string): Promise<Project>\n    +listDirectory(path: string): Promise<Project>\n    +readFile(path: string): Promise<Project>\n    }\n\n    class ConfigService {\n    +get(propertyPath: string): any\n    }\n\n    class IFilesService{\n    listDirectory(path: string): Promise<Project>\n    readFile(path: string): Promise<Project>\n    }\n\n    IFilesService <|-- FilesResolver: uses\n    IFilesService <|.. GitlabFilesService: implements\n    IFilesService <|.. LocalFilesService: implements\n    IFilesService <|-- FilesServiceFactory: creates\n    ConfigService <|-- FilesServiceFactory: uses\n    ConfigService <|-- GitlabFilesService: uses\n    ConfigService <|-- LocalFilesService: uses
"},{"location":"developer/servers/lib/lib-ms.html#sequence-diagram","title":"Sequence Diagram","text":"
sequenceDiagram\n    actor Client\n    actor Traefik\n\n    box LightGreen Library Microservice\n    participant FR as FilesResolver\n    participant FSF as FilesServiceFactory\n    participant CS as ConfigService\n    participant IFS as IFilesService\n    participant LFS as LocalFilesService\n    participant GFS as GitlabFilesService\n    end\n\n    participant FS as Local File System DB\n    participant GAPI as GitLab API DB\n\n    Client ->> Traefik : HTTP request\n    Traefik ->> FR : GraphQL query\n    activate FR\n\n    FR ->> FSF : create()\n    activate FSF\n\n    FSF ->> CS : getConfiguration(\"MODE\")\n    activate CS\n\n    CS -->> FSF : return configuration value\n    deactivate CS\n\n    alt MODE = Local\n    FSF ->> FR : return filesService (LFS)\n    deactivate FSF\n\n    FR ->> IFS : listDirectory(path) or readFile(path)\n    activate IFS\n\n    IFS ->> LFS : listDirectory(path) or readFile(path)\n    activate LFS\n\n    LFS ->> CS : getConfiguration(\"LOCAL_PATH\")\n    activate CS\n\n    CS -->> LFS : return local path\n    deactivate CS\n\n    LFS ->> FS : Access filesystem\n    alt Filesystem error\n        FS -->> LFS : Filesystem error\n        LFS ->> LFS : Throw new InternalServerErrorException\n        LFS -->> IFS : Error\n    else Successful file operation\n        FS -->> LFS : Return filesystem data\n        LFS ->> IFS : return Promise<Project>\n    end\n    deactivate LFS\n    else MODE = GitLab\n        FSF ->> FR : return filesService (GFS)\n        %%deactivate FSF\n\n    FR ->> IFS : listDirectory(path) or readFile(path)\n    activate IFS\n\n    IFS ->> GFS : listDirectory(path) or readFile(path)\n    activate GFS\n\n    GFS ->> GFS : parseArguments(path)\n    GFS ->> GFS : executeQuery()\n\n    GFS ->> CS : getConfiguration(\"GITLAB_API_URL\", \"GITLAB_TOKEN\")\n    activate CS\n\n    CS -->> GFS : return GitLab API URL and Token\n    deactivate CS\n\n    GFS ->> GAPI : sendRequest()\n    alt GitLab API error\n        GAPI -->> GFS : API error\n        GFS ->> GFS : Throw new Error(\"Invalid query\")\n        GFS -->> IFS : Error\n    else Successful GitLab API operation\n        GAPI -->> GFS : Return API response\n        GFS ->> IFS : return Promise<Project>\n    end\n    deactivate GFS\n    end\n\n    alt Error thrown\n    IFS ->> FR : return Error\n    deactivate IFS\n    FR ->> Traefik : return Error\n    Traefik ->> Client : HTTP error response\n    else Successful operation\n    IFS ->> FR : return Promise<Project>\n    deactivate IFS\n    FR ->> Traefik : return Promise<Project>\n    Traefik ->> Client : HTTP response\n    end\n\n    deactivate FR\n
"},{"location":"developer/servers/lib/lib-ms.html#dependency-graphs","title":"Dependency Graphs","text":"

The figures are the dependency graphs generated from the code.

"},{"location":"developer/servers/lib/lib-ms.html#src-directory","title":"src directory","text":""},{"location":"developer/servers/lib/lib-ms.html#test-directory","title":"test directory","text":""},{"location":"developer/system/architecture.html","title":"System Overview","text":""},{"location":"developer/system/architecture.html#user-requirements","title":"User Requirements","text":"

The DTaaS software platform users expect a single platform to support the complete DT lifecycle. To be more precise, the platform users expect the following features:

  1. Author \u2013 create different assets of the DT on the platform itself. This step requires use of some software frameworks and tools whose sole purpose is to author DT assets.
  2. Consolidate \u2013 consolidate the list of available DT assets and authoring tools so that user can navigate the library of reusable assets. This functionality requires support for discovery of available assets.
  3. Configure \u2013 support selection and configuration of DTs. This functionality also requires support for validation of a given configuration.
  4. Execute \u2013 provision computing infrastructure on demand to support execution of a DT.
  5. Explore \u2013 interact with a DT and explore the results stored both inside and outside the platform. Exploration may lead to analytical insights.
  6. Save \u2013 save the state of a DT that\u2019s already in the execution phase. This functionality is required for on demand saving and re-spawning of DTs.
  7. What-if analysis \u2013 explore alternative scenarios to (i) plan for an optimal next step, (ii) recalibrate new DT assets, (iii) automated creation of new DTs or their assets; these newly created DT assets may be used to perform scientifically valid experiments.
  8. Share \u2013 share a DT with other users of their organisation.
"},{"location":"developer/system/architecture.html#system-architecture","title":"System Architecture","text":"

The figure shows the system architecture of the DTaaS software platform.

"},{"location":"developer/system/architecture.html#system-components","title":"System Components","text":"

The users interact with the software platform using a website. The gateway is a single point of entry for direct access to the platform services. The gateway is responsible for controlling user access to the microservice components. The service mesh enables discovery of microservices, load balancing and authentication functionalities.

In addition, there are microservices for catering to author, store, explore, configure, execute and scenario analysis requirements. The microservices are complementary and composable; they fulfil core requirements of the system.

The microservices responsible for satisfying the user requirements are:

  1. The security microservice implements role-based access control (RBAC) in the platform.
  2. The accounting microservice is responsible for keeping track of the platform, DT asset and infrastructure usage. Any licensing, usage restrictions need to be enforced by the accounting microservice. Accounting is a pre-requisite to commercialisation of the platform. Due to significant use of external infrastructure and resources via the platform, the accounting microservice needs to interface with accounting systems of the external services.

  3. The data microservice is a frontend to all the databases integrated into the platform. A time-series database and a graph database are essential. These two databases store timeseries data from PT, events on PT/DT, commands sent by DT to PT. The PTs use these databases even when their respective DTs are not in the execute phase.

  4. The visualisation microservice is again a frontend to visualisation software that are natively supported inside the platform. Any visualisation software running either on external systems or on client browsers do not need to interact with this microservice. They can directly use the data provided by the data microservice.
"},{"location":"developer/system/architecture.html#c4-architectural-diagrams","title":"C4 Architectural Diagrams","text":"

The C4 architectural diagrams of the DTaaS software are presented here.

"},{"location":"developer/system/architecture.html#level-1","title":"Level 1","text":"

This Level 1 diagram only shows the users and the roles they play in the DTaaS software.

"},{"location":"developer/system/architecture.html#level-2","title":"Level 2","text":"

This simplified version of Level 2 diagram shows the software containers of the DTaaS software.

If you are interested, please take a look at the detailed diagram.

Please note that the given diagram only covers DT Lifecycle, Reusable Assets and Execution Manager.

"},{"location":"developer/system/architecture.html#mapping","title":"Mapping","text":"

A mapping of the C4 level 2 containers to components identified in the system architecture is also available in the table.

System Component | Container(s)
Gateway | Traefik Gateway
Unified Interface | React Webapplication
Reusable Assets | Library Microservice
Data | MQTT, InfluxDB, and RabbitMQ (not shown in the C4 Level 2 diagram)
Visualization | InfluxDB (not shown in the C4 Level 2 diagram)
DT Lifecycle | DT Lifecycle Manager and DT Configuration Validator
Security | Gitlab OAuth
Accounting | None
Execution Manager | Execution Manager
"},{"location":"developer/system/current-status.html","title":"Current Status","text":"

The DTaaS software platform is currently under development. Crucial system components are in place with ongoing development work focusing on increased automation and feature enhancement. The figure below shows the current status of the development work.

"},{"location":"developer/system/current-status.html#user-security","title":"User Security","text":"

There are authentication mechanisms in place for the react website and the Traefik gateway.

The react website component uses Gitlab for user authentication using OAuth protocol.

"},{"location":"developer/system/current-status.html#gateway-authentication","title":"Gateway Authentication","text":"

The Traefik gateway has HTTP basic authentication enabled by default. This authentication, on top of an HTTPS connection, provides good protection against unauthorized use.

Warning

Please note that HTTP basic authentication over a non-TLS connection is insecure.

There is also a possibility of using self-signed mTLS certificates. The current security functionality is based on signed Transport Layer Security (TLS) certificates issued to users. The TLS certificate based mutual TLS (mTLS) authentication protocol provides better security than the usual username and password combination. The mTLS authentication takes place between the user's browser and the platform gateway. The gateway federates all the backend services. The service discovery, load balancing, and health checks are carried out by the gateway based on a dynamic reconfiguration mechanism.

Note

The mTLS is not enabled in the default install. Please use the scripts in ssl/ directory to generate the required certificates for users and Traefik gateway.

"},{"location":"developer/system/current-status.html#user-workspaces","title":"User Workspaces","text":"

All users have dedicated dockerized workspaces. These docker images are based on container images published by the mltooling group.

Thus DT experts can develop DTs from existing DT components and share them with other users. A file server has been setup to act as a DT asset repository. Each user gets space to store private DT assets and also gets access to shared DT assets. Users can synchronize their private DT assets with external git repositories. In addition, the asset repository transparently gets mapped to user workspaces within which users can perform DT lifecycle operations. There is also a library microservice which in the long-run will replace the file server.

Users can run DTs in their workspaces and also permit remote access to other users. There is already shared access to internal and external services. With these two provisions, users can treat live DTs as service components in their own software systems.

"},{"location":"developer/system/current-status.html#platform-services","title":"Platform Services","text":"

There are four external services integrated with the DTaaS software platform. They are: InfluxDB, Grafana, RabbitMQ and MQTT.

These services can be used by DTs and PTs for communication, storing and visualization of data. There can also be monitoring services setup based on these services.

"},{"location":"developer/system/current-status.html#development-priorities","title":"Development Priorities","text":"

The development priorities for the DTaaS software development team are:

  • DT Runner (API Interface to DT)
  • Multi-user and microservice security
  • Increased automation of installation procedures
  • DT Configuration DSL in the form of YAML schema
  • UI for DT creation
  • DT examples

Your contributions and collaboration are highly welcome.

"},{"location":"developer/testing/intro.html","title":"Testing","text":""},{"location":"developer/testing/intro.html#common-questions-on-testing","title":"Common Questions on Testing","text":""},{"location":"developer/testing/intro.html#what-is-software-testing","title":"What is Software Testing","text":"

Software testing is a procedure to investigate the quality of a software product in different scenarios. It can also be stated as the process of verifying and validating that a software program or application works as expected and meets the business and technical requirements that guided design and development.

"},{"location":"developer/testing/intro.html#why-software-testing","title":"Why Software Testing","text":"

Software testing is required to point out the defects and errors that were made during different development phases. Software testing also ensures that the product under test works as expected in all different cases \u2013 stronger the test suite, stronger is our confidence in the product that we have built. One important benefit of software testing is that it facilitates the developers to make incremental changes to source code and make sure that the current changes are not breaking the functionality of the previously existing code.

"},{"location":"developer/testing/intro.html#what-is-tdd","title":"What is TDD","text":"

TDD stands for Test Driven Development. It is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards. The goal of TDD can be viewed as specification and not validation. In other words, it\u2019s one way to think through your requirements or design before you write your functional code.

"},{"location":"developer/testing/intro.html#what-is-bdd","title":"What is BDD","text":"

BDD stands for \u201cBehaviour Driven Development\u201d. It is a software development process that emerged from TDD. It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. This provides software development and management teams with shared tools and a shared process to collaborate on software development. BDD is largely facilitated through the use of a simple domain-specific language (DSL) using natural language constructs (e.g., English-like sentences) that can express the behavior and the expected outcomes. Mocha and Cucumber testing libraries are built around the concepts of BDD.

"},{"location":"developer/testing/intro.html#testing-workflow","title":"Testing workflow","text":"

(Ref: Ham Vocke, The Practical Test Pyramid)

We follow a testing workflow in accordance with the test pyramid diagram given above, starting with isolated tests and moving towards complete integration for any new feature changes. The different types of tests (in the order that they should be performed) are explained below:

"},{"location":"developer/testing/intro.html#unit-tests","title":"Unit Tests","text":"

Unit testing is a level of software testing where individual units/ components of a software are tested. The objective of Unit Testing is to isolate a section of code and verify its correctness.

Ideally, each test case is independent from the others. Substitutes such as method stubs, mock objects, and spies can be used to assist testing a module in isolation.

"},{"location":"developer/testing/intro.html#benefits-of-unit-testing","title":"Benefits of Unit Testing","text":"
  • Unit testing increases confidence in changing/ maintaining code. If good unit tests are written and if they are run every time any code is changed, we will be able to promptly catch any defects introduced due to the change.
  • If codes are already made less interdependent to make unit testing possible, the unintended impact of changes to any code is less.
  • The cost, in terms of time, effort and money, of fixing a defect detected during unit testing is less than that of fixing defects detected at higher levels.
"},{"location":"developer/testing/intro.html#unit-tests-in-dtaas","title":"Unit Tests in DTaaS","text":"

Each component of the DTaaS project uses a unique technology stack. Thus the packages used for unit tests are different. Please check the test/ directory of a component to figure out the unit test packages used.

"},{"location":"developer/testing/intro.html#integration-tests","title":"Integration tests","text":"

Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. In DTaaS, we use an integration server for software development as well as such tests.

The existing integration tests are done at the component level. There are no integration tests between the components. This task has been postponed to the future.

"},{"location":"developer/testing/intro.html#end-to-end-tests","title":"End-to-End tests","text":"

Testing any code changes through the end-user interface of your software is essential to verify that your code has the desired effect for the user. End-to-End tests in DTaaS require a functional setup. For more information, visit here.

There are no end-to-end tests in the DTaaS yet. This task has been postponed to the future.

"},{"location":"developer/testing/intro.html#feature-tests","title":"Feature Tests","text":"

A software feature can be defined as the changes made in the system to add new functionality or modify the existing functionality. Each feature is said to have characteristics that are designed to be useful, intuitive and effective. It is important to test a new feature when it has been added. We also need to make sure that it does not break the functionality of already existing features. Hence feature tests prove to be useful.

The DTaaS project does not have any feature tests yet. Cucumber shall be used in the future to implement feature tests.

"},{"location":"developer/testing/intro.html#references","title":"References","text":"

Justin Searls and Kevin Buchanan, Contributing Tests wiki. This wiki has a good explanation of TDD and test doubles.

"},{"location":"user/features.html","title":"Overview","text":""},{"location":"user/features.html#advantages","title":"Advantages","text":"

The DTaaS software platform provides certain advantages to users:

  • Support for different kinds of Digital Twins
  • CFD, Simulink, co-simulation, FEM, ROM, ML etc.
  • Integrates with other Digital Twin frameworks
  • Facilitate availability of Digital Twin as a Service
  • Collaboration and reuse
  • Private workspaces for verification of reusable assets, trial run DTs
  • Cost effectiveness
"},{"location":"user/features.html#software-features","title":"Software Features","text":"

Each installation of DTaaS platform comes with the features highlighted in the following picture.

All the users have dedicated workspaces. These workspaces are dockerized versions of Linux Desktops. The user desktops are isolated so the installations and customizations done in one user workspace do not affect the other user workspaces.

Each user workspace comes with some development tools pre-installed. These tools are directly accessible from a web browser. The following tools are available at present:

Tool Advantage Jupyter Lab Provides flexible creation and use of digital twins and their components from the web browser. All the native Jupyterlab use cases are supported here. Jupyter Notebook Useful for web-based management of their files (library assets) VS Code in the browser A popular IDE for software development. Users can develop their digital twin-related assets here. ungit An interactive git client. Users can work with git repositories from the web browser

In addition, users have access to an xfce-based remote desktop via a VNC client. The VNC client is available right in the web browser. The xfce-supported desktop software can also be run in their workspace.

The DTaaS software platform has some pre-installed services available. The currently available services are:

Service Advantage InfluxDB Time-series database primarily for storing time-series data from physical twins. The digital twins can use already existing data. Users can also create visualization dashboards for their digital twins. RabbitMQ Communication broker for communication between physical and digital twins Grafana Visualization dashboards for their digital twins. MQTT Lightweight data transfer broker for IoT devices / physical twins feeding data into digital twins.

In addition, the workspaces are connected to the Internet so all the Digital Twins running in the workspace can interact with both the internal and external services.
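
As a minimal sketch of using one of these services from a workspace terminal, the following command subscribes to a topic on the platform's MQTT broker. The broker hostname and topic below are placeholders, the default MQTT port 1883 is assumed, the mosquitto-clients package may need to be installed in your workspace first, and credentials may be required depending on the broker configuration.

mosquitto_sub -h <mqtt-broker-host> -p 1883 -t 'dtaas/demo/#'\n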

The users can publish and reuse the digital twin assets available on the platform. In addition, users can run their digital twins and make these live digital twins available as services to their clients. The clients need not be users of the DTaaS software installation.

"},{"location":"user/motivation.html","title":"Motivation","text":"

How can DT software platforms enable users to collaborate to:

  • Build digital twins (DTs)
  • Use DTs themselves
  • Share DTs with other users
  • Provide existing DTs as a Service to other users

In addition, how can the DT software platforms:

  • Support DT lifecycle
  • Scale up rather than scale down (flexible convention over configuration)
"},{"location":"user/motivation.html#existing-approaches","title":"Existing Approaches","text":"

There are quite a few solutions proposed in the recent past to solve this problem. Some of them are:

  • Focus on data from Physical Twins (PTs) to perform analysis, diagnosis, planning etc\u2026
  • Share DT assets across the upstream, downstream etc\u2026.
  • Evaluate different models of PT
  • DevOps for Cyber Physical Systems (CPS)
  • Scale DT / execution of DT / ensemble of related DTs
  • Support for PT product lifecycle
"},{"location":"user/motivation.html#our-approach","title":"Our Approach","text":"
  • Support for transition from existing workflows to DT frameworks
  • Create DTs from reusable assets
  • Enable users to share DT assets
  • Offer DTs as a Service
  • Integrate the DTs with external software systems
  • Separate configurations of independent DT components
"},{"location":"user/digital-twins/create.html","title":"Create a Digital Twin","text":"

The first step in digital twin creation is to use the available assets in your workspace. If you have assets / files in your computer that need to be available in the DTaaS workspace, then please follow the instructions provided in library assets.

There are dependencies among the library assets. These dependencies are shown below.

A digital twin can only be created by linking the assets in a meaningful way. This relationship can be expressed using a mathematical equation:

where D denotes data, M denotes models, F denotes functions, T denotes tools, C denotes the DT configuration and DT is a symbolic notation for the digital twin itself. The expression denotes composition of a DT from the D, M, T and F assets. A superscript asterisk (*) indicates zero or more instances of an asset and a superscript plus (+) indicates one or more instances of an asset.

The DT configuration specifies the relevant assets to use and the potential parameters to be set for these assets. If a DT needs to use services supported by the platform, such as RabbitMQ or InfluxDB, the DT configuration needs to contain access credentials for these services.

This kind of generic DT definition is based on the DT examples seen in the wild. You are at liberty to deviate from this definition of DT. The only requirement is the ability to run the DT from either the command line or the desktop.

Tip

If you are stepping into the world of Digital Twins, you might not have distinct digital twin assets. You are likely to have one directory of everything in which you run your digital twin. In such a case, we recommend that you upload this monolithic digital twin into the digital_twins/your_digital_twin_name directory.
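
As a minimal sketch, if the files of such a monolithic digital twin are already reachable from your workspace (for example after a file transfer), they can be placed in the recommended location from a terminal; the source directory path below is hypothetical.

mkdir -p /workspace/digital_twins/your_digital_twin_name\ncp -r /path/to/my_monolithic_dt/. /workspace/digital_twins/your_digital_twin_name/\n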

"},{"location":"user/digital-twins/create.html#example","title":"Example","text":"

The Examples repository contains a co-simulation setup for mass spring damper. This example illustrates the potential of using co-simulation for digital twins.

The file system contents for this example are:

workspace/\n  data/\n    mass-spring-damper\n        input/\n        output/\n\n  digital_twins/\n    mass-spring-damper/\n      cosim.json\n      time.json\n      lifecycle/\n        analyze\n        clean\n        evolve\n        execute\n        save\n        terminate\n      README.md\n\n  functions/\n  models/\n    MassSpringDamper1.fmu\n    MassSpringDamper2.fmu\n\n  tools/\n  common/\n    data/\n    functions/\n    models/\n    tools/\n        maestro-2.3.0-jar-with-dependencies.jar\n

The workspace/data/mass-spring-damper/ contains input and output data for the mass-spring-damper digital twin.

The two FMU models needed for this digital twin are in models/ directory.

The co-simulation digital twin needs the Maestro co-simulation orchestrator. Since this is a reusable asset for all co-simulation-based DTs, the tool has been placed in the common/tools/ directory.

The actual digital twin configuration is specified in the digital_twins/mass-spring-damper directory. The co-simulation configuration is specified in two JSON files, namely cosim.json and time.json. A small explanation of the digital twin for its users can be placed in digital_twins/mass-spring-damper/README.md.

The launch program for this digital twin is in digital_twins/mass-spring-damper/lifecycle/execute. This launch program runs the co-simulation digital twin. The co-simulation runs till completion and then ends. The programs in digital_twins/mass-spring-damper/lifecycle are responsible for lifecycle management of this digital twin. The lifecycle page provides more explanation on these programs.

Execution of a Digital Twin

A frequent question arises on the run time characteristics of a digital twin. The natural intuition is to say that a digital twin must operate as long as its physical twin is in operation. If a digital twin runs for a finite time and then ends, can it be called a digital twin? The answer is a resounding YES. The Industry 4.0 usecases seen among SMEs have digital twins that run for a finite time. These digital twins are often run at the discretion of the user.

You can run this digital twin by,

  1. Go to Workbench tools page of the DTaaS website and open VNC Desktop. This opens a new tab in your browser
  2. A page with VNC Desktop and a connect button comes up. Click on Connect. You are now connected to the Linux Desktop of your workspace.
  3. Open a Terminal (black rectangular icon in the top left region of your tab) and type the following commands.
  4. Download the example files by following the instructions given on examples overview.

  5. Go to the digital twin directory and run

cd /workspace/examples/digital_twins/mass-spring-damper\nlifecycle/execute\n

The last command executes the mass-spring-damper digital twin and stores the co-simulation output in data/mass-spring-damper/output.

"},{"location":"user/digital-twins/lifecycle.html","title":"Digital Twin Lifecycle","text":"

The physical products in the real world have a product lifecycle. A simplified four-stage product life is illustrated here.

A digital twin tracking a physical product (twin) needs to track and evolve in conjunction with the corresponding physical twin.

The possible activities undertaken in each lifecycle phase are illustrated in the figure.

(Ref: Minerva, R, Lee, GM and Crespi, N (2020) Digital Twin in the IoT context: a survey on technical features, scenarios and architectural models. Proceedings of the IEEE, 108 (10). pp. 1785-1824. ISSN 0018-9219.)

"},{"location":"user/digital-twins/lifecycle.html#lifecycle-phases","title":"Lifecycle Phases","text":"

The four-phase lifecycle has been extended to a lifecycle with eight phases. The new phase names and the typical activities undertaken in each phase are outlined in this section.

A DT lifecycle consists of explore, create, execute, save, analyse, evolve and terminate phases.

Phase Main Activities explore selection of suitable assets based on the user needs and checking their compatibility for the purposes of creating a DT. create specification of DT configuration. If DT already exists, there is no creation phase at the time of reuse. execute automated / manual execution of a DT based on its configuration. The DT configuration must be checked before starting the execution phase. analyse checking the outputs of a DT and making a decision. The outputs can be text files or visual dashboards. evolve reconfigure DT primarily based on analysis. save involves saving the state of DT to enable future recovery. terminate stop the execution of DT.

A digital twin faithfully tracking the physical twin lifecycle will have to support all the phases. It is also possible for digital twin engineers to add more phases to the digital twins they are developing. Thus the DTaaS software platform needs to accommodate the needs of different DTs.

A potential linear representation of the tasks undertaken in a digital twin lifecycle is shown here.

Again, this is only one possible pathway. Users are at liberty to alter the sequence of steps.

It is possible to map the lifecycle phases identified so far with the Build-Use-Share approach of the DTaaS software platform.

Even though not mandatory, having a matching directory structure makes it easy for users to create and manage their DTs within the DTaaS. It is recommended to have the following structure:

workspace/\n  digital_twins/\n    digital-twin-1/\n      lifecycle/\n        analyze\n        clean\n        evolve\n        execute\n        save\n        terminate\n

A dedicated program exists for each phase of the DT lifecycle. Each program can be as simple as a script that launches other programs or sends messages to a live digital twin.
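
As a minimal sketch, the recommended skeleton above can be created from a workspace terminal as follows; replace digital-twin-1 with the name of your digital twin.

mkdir -p /workspace/digital_twins/digital-twin-1/lifecycle\ncd /workspace/digital_twins/digital-twin-1/lifecycle\ntouch analyze clean evolve execute save terminate\nchmod +x analyze clean evolve execute save terminate\n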

"},{"location":"user/digital-twins/lifecycle.html#example-lifecycle-scripts","title":"Example Lifecycle Scripts","text":"

Here are the example programs / scripts to manage three phases in the lifecycle of mass-spring-damper DT.

lifecycle/execute
#!/bin/bash\nmkdir -p /workspace/data/mass-spring-damper/output\n#cd ..\njava -jar /workspace/common/tools/maestro-2.3.0-jar-with-dependencies.jar \\\nimport -output /workspace/data/mass-spring-damper/output \\\n--dump-intermediate sg1 cosim.json time.json -i -vi FMI2 \\\noutput-dir>debug.log 2>&1\n

The execute phase uses the DT configuration, FMU models and the Maestro tool to execute the digital twin. The script also stores the output of the co-simulation in /workspace/data/mass-spring-damper/output.

It is possible for a DT not to support a specific lifecycle phase. This intention can be specified with an empty script and a helpful message if deemed necessary.

lifecycle/analyze
#!/bin/bash\nprintf \"operation is not supported on this digital twin\"\n

The lifecycle programs can call other programs in the code base. In the case of the lifecycle/terminate program, it calls another script to do the necessary job.

lifecycle/terminate
#!/bin/bash\nlifecycle/clean\n
"},{"location":"user/examples/index.html","title":"DTaaS Examples","text":"

There are some example digital twins created for the DTaaS software. Use these examples and follow the steps given in the Examples section to experience features of the DTaaS software platform and understand best practices for managing digital twins within the platform.

"},{"location":"user/examples/index.html#copy-examples","title":"Copy Examples","text":"

The first step is to copy all the example code into your user workspace within the DTaaS. Use the given shell script to copy all the examples into /workspace/examples directory.

wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS-examples/main/getExamples.sh\nbash getExamples.sh\n
"},{"location":"user/examples/index.html#example-list","title":"Example List","text":"

The digital twins provided in examples vary in their complexity. It is best to use the examples in the following order.

  1. Mass Spring Damper
  2. Water Tank Fault Injection
  3. Water Tank Model Swap
  4. Desktop Robotti and RabbitMQ

DTaaS examples

"},{"location":"user/examples/drobotti-rmqfmu/index.html","title":"Desktop Robotti with RabbitMQ","text":""},{"location":"user/examples/drobotti-rmqfmu/index.html#overview","title":"Overview","text":"

This example demonstrates bidirectional communication between a mock physical twin and a digital twin of a mobile robot (Desktop Robotti). The communication is enabled by RabbitMQ Broker.

"},{"location":"user/examples/drobotti-rmqfmu/index.html#example-structure","title":"Example Structure","text":"

The mock physical twin of the mobile robot is created using two python scripts:

  1. data/drobotti_rmqfmu/rmq-publisher.py
  2. data/drobotti_rmqfmu/consume.py

The mock physical twin sends its physical location in (x,y) coordinates and expects the Cartesian distance calculated by the digital twin.

The rmq-publisher.py reads the recorded (x,y) physical coordinates of mobile robot. The recorded values are stored in a data file. These (x,y) values are published to RabbitMQ Broker. The published (x,y) values are consumed by the digital twin.

The consume.py subscribes to RabbitMQ Broker and waits for the calculated distance value from the digital twin.

The digital twin consists of an FMI-based co-simulation, where Maestro is used as the co-orchestration engine. In this case, the co-simulation is created by using two FMUs - RMQ FMU (rabbitmq-vhost.fmu) and distance FMU (distance-from-zero.fmu). The RMQ FMU receives the (x,y) coordinates from rmq-publisher.py and sends the calculated distance value to consume.py. The RMQ FMU uses the RabbitMQ broker for communication with the mock mobile robot, i.e., rmq-publisher.py and consume.py. The distance FMU is responsible for calculating the distance between (0,0) and (x,y). The RMQ FMU and distance FMU exchange values during co-simulation.

"},{"location":"user/examples/drobotti-rmqfmu/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"

This example uses two models, one tool, one data asset, and two scripts to create a mock physical twin. The specific assets used are:

Asset Type Names of Assets Visibility Reuse in Other Examples Models distance-from-zero.fmu Private No rmq-vhost.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes Data drobotti_playback_data.csv private No Mock PT rmq-publisher.py Private No consume.py Private No

This DT has many configuration files. The coe.json and multimodel.json are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.

The RabbitMQ access credentials need to be provided in multimodel.json. The rabbitMQ-credentials.json provides RabbitMQ access credentials for mock PT python scripts. Please add your credentials in both these files.

"},{"location":"user/examples/drobotti-rmqfmu/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool and pip packages for python scripts Execute Runs both DT and mock PT Clean Clears run logs and outputs"},{"location":"user/examples/drobotti-rmqfmu/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/drobotti_rmqfmu\n

If required, change the execute permission of lifecycle scripts you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/drobotti-rmqfmu/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 in the workspace. Also installs the required python pip packages for the rmq-publisher.py and consume.py scripts.

lifecycle/create\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#execute","title":"Execute","text":"

Run the python scripts to start the mock physical twin. Also run the Digital Twin. Since this is a co-simulation-based digital twin, the Maestro co-simulation tool executes the co-simulation using the two FMU models.

lifecycle/execute\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/digital_twins/drobotti_rmqfmu directory.

"},{"location":"user/examples/drobotti-rmqfmu/index.html#terminate-phase","title":"Terminate phase","text":"

Terminate to clean up the debug files and co-simulation output files.

lifecycle/terminate\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#references","title":"References","text":"

The RabbitMQ FMU github repository contains complete documentation and source code of the rmq-vhost.fmu.

More information about the case study is available in:

Frasheri, Mirgita, et al. \"Addressing time discrepancy between digital\nand physical twins.\" Robotics and Autonomous Systems 161 (2023): 104347.\n
"},{"location":"user/examples/incubator/index.html","title":"Incubator Demo","text":"

Installation of required python packages for the Incubator demo

pip install pyhocon\npip install influxdb_client\npip install scipy\npip install pandas\npip install pika\npip install oomodelling\npip install control\npip install filterpy\npip install sympy\npip install docker\n

Start the rabbitmq server and create a rabbitmq account with:

name: incubator\npassword: incubator\nwith access to the virtual host \"/\"\n
docker run -d --name rabbitmq-server -p 15672:15672 -p 5672:5672 rabbitmq:3-management\ndocker exec rabbitmq-server rabbitmqctl add_user incubator incubator\ndocker exec rabbitmq-server rabbitmqctl set_permissions -p \"/\" incubator \".*\" \".*\" \".*\"\n
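
As an optional check (not part of the original steps), you can confirm that the user and its permissions were created:

docker exec rabbitmq-server rabbitmqctl list_users\ndocker exec rabbitmq-server rabbitmqctl list_permissions -p /\n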

Access InfluxDB running on another machine. Remember that InfluxDB works only on a distinct sub-domain name like influx.foo.com, but not on foo.com/influx.

ssh -i /vagrant/vagrant -fNT -L 40000:localhost:80 vagrant@influx.server2.com\n

Update the rabbitmq-server and influxdb configuration in

/home/vagrant/dt/1/incubator/example_digital-twin_incubator/software/startup.conf\n

Select (comment / uncomment) functions in

/home/vagrant/dt/1/incubator/example_digital-twin_incubator/software/startup/start_all_services.py\n

Start the program

export PYTHONPATH=\"${PYTHONPATH}:/home/vagrant/dt/1/incubator/example_digital-twin_incubator/software/incubator\"\ncd /home/vagrant/dt/1/incubator/example_digital-twin_incubator/software\npython3 -m startup.start_all_services\n
"},{"location":"user/examples/mass-spring-damper/index.html","title":"Mass Spring Damper","text":""},{"location":"user/examples/mass-spring-damper/index.html#overview","title":"Overview","text":"

The mass spring damper digital twin (DT) comprises two mass spring dampers and demonstrates how a co-simulation based DT can be used within DTaaS.

"},{"location":"user/examples/mass-spring-damper/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/mass-spring-damper/index.html#example-structure","title":"Example Structure","text":"

There are two simulators included in the study, each representing a mass-spring-damper system. The first simulator calculates the displacement and speed of the first mass for a given force acting on it. The second simulator calculates the force given the displacement and speed of the second mass. By coupling these simulators, the evolution of the positions of the two masses is computed.

"},{"location":"user/examples/mass-spring-damper/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"

This example uses two models and one tool. The specific assets used are:

Asset Type Names of Assets Visibility Reuse in Other Examples Models MassSpringDamper1.fmu Private Yes MassSpringDamper2.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes

The cosim.json and time.json are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.

"},{"location":"user/examples/mass-spring-damper/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/mass-spring-damper/output directory Clean Clears run logs and outputs"},{"location":"user/examples/mass-spring-damper/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/mass-spring-damper\n

If required, change the execute permission of lifecycle scripts you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/mass-spring-damper/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 in the workspace.

lifecycle/create\n
"},{"location":"user/examples/mass-spring-damper/index.html#execute","title":"Execute","text":"

Run the Digital Twin. Since this is a co-simulation-based digital twin, the Maestro co-simulation tool executes the co-simulation using the two FMU models.

lifecycle/execute\n
"},{"location":"user/examples/mass-spring-damper/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/data/mass-spring-damper/output directory.

You can also view run logs in the /workspace/examples/digital_twins/mass-spring-damper directory.

"},{"location":"user/examples/mass-spring-damper/index.html#terminate-phase","title":"Terminate phase","text":"

Terminate to clean up the debug files and co-simulation output files.

lifecycle/terminate\n
"},{"location":"user/examples/mass-spring-damper/index.html#references","title":"References","text":"

More information about co-simulation techniques and mass spring damper case study are available in:

Gomes, Cl\u00e1udio, et al. \"Co-simulation: State of the art.\"\narXiv preprint arXiv:1702.00686 (2017).\n

The source code for the models used in this DT are available in mass spring damper github repository.

"},{"location":"user/examples/water_tank_FI/index.html","title":"Water Tank Fault Injection","text":""},{"location":"user/examples/water_tank_FI/index.html#overview","title":"Overview","text":"

This example shows a fault injection (FI) enabled digital twin (DT). A live DT is subjected to simulated faults received from the environment. The simulated faults are specified as part of the DT configuration and can be changed for new instances of DTs.

In this co-simulation-based DT, a water tank case study is used; the co-simulation consists of a tank and a controller, whose goal is to keep the water level in the tank between Level-1 and Level-2. The faults are injected into the output of the water tank controller (Watertankcontroller-c.fmu) from 12 to 20 time units, such that the tank output is closed for a period of time, leading to the water level in the tank increasing beyond the desired level (Level-2).

"},{"location":"user/examples/water_tank_FI/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/water_tank_FI/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_FI/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"

This example uses two models and one tool. The specific assets used are:

Asset Type Names of Assets Visibility Reuse in Other Examples Models watertankcontroller-c.fmu Private Yes singlewatertank-20sim.fmu Private Yes Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes

The multimodelFI.json and simulation-config.json are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.

The faults are defined in wt_fault.xml.

"},{"location":"user/examples/water_tank_FI/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_FI/output directory Clean Clears run logs and outputs"},{"location":"user/examples/water_tank_FI/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/water_tank_FI\n

If required, change the execute permission of lifecycle scripts you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/water_tank_FI/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 and pip dependencies. The pandas and matplotlib packages are the pip dependencies installed.

lifecycle/create\n
"},{"location":"user/examples/water_tank_FI/index.html#execute","title":"Execute","text":"

Run the co-simulation. Generates the co-simulation output.csv file at /workspace/examples/data/water_tank_FI/output.

lifecycle/execute\n
"},{"location":"user/examples/water_tank_FI/index.html#analyze-phase","title":"Analyze phase","text":"

Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_FI/output/plots/.

lifecycle/analyze\n
"},{"location":"user/examples/water_tank_FI/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/data/water_tank_FI/output directory.

You can also view run logs in the /workspace/examples/digital_twins/water_tank_FI directory.

"},{"location":"user/examples/water_tank_FI/index.html#terminate-phase","title":"Terminate phase","text":"

Clean up the temporary files and delete the output plot.

lifecycle/terminate\n
"},{"location":"user/examples/water_tank_FI/index.html#references","title":"References","text":"

More details on this case-study can be found in the paper:

M. Frasheri, C. Thule, H. D. Macedo, K. Lausdahl, P. G. Larsen and\nL. Esterle, \"Fault Injecting Co-simulations for Safety,\"\n2021 5th International Conference on System Reliability and Safety (ICSRS),\nPalermo, Italy, 2021.\n

The fault-injection plugin is an extension to the Maestro co-orchestration engine that enables injecting tampered values into the inputs and outputs of FMUs in an FMI-based co-simulation. More details on the plugin can be found in the fault injection git repository. The source code for this example is also in the same github repository in an example directory.

"},{"location":"user/examples/water_tank_swap/index.html","title":"Water Tank Model Swap","text":""},{"location":"user/examples/water_tank_swap/index.html#overview","title":"Overview","text":"

This example shows multi-stage execution and dynamic reconfiguration of a digital twin (DT). Two features of DTs are demonstrated here:

  • Fault injection into live DT
  • Dynamic auto-reconfiguration of live DT

The co-simulation methodology is used to construct this DT.

"},{"location":"user/examples/water_tank_swap/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_swap/index.html#configuration-of-assets","title":"Configuration of assets","text":"

This example uses four models and one tool. The specific assets used are:

Asset Type Names of Assets Visibility Reuse in Other Examples Models Watertankcontroller-c.fmu Private Yes Singlewatertank-20sim.fmu Private Yes Leak_detector.fmu Private No Leak_controller.fmu Private No Tool maestro-2.3.0-jar-with-dependencies.jar Common Yes

This DT has many configuration files. The DT is executed in two stages. There exist separate DT configuration files for each stage. The following table shows the configuration files and their purpose.

Configuration file name Execution Stage Purpose mm1.json stage-1 DT configuration wt_fault.xml, FaultInject.mabl stage-1 faults injected into DT during stage-1 mm2.json stage-2 DT configuration simulation-config.json Both stages Configuration for specifying DT execution time and output logs"},{"location":"user/examples/water_tank_swap/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_swap/output directory Analyze Process the co-simulation output and produce plots Clean Clears run logs, outputs and plots"},{"location":"user/examples/water_tank_swap/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/water_tank_swap\n

If required, change the permission of files you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/water_tank_swap/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 and pip dependencies. The matplotlib pip package is also installed.

lifecycle/create\n
"},{"location":"user/examples/water_tank_swap/index.html#execute","title":"Execute","text":"

This DT has a two-stage execution. In the first stage, a co-simulation is executed. The Watertankcontroller-c.fmu and Singlewatertank-20sim.fmu models are used to execute the DT. During this stage, faults are injected into one of the models (Watertankcontroller-c.fmu) and the system performance is checked.

In the second stage, another co-simulation is run in which three FMUs are used: watertankcontroller, singlewatertank-20sim, and leak_detector. There is an in-built monitor in the Maestro tool. This monitor is enabled during this stage and a swap condition is set at the beginning of the second stage. When the swap condition is satisfied, Maestro swaps out the Watertankcontroller-c.fmu model and swaps in the Leak_controller.fmu model. This swapping of FMU models demonstrates the dynamic reconfiguration of a DT.

The end of execution phase generates the co-simulation output.csv file at /workspace/examples/data/water_tank_swap/output.

lifecycle/execute\n
"},{"location":"user/examples/water_tank_swap/index.html#analyze-phase","title":"Analyze phase","text":"

Process the output of co-simulation to produce a plot at: /workspace/examples/data/water_tank_swap/output/plots/.

lifecycle/analyze\n
"},{"location":"user/examples/water_tank_swap/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/data/water_tank_swap/output directory.

You can also view run logs in the /workspace/examples/digital_twins/water_tank_swap directory.

"},{"location":"user/examples/water_tank_swap/index.html#terminate-phase","title":"Terminate phase","text":"

Clean up the temporary files and delete the output plot.

lifecycle/terminate\n
"},{"location":"user/examples/water_tank_swap/index.html#references","title":"References","text":"

The complete source of this example is available in the model swap github repository.

The runtime model (FMU) swap mechanism demonstrated by the experiment is detailed in the paper:

Ejersbo, Henrik, et al. \"fmiSwap: Run-time Swapping of Models for\nCo-simulation and Digital Twins.\" arXiv preprint arXiv:2304.07328 (2023).\n

The runtime reconfiguration of co-simulation by modifying the Functional Mockup Units (FMUs) used is further detailed in the paper:

Ejersbo, Henrik, et al. \"Dynamic Runtime Integration of\nNew Models in Digital Twins.\" 2023 IEEE/ACM 18th Symposium on\nSoftware Engineering for Adaptive and Self-Managing Systems\n(SEAMS). IEEE, 2023.\n
"},{"location":"user/servers/lib/LIB-MS.html","title":"Library Microservice","text":"

The library microservice provides an API interface to the reusable assets library. This is only for expert users who need to integrate the DTaaS with their own IT systems. Regular users can safely skip this page.

The lib microservice is responsible for handling and serving the contents of library assets of the DTaaS platform. It provides API endpoints for clients to query and fetch these assets.

This document provides instructions for using the library microservice.

Please see the assets page for suggested storage conventions for your library assets.

Once the assets are stored in the library, you can access the server's endpoint by typing in the following URL: http://foo.com/lib.

The URL opens a GraphQL playground. You can check the query schema and try sample queries here. You can also send GraphQL queries as HTTP POST requests and get responses.

"},{"location":"user/servers/lib/LIB-MS.html#api-queries","title":"API Queries","text":"

The library microservice serves two API calls:

  • Provide a list of contents for a directory
  • Fetch a file from the available files

The API calls are accepted over GraphQL and HTTP API endpoints. The formats of the accepted queries are:

"},{"location":"user/servers/lib/LIB-MS.html#provide-list-of-contents-for-a-directory","title":"Provide list of contents for a directory","text":"

To retrieve a list of files in a directory, use the following GraphQL query.

Replace path with the desired directory path.

send requests to: https://foo.com/lib

GraphQL QueryGraphQL ResponseHTTP RequestHTTP Response
query {\n  listDirectory(path: \"user1\") {\n    repository {\n      tree {\n        blobs {\n          edges {\n            node {\n              name\n              type\n            }\n          }\n        }\n        trees {\n          edges {\n            node {\n              name\n              type\n            }\n          }\n        }\n      }\n    }\n  }\n}\n
{\n  \"data\": {\n    \"listDirectory\": {\n      \"repository\": {\n        \"tree\": {\n          \"blobs\": {\n            \"edges\": []\n          },\n          \"trees\": {\n            \"edges\": [\n              {\n                \"node\": {\n                  \"name\": \"common\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"data\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"digital twins\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"functions\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"models\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"tools\",\n                  \"type\": \"tree\"\n                }\n              }\n            ]\n          }\n        }\n      }\n    }\n  }\n}\n
POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n  \"query\":\"query {\\n  listDirectory(path: \\\"user1\\\") {\\n    repository {\\n      tree {\\n        blobs {\\n          edges {\\n            node {\\n              name\\n              type\\n            }\\n          }\\n        }\\n        trees {\\n          edges {\\n            node {\\n              name\\n              type\\n            }\\n          }\\n        }\\n      }\\n    }\\n  }\\n}\"\n}\n
HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 306\nContent-Type: application/json; charset=utf-8\nDate: Tue, 26 Sep 2023 20:26:49 GMT\nX-Powered-By: Express\n{\"data\":{\"listDirectory\":{\"repository\":{\"tree\":{\"blobs\":{\"edges\":[]},\"trees\":{\"edges\":[{\"node\":{\"name\":\"data\",\"type\":\"tree\"}},{\"node\":{\"name\":\"digital twins\",\"type\":\"tree\"}},{\"node\":{\"name\":\"functions\",\"type\":\"tree\"}},{\"node\":{\"name\":\"models\",\"type\":\"tree\"}},{\"node\":{\"name\":\"tools\",\"type\":\"tree\"}}]}}}}}}\n
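
The same HTTP request can be sent from a terminal with curl. This is a minimal sketch assuming the library microservice is served at https://foo.com/lib as above; the query has been trimmed to list only the sub-directories.

curl -X POST https://foo.com/lib \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"query\":\"query { listDirectory(path: \\\"user1\\\") { repository { tree { trees { edges { node { name type } } } } } } }\"}'\n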
"},{"location":"user/servers/lib/LIB-MS.html#fetch-a-file-from-the-available-files","title":"Fetch a file from the available files","text":"

This query receives a file path and sends the file contents to the user in response.

To check this query, create a file files/user2/data/welcome.txt with the content hello world.
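
A minimal sketch of creating this file from a terminal, assuming the current directory is the root that the library microservice serves:

mkdir -p files/user2/data\necho 'hello world' > files/user2/data/welcome.txt\n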

GraphQL RequestGraphQL ResponseHTTP RequestHTTP Response
query {\n  readFile(path: \"user2/data/sample.txt\") {\n    repository {\n      blobs {\n        nodes {\n          name\n          rawBlob\n          rawTextBlob\n        }\n      }\n    }\n  }\n}\n
{\n  \"data\": {\n    \"readFile\": {\n      \"repository\": {\n        \"blobs\": {\n          \"nodes\": [\n            {\n              \"name\": \"sample.txt\",\n              \"rawBlob\": \"hello world\",\n              \"rawTextBlob\": \"hello world\"\n            }\n          ]\n        }\n      }\n    }\n  }\n}\n
POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 217\n{\n  \"query\":\"query {\\n  readFile(path: \\\"user2/data/welcome.txt\\\") {\\n    repository {\\n      blobs {\\n        nodes {\\n          name\\n          rawBlob\\n          rawTextBlob\\n        }\\n      }\\n    }\\n  }\\n}\"\n}\n
HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Wed, 27 Sep 2023 09:17:18 GMT\nX-Powered-By: Express\n{\"data\":{\"readFile\":{\"repository\":{\"blobs\":{\"nodes\":[{\"name\":\"welcome.txt\",\"rawBlob\":\"hello world\",\"rawTextBlob\":\"hello world\"}]}}}}}\n

The path refers to the file path to look at. For example, user1 looks at the files of user1; user1/functions looks at the contents of user1's functions/ directory.

"},{"location":"user/servers/lib/assets.html","title":"Reusable Assets","text":"

The reusability of digital twin assets makes it easy for users to work with the digital twins. The reusability of assets is a fundamental feature of the platform.

"},{"location":"user/servers/lib/assets.html#kinds-of-reusable-assets","title":"Kinds of Reusable Assets","text":"

The DTaaS software categorizes all the reusable library assets into five categories:

"},{"location":"user/servers/lib/assets.html#functions","title":"Functions","text":"

The functions responsible for pre- and post-processing of: data inputs, data outputs, control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.

"},{"location":"user/servers/lib/assets.html#data","title":"Data","text":"

The data sources and sinks available to digital twins. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events and commands which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.

"},{"location":"user/servers/lib/assets.html#models","title":"Models","text":"

The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.

"},{"location":"user/servers/lib/assets.html#tools","title":"Tools","text":"

The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platform, i.e., an operating system, a virtual machine like the Java virtual machine, or a docker container. The tools tend to be platform specific, making them less reusable than models. A tool can be packaged to run on local or distributed virtual machine environments, thus allowing selection of the most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages is a pre-packaged combination of models and tools put together to create a ready-to-use Digital Twin.

"},{"location":"user/servers/lib/assets.html#digital-twins","title":"Digital Twins","text":"

These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.

"},{"location":"user/servers/lib/assets.html#file-system-structure","title":"File System Structure","text":"

Each user has their assets put into five different directories named above. In addition, there will also be common library assets that all users have access to. A simplified example of the structure is as follows:

workspace/\n  data/\n    data1/ (ex: sensor)\n      filename (ex: sensor.csv)\n      README.md\n    data2/ (ex: turbine)\n      README.md (remote source; no local file)\n    ...\n  digital_twins/\n    digital_twin-1/ (ex: incubator)\n      code and config\n      README.md (usage instructions)\n    digital_twin-2/ (ex: mass spring damper)\n      code and config\n      README.md (usage instructions)\n    digital_twin-3/ (ex: model swap)\n      code and config\n      README.md (usage instructions)\n    ...\n  functions/\n    function1/ (ex: graphs)\n      filename (ex: graphs.py)\n      README.md\n    function2/ (ex: statistics)\n      filename (ex: statistics.py)\n      README.md\n    ...\n  models/\n    model1/ (ex: spring)\n      filename (ex: spring.fmu)\n      README.md\n    model2/ (ex: building)\n      filename (ex: building.skp)\n      README.md\n    model3/ (ex: rabbitmq)\n      filename (ex: rabbitmq.fmu)\n      README.md\n    ...\n  tools/\n    tool1/ (ex: maestro)\n      filename (ex: maestro.jar)\n      README.md\n    ...\n  common/\n    data/\n    functions/\n    models/\n    tools/\n

Tip

The DTaaS is agnostic to the format of your assets. The only requirement is that they are files which can be uploaded on the Library page. Any directories can be compressed as one file and uploaded. You can decompress the file into a directory from a Terminal or xfce Desktop available on the Workbench page.
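
As a minimal sketch, assuming a directory named my-assets on your computer and the tar utility available in the workspace:

# on your computer: compress the directory into a single file\ntar -czf my-assets.tar.gz my-assets/\n# in the workspace terminal, from the directory where the archive was uploaded via the Library page\ntar -xzf my-assets.tar.gz -C /workspace/models/\n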

A recommended file system structure for storing assets is also available in DTaaS examples.

"},{"location":"user/servers/lib/assets.html#upload-assets","title":"Upload Assets","text":"

Users can upload assets into their workspace using Library page of the website.

You can go into a directory and click on the upload button to upload a file or a directory into your workspace. This asset is then available in all the workbench tools you can use. You can also create new assets on the page by clicking on the new drop-down menu. This is a simple web interface which allows you to create text-based files. You need to upload other files using the upload button.

The user workbench has the following services:

  • Jupyter Notebook and Lab
  • VS Code
  • XFCE Desktop Environment available via VNC
  • Terminal

Users can also bring their DT assets into user workspaces from outside using any of the above-mentioned services. Developers using git repositories can clone from and push to remote git servers. Users can also use widely used file transfer protocols such as FTP and SCP to bring the required DT assets into their workspaces.
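
As a minimal sketch run from a workspace terminal, with the repository URL and remote host as hypothetical placeholders:

# clone a git repository into the workspace\ngit clone https://github.com/<your-org>/<your-assets-repo>.git /workspace/models/<your-assets-repo>\n# or pull files from a remote machine you have SSH access to\nscp -r <username>@<remote-host>:/path/to/my-assets /workspace/models/\n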

"},{"location":"user/website/index.html","title":"DTaaS Website Screenshots","text":"

This page contains a screenshot driven preview of the website serving the DTaaS software platform.

"},{"location":"user/website/index.html#login-to-enter-the-dtaas-software-platform","title":"Login to enter the DTaaS software platform","text":"

The screen presents an HTTP authentication form where you can enter your user credentials. If the DTaaS is being served over the secure HTTPS protocol, the username and password are transmitted securely.

"},{"location":"user/website/index.html#start-the-authentication","title":"Start the Authentication","text":"

You are now logged into the DTaaS server. The DTaaS uses a third-party authentication protocol known as OAuth. This protocol provides secure access to a DTaaS installation if users have working, active accounts at the selected OAuth service provider. The DTaaS uses Gitlab as the OAuth provider.

You can see the Gitlab sign-in button. A click on this button takes you to the Gitlab instance providing authentication for DTaaS.

"},{"location":"user/website/index.html#authenticate-at-gitlab","title":"Authenticate at Gitlab","text":"

The username and password authentication takes place on the gitlab website. Enter your username and password in the login form.

"},{"location":"user/website/index.html#permit-dtaas-to-use-gitlab","title":"Permit DTaaS to Use Gitlab","text":"

The DTaaS application needs your permission to use your Gitlab account for authentication. Click on Authorize button.

After successful authentication, you will be redirected to the Library page of the DTaaS website.

"},{"location":"user/website/index.html#overview-of-menu-items","title":"Overview of menu items","text":"

The menu is hidden by default. Only the icons of menu items are visible. You can click on the icon in the top-left corner of the page to see the menu.

There are three menu items:

Library: for management of reusable library assets. You can upload, download, create and modify new files on this page.

Digital Twins: for management of digital twins. You are presented with the Jupyter Lab page from which you can run the digital twins.

Workbench: Not all digital twins can be managed within Jupyter Lab. You have more tools at your disposal on this page.

"},{"location":"user/website/index.html#library-tabs-and-their-help-text","title":"Library tabs and their help text","text":"

You can see the file manager and five tabs above the library manager. Each tab provides help text to guide users in the use of different directories in their workspace.

Functions

The functions responsible for pre- and post-processing of: data inputs, data outputs, control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.

Data

The data sources and sinks available to digital twins. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events and commands which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.

Models

The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.

Tools

The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platform, i.e., an operating system, a virtual machine like the Java virtual machine, or a docker container. The tools tend to be platform specific, making them less reusable than models. A tool can be packaged to run on local or distributed virtual machine environments, thus allowing selection of the most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages is a pre-packaged combination of models and tools put together to create a ready-to-use Digital Twin.

Digital

These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.

In addition to the five directories, there is also a common directory in which five sub-directories exist. These sub-directories are: data, functions, models, tools and digital twins.

Common

The common directory again has five sub-directories: data, functions, models, tools and digital twins. The assets common to all users are placed in common.

The items used by more than one user are placed in common. The items in the common directory are available to all users. Further explanation of the directory structure and placement of reusable assets within the directory structure is on the assets page.

The file manager is based on Jupyter notebook and all the tasks you can perform in the Jupyter Notebook can be undertaken here.

"},{"location":"user/website/index.html#digital-twins-page","title":"Digital Twins page","text":"

The digital twins page has three tabs and the central pane opens Jupyter lab. There are three tabs with helpful instructions on the suggested tasks you can undertake in the Create - Execute - Analyze life cycle phases of digital twin. You can see more explanation on the life cycle phases of digital twin.

Create

Create digital twins from tools provided within user workspaces. Each digital twin will have one directory. It is suggested that users provide one bash shell script to run their digital twin. Users can create the required scripts and other files using the tools provided on the Workbench page.

Execute

Digital twins are executed from within user workspaces. The given bash script gets executed from digital twin directory. Terminal-based digital twins can be executed from VSCode and graphical digital twins can be executed from VNC GUI. The results of execution can be placed in the data directory.

Analyze

The analysis of digital twins requires running the digital twin's analysis script from the user workspace. The execution results placed within the data directory are processed by analysis scripts and the results are placed back in the data directory. These scripts can be executed from VSCode, and graphical results can be viewed from the VNC GUI.

The reusable assets (files) seen in the file manager are available in the Jupyter Lab. In addition, there is a git plugin installed in the Jupyter Lab using which you can link your files with the external git repositories.

"},{"location":"user/website/index.html#workbench","title":"Workbench","text":"

The workbench page provides links to four integrated tools.

The hyperlinks open in new browser tab. The screenshots of pages opened in new browser are:

Bug

The Terminal hyperlink does not always work reliably. If you want a terminal, please use the tools drop-down in the Jupyter Notebook.

"},{"location":"user/website/index.html#finally-logout","title":"Finally logout","text":"

You have to close the browser in order to completely exit the DTaaS software platform.

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"What is DTaaS?","text":"

The Digital Twin as a Service (DTaaS) software platform is useful to Build, Use and Share digital twins (DTs).

Build: The DTs are built on the software platform using the reusable DT components available on the platform.

Use: Use the DTs on the software platform.

Share: Share ready to use DTs with other users. It is also possible to share the services offered by one DT with other users.

There is an overview of the software available in the form of slides, video, and feature walkthrough.

"},{"location":"index.html#license","title":"License","text":"

This software is owned by The INTO-CPS Association and is available under the INTO-CPS License.

The DTaaS software platform uses Tr\u00e6fik, ML Workspace, Grafana, InfluxDB, MQTT and RabbitMQ open-source components. These software components have their own licenses.

"},{"location":"FAQ.html","title":"Frequently Asked Questions","text":""},{"location":"FAQ.html#abreviations","title":"Abreviations","text":"Term Full Form DT Digital Twin DTaaS Digital Twin as a Service PT Physical Twin"},{"location":"FAQ.html#general-questions","title":"General Questions","text":"What is DTaaS?

DTaaS is a software platform on which you can create and run digital twins. Please see the features page to get a sense of the things you can do in DTaaS.

Are there any Key Performance / Capability Indicators for DTaaS? Key Performance Indicator Value Processor Two AMD EPYC 7443 24-Core Processors Maximum Storage Capacity 4TB SSD, RAID 0 configuration Storage Type File System Maximum file size 10 GB Data transfer speed 100 Mbps Data Security Yes Data Privacy Yes Redundancy None Availability It is a matter of human resources. If you have human resources to maintain DTaaS round the clock, upwards 95% is easily possible. Do you provide licensed software like Matlab?

Licensed software is not available on the software platform. But users have private workspaces which are based on a Linux xfce Desktop environment. Users can install software in their workspaces. The licensed software installed by one user is not available to another user.

"},{"location":"FAQ.html#digital-twin-models","title":"Digital Twin Models","text":"Can DTaaS create new DT models?

DTaaS is not a model creation tool by itself, but you can put a model creation tool inside DTaaS and create new models with it. You can run Linux desktop / terminal tools inside DTaaS, so you can create models inside DTaaS and run them using any tools that run on Linux. Windows-only tools cannot run in DTaaS.

How can DTaaS help to design geometric model? Does it support 3D modeling and simulation?

Well, DTaaS by itself does not produce any models. DTaaS only provides a platform and an ecosystem of services to facilitate running digital twins as services. Since each user has a Linux OS at their disposal, they can also run digital twins that have a graphical interface. In summary, DTaaS is neither a modeling nor a simulation tool. If you need these kinds of tools, you need to bring them onto the platform. For example, if you need Matlab for your work, you need to bring the licensed Matlab software.

Commercial DT platforms on the market provide modelling and simulation alongside integration and UI. DTaaS is not able to do any modelling or simulation on its own like other commercial platforms. Is this a correct understanding?

Yes, that is a correct understanding.

Can DTaaS support only the information models (or behavioral models) or some other kind of models?

The DTaaS as such is agnostic to the kind of models you use. DTaaS can run all kinds of models. This includes behavioral and data models. As long as you have models and the matching solvers that can run in Linux OS, you are good to go in DTaaS. In some cases, models and solvers (tools) are bundled together to form monolithic DTs. The DTaaS does not limit you from running such DTs as well. DTaaS does not provide dedicated solvers. But if you can install a solver in your workspace, then you don't need the platform to provide one.

Does it support XML-based representation and ontology representation?

Currently No. We are looking for users needing this capability. If you have concrete requirements and an example, we can discuss a way of realizing it in DTaaS.

"},{"location":"FAQ.html#communication-between-physical-twin-and-digital-twin","title":"Communication Between Physical Twin and Digital Twin","text":"How would you measure a physical entity like shape, size, weight, structure, chemical attributes etc. using DTaaS? Any specific technology used in this case?

The real measurements are made at the physical twin and are then communicated to the digital twin. A digital twin platform like DTaaS can only facilitate the communication of these measurements from the physical twin. DTaaS provides InfluxDB, RabbitMQ and Mosquitto services for this purpose. These three are probably the most widely used services for digital twin communication. Having said that, DTaaS allows you to use other communication technologies and services hosted elsewhere on the Internet.
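
As a minimal sketch, a physical twin could publish a measurement to the Mosquitto broker hosted on DTaaS and a digital twin could subscribe to it. The hostname matches the services table in this documentation; the topic and value are placeholders, and the -u / -P options would be needed if the broker is configured with authentication.

mosquitto_pub -h services.foo.com -p 1883 -t dtaas/pt/temperature -m 23.4\nmosquitto_sub -h services.foo.com -p 1883 -t dtaas/pt/temperature\n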

How can real-time data be distinguished from static data, and what is the procedure for identifying dynamic data? Is there any UI or specific tool used here?

DTaaS cannot understand the static or dynamic nature of data. It can facilitate storing names, units and any other text description of interesting quantities (weight of batter, voltage output etc). It can also store the data being sent by the physical twin. The distinction between static and dynamic data needs to be made by the user. Only metadata can reveal more about the nature of the data. A tool can probably help in very specific cases, but you need metadata. If there is a human being making this distinction, then the need for metadata goes down but does not completely go away. In some of the DT platforms supported by manufacturers, there is a tight integration between data and model; in this case the tool itself takes care of the metadata. The DTaaS is a generic platform which can support execution of digital twins. If a tool can be executed on a Linux desktop / commandline, the tool can be supported within DTaaS. The tool (e.g. Matlab) itself can take care of the metadata requirements.

How can DTaaS control the physical entity? Which technologies it uses for controlling the physical world?

At a very abstract level, there is communication from the physical entity to the digital entity and back to the physical entity. How this communication should happen is decided by the person designing the digital entity. DTaaS provides communication services that can help you do this communication with relative ease. You can use the InfluxDB, RabbitMQ and Mosquitto services hosted on DTaaS for two-way communication between digital and physical entities.

"},{"location":"FAQ.html#data-management","title":"Data Management","text":"Does DTaaS support data collection from different sources like hardware, software and network? Is there any user interface or any tracking instruments used for data collection?

The DTaaS provides InfluxDB, RabbitMQ and MQTT services. Both the physical twin and the digital twin can utilize these services for communication. The IoT (time-series) data can be collected using the InfluxDB and MQTT broker services. There is a user interface for InfluxDB which can be used to analyze the collected data. Users can also manually upload their data files into DTaaS.

Which transmission protocol does DTaaS allow?

InfluxDB, RabbitMQ, MQTT and anything else that can be used from Cloud service providers.

Does DTaaS support multisource information and combined multi sensor input data? Can it provide analysis and decision-supporting inferences?

You can store information from multiple sources. The InfluxDB service hosted on DTaaS already has a dedicated Influx / Flux query language for doing sensor fusion, analysis and inferences.

Which kinds of visualization technologies can DTaaS support (e.g. graphical, geometry, image, VR/AR representation)?

Graphical, geometric and images. If you need specific licensed software for the visualization, you will have to bring the license for it. DTaaS does not support AR/VR.

Can DTaaS collect data directly from sensors?

Yes

Is DTaaS able to transmit data to cloud in real time?

Yes

"},{"location":"FAQ.html#platform-native-services-on-dtaas-platform","title":"Platform Native Services on DTaaS Platform","text":"Is DTaaS able to detect the anomalies about-to-fail components and prescribe solutions?

This is the job of a digital twin. If you have a ready to use digital twin that does the job, DTaaS allows others to use your solution.

"},{"location":"FAQ.html#comparison-with-other-dt-platforms","title":"Comparison with other DT Platforms","text":"All the DT platforms seem to provide different features. Is there a comparison chart?

Here is a qualitative comparison of different DT integration platforms:

Legend: high performance (H), mid performance (M) and low performance (L)

DT Platforms License DT Development Process Connectivity Security Processing power, performance and Scalability Data Storage Visualization Modeling and Simulation Microsoft Azure DT Commercial Cloud H H H M H H H AWS IOT Greengrass Open source commercial H H H M H H H Eclipse Ditto Open source M H M H H L L Asset Administration Shell Open source H H L H M L M PTC Thingworx Commercial H H H H H M M GE Predix Commercial M H H M L M L AU's DTaaS Open source H H L L M M M

Adapted by Tanusree Roy from Tables 4 and 5 of the following paper.

Ref: Naseri, F., Gil, S., Barbu, C., Cetkin, E., Yarimca, G., Jensen, A. C., ... & Gomes, C. (2023). Digital twin of electric vehicle battery systems: Comprehensive review of the use cases, requirements, and platforms. Renewable and Sustainable Energy Reviews, 179, 113280.

All the comparisons between DT platforms seem so confusing. Why?

The fundamental confusion comes from the fact that different DT platforms (Azure DT, GE Predix) provide different kinds of DT capabilities. You can run all kinds of models natively in GE Predix. In fact, you can run models even next to (on) PTs using GE Predix. But you cannot natively do that in the Azure DT service. You have to do the leg work of integrating with other Azure services or third-party services to get the kind of capabilities that GE Predix natively provides in one interface. The takeaway is that we pick horses for courses.

"},{"location":"FAQ.html#create-assets","title":"Create Assets","text":"Can DTaaS be used to create new DT assets?

The core feature of DTaaS software is to help users create DTs from assets already available in the library. However, it is possible for users to take advantage of services available in their workspace to install asset authoring tools in their own workspace. These authoring tools can then be used to create and publish new assets. User workspaces are private and are not shared with other users. Thus any licensed software tools installed in their workspace are only available to them.

"},{"location":"FAQ.html#gdpr-concerns","title":"GDPR Concerns","text":"Does your platform adhere to GDPR compliance standards? If so, how?

The DTaaS software platform does not store any personal information of users. It only stores usernames to identify users, and these usernames do not contain enough information to deduce the true identity of users.

Which security measures are deployed? How is data encrypted (if exists)?

The default installation requires an HTTPS-terminating reverse proxy server between the user and the DTaaS software installation. The administrators of the DTaaS software can also install HTTPS certificates into the application. The codebase can serve the application over HTTPS, and users also have the option of installing their own certificates obtained from certificate authorities such as LetsEncrypt.

What security measures does your cloud provider offer?

The current installation of DTaaS software runs on Aarhus University servers. The university network offers firewall access control to servers so that only permitted user groups have access to the network and physical access to the server.

How is user access controlled and authenticated?

There is a two-level authentication mechanism in place in each default installation of DTaaS. The first level is HTTP basic authentication over a secure HTTPS connection. The second level is the OAuth PKCE authentication flow for each user. The OAuth authentication is provided by a Gitlab instance. The DTaaS does not store the account and authentication information of users.

Does your platform manage personal data? How is data classified and tagged based on sensitivity? Who has access to the critical data?

The platform does not store personal data of users.

How are identities and roles managed within the platform?

There are two roles for users on the platform: administrator and user. The user roles are managed by the administrator.

"},{"location":"LICENSE.html","title":"License","text":"

--- Start of Definition of INTO-CPS Association Public License ---

/*

  • This file is part of the INTO-CPS Association.

  • Copyright (c) 2017-CurrentYear, INTO-CPS Association (ICA),

  • c/o Peter Gorm Larsen, Aarhus University, Department of Engineering,
  • Finlandsgade 22, 8200 Aarhus N, Denmark.

  • All rights reserved.

  • THIS PROGRAM IS PROVIDED UNDER THE TERMS OF GPL VERSION 3 LICENSE OR

  • THIS INTO-CPS ASSOCIATION PUBLIC LICENSE (ICAPL) VERSION 1.0.
  • ANY USE, REPRODUCTION OR DISTRIBUTION OF THIS PROGRAM CONSTITUTES
  • RECIPIENT'S ACCEPTANCE OF THE INTO-CPS ASSOCIATION PUBLIC LICENSE OR
  • THE GPL VERSION 3, ACCORDING TO RECIPIENTS CHOICE.

  • The INTO-CPS tool suite software and the INTO-CPS Association

  • Public License (ICAPL) are obtained from the INTO-CPS Association, either
  • from the above address, from the URLs: http://www.into-cps.org or
  • in the INTO-CPS tool suite distribution.
  • GNU version 3 is obtained from: http://www.gnu.org/copyleft/gpl.html.

  • This program is distributed WITHOUT ANY WARRANTY; without

  • even the implied warranty of MERCHANTABILITY or FITNESS
  • FOR A PARTICULAR PURPOSE, EXCEPT AS EXPRESSLY SET FORTH
  • IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF
  • THE INTO-CPS ASSOCIATION PUBLIC LICENSE.

  • See the full ICAPL conditions for more details.

*/

--- End of INTO-CPS Association Public License Header ---

The ICAPL is a public license for the INTO-CPS tool suite with three modes/alternatives (GPL, ICA-Internal-EPL, ICA-External-EPL) for use and redistribution, in source and/or binary/object-code form:

  • GPL. Any party (member or non-member of the INTO-CPS Association) may use and redistribute INTO-CPS tool suite under GPL version 3.

  • Silver Level members of the INTO-CPS Association may also use and redistribute the INTO-CPS tool suite under ICA-Internal-EPL conditions.

  • Gold Level members of the INTO-CPS Association may also use and redistribute The INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL conditions.

Definitions of the INTO-CPS Association Public license modes:

  • GPL = GPL version 3.

  • ICA-Internal-EPL = These INTO-CPA Association Public license conditions together with Internally restricted EPL, i.e., EPL version 1.0 with the Additional Condition that use and redistribution by a member of the INTO-CPS Association is only allowed within the INTO-CPS Association member's own organization (i.e., its own legal entity), or for a member of the INTO-CPS Association paying a membership fee corresponding to the size of the organization including all its affiliates, use and redistribution is allowed within/between its affiliates.

  • ICA-External-EPL = These INTO-CPA Association Public license conditions together with Externally restricted EPL, i.e., EPL version 1.0 with the Additional Condition that use and redistribution by a member of the INTO-CPS Association, or by a Licensed Third Party Distributor having a redistribution agreement with that member, to parties external to the INTO-CPS Association member\u2019s own organization (i.e., its own legal entity) is only allowed in binary/object-code form, except the case of redistribution to other members the INTO-CPS Association to which source is also allowed to be distributed.

[This has the consequence that an external party who wishes to use the INTO-CPS Association in source form together with its own proprietary software in all cases must be a member of the INTO-CPS Association].

In all cases of usage and redistribution by recipients, the following conditions also apply:

a) Redistributions of source code must retain the above copyright notice, all definitions, and conditions. It is sufficient if the ICAPL Header is present in each source file, if the full ICAPL is available in a prominent and easily located place in the redistribution.

b) Redistributions in binary/object-code form must reproduce the above copyright notice, all definitions, and conditions. It is sufficient if the ICAPL Header and the location in the redistribution of the full ICAPL are present in the documentation and/or other materials provided with the redistribution, if the full ICAPL is available in a prominent and easily located place in the redistribution.

c) A recipient must clearly indicate its chosen usage mode of ICAPL, in accompanying documentation and in a text file ICA-USAGE-MODE.txt, provided with the distribution.

d) Contributor(s) making a Contribution to the INTO-CPS Association thereby also makes a Transfer of Contribution Copyright. In return, upon the effective date of the transfer, ICA grants the Contributor(s) a Contribution License of the Contribution. ICA has the right to accept or refuse Contributions.

Definitions:

\"Subsidiary license conditions\" means:

The additional license conditions depending on the by the recipient chosen mode of ICAPL, defined by GPL version 3.0 for GPL, and by EPL for ICA-Internal-EPL and ICA-External-EPL.

\"ICAPL\" means:

INTO-CPS Association Public License version 1.0, i.e., the license defined here (the text between \"--- Start of Definition of INTO-CPS Association Public License ---\" and \"--- End of Definition of INTO-CPS Association Public License ---\", or later versions thereof.

\"ICAPL Header\" means:

INTO-CPS Association Public License Header version 1.2, i.e., the text between \"--- Start of Definition of INTO-CPS Association Public License ---\" and \"--- End of INTO-CPS Association Public License Header ---, or later versions thereof.

\"Contribution\" means:

a) in the case of the initial Contributor, the initial code and documentation distributed under ICAPL, and

b) in the case of each subsequent Contributor: i) changes to the INTO-CPS tool suite, and ii) additions to the INTO-CPS tool suite;

where such changes and/or additions to the INTO-CPS tool suite originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the INTO-CPS tool suite by such Contributor itself or anyone acting on such Contributor's behalf.

For Contributors licensing the INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL conditions, the following conditions also hold:

Contributions do not include additions to the distributed Program which: (i) are separate modules of software distributed in conjunction with the INTO-CPS tool suite under their own license agreement, (ii) are separate modules which are not derivative works of the INTO-CPS tool suite, and (iii) are separate modules of software distributed in conjunction with the INTO-CPS tool suite under their own license agreement where these separate modules are merged with (weaved together with) modules of The INTO-CPS tool suite to form new modules that are distributed as object code or source code under their own license agreement, as allowed under the Additional Condition of internal distribution according to ICA-Internal-EPL and/or Additional Condition for external distribution according to ICA-External-EPL.

\"Transfer of Contribution Copyright\" means that the Contributors of a Contribution transfer the ownership and the copyright of the Contribution to the INTO-CPS Association, the INTO-CPS Association Copyright owner, for inclusion in the INTO-CPS tool suite. The transfer takes place upon the effective date when the Contribution is made available on the INTO-CPS Association web site under ICAPL, by such Contributors themselves or anyone acting on such Contributors' behalf. The transfer is free of charge. If the Contributors or the INTO-CPS Association so wish, an optional Copyright transfer agreement can be signed between the INTO-CPS Association and the Contributors.

\"Contribution License\" means a license from the INTO-CPS Association to the Contributors of the Contribution, effective on the date of the Transfer of Contribution Copyright, where the INTO-CPS Association grants the Contributors a non-exclusive, world-wide, transferable, free of charge, perpetual license, including sublicensing rights, to use, have used, modify, have modified, reproduce and or have reproduced the contributed material, for business and other purposes, including but not limited to evaluation, development, testing, integration and merging with other software and distribution. The warranty and liability disclaimers of ICAPL apply to this license.

\"Contributor\" means any person or entity that distributes (part of) the INTO-CPS tool chain.

\"The Program\" means the Contributions distributed in accordance with ICAPL.

\"The INTO-CPS tool chain\" means the Contributions distributed in accordance with ICAPL.

\"Recipient\" means anyone who receives the INTO-CPS tool chain under ICAPL, including all Contributors.

\"Licensed Third Party Distributor\" means a reseller/distributor having signed a redistribution/resale agreement in accordance with ICAPL and the INTO-CPS Association Bylaws, with a Gold Level organizational member which is not an Affiliate of the reseller/distributor, for distributing a product containing part(s) of the INTO-CPS tool suite. The Licensed Third Party Distributor shall only be allowed further redistribution to other resellers if the Gold Level member is granting such a right to it in the redistribution/resale agreement between the Gold Level member and the Licensed Third Party Distributor.

\"Affiliate\" shall mean any legal entity, directly or indirectly, through one or more intermediaries, controlling or controlled by or under common control with any other legal entity, as the case may be. For purposes of this definition, the term \"control\" (including the terms \"controlling,\" \"controlled by\" and \"under common control with\") means the possession, direct or indirect, of the power to direct or cause the direction of the management and policies of a legal entity, whether through the ownership of voting securities, by contract or otherwise.

NO WARRANTY

EXCEPT AS EXPRESSLY SET FORTH IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF ICAPL, THE INTO-CPS ASSOCIATION IS PROVIDED ON AN \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the INTO-CPS tool suite and assumes all risks associated with its exercise of rights under ICAPL , including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.

DISCLAIMER OF LIABILITY

EXCEPT AS EXPRESSLY SET FORTH IN THE BY RECIPIENT SELECTED SUBSIDIARY LICENSE CONDITIONS OF ICAPL, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE INTO-CPS TOOL SUITE OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

A Contributor licensing the INTO-CPS tool suite under ICA-Internal-EPL or ICA-External-EPL may choose to distribute (parts of) the INTO-CPS tool suite in object code form under its own license agreement, provided that:

a) it complies with the terms and conditions of ICAPL; or for the case of redistribution of the INTO-CPS tool suite together with proprietary code it is a dual license where the INTO-CPS tool suite parts are distributed under ICAPL compatible conditions and the proprietary code is distributed under proprietary license conditions; and

b) its license agreement: i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; iii) states that any provisions which differ from ICAPL are offered by that Contributor alone and not by any other party; and iv) states from where the source code for the INTO-CPS tool suite is available, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.

When the INTO-CPS tool suite is made available in source code form:

a) it must be made available under ICAPL; and

b) a copy of ICAPL must be included with each copy of the INTO-CPS tool suite.

c) a copy of the subsidiary license associated with the selected mode of ICAPL must be included with each copy of the INTO-CPS tool suite.

Contributors may not remove or alter any copyright notices contained within The INTO-CPS tool suite.

If there is a conflict between ICAPL and the subsidiary license conditions, ICAPL has priority.

This Agreement is governed by the laws of Denmark. The place of jurisdiction for all disagreements related to this Agreement, is Aarhus, Denmark.

The EPL 1.0 license definition has been obtained from: http://www.eclipse.org/legal/epl-v10.html. It is also reproduced in the INTO-CPS distribution.

The GPL Version 3 license definition has been obtained from http://www.gnu.org/copyleft/gpl.html. It is also reproduced in the INTO-CPS distribution.

--- End of Definition of INTO-CPS Association Public License ---

"},{"location":"PUBLISH.html","title":"Project Documentation","text":"

This file contains instructions for creation, compilation and publication of project documentation.

The documentation system is based on Material for Mkdocs. The documentation is generated based on the configuration files:

  • mkdocs.yml: used for generating online documentation which is hosted on the web
  • mkdocs-github.yml: used for generating documentation in github actions

Install Mkdocs using the following command.

pip install -r docs/requirements.txt\n
"},{"location":"PUBLISH.html#fix-linting-errors","title":"Fix Linting Errors","text":"

This project uses the markdownlint linter tool for identifying formatting issues in markdown files. Run

mdl docs\n

from the top directory of the project and fix any identified issues. This needs to be done before committing changes to the documentation.

"},{"location":"PUBLISH.html#create-documentation","title":"Create documentation","text":"

The document generation pipeline can generate both html and pdf versions of documentation.

The generation of the pdf version of the documentation is controlled via a shell variable.

export MKDOCS_ENABLE_PDF_EXPORT=0 #disables generation of pdf document\nexport MKDOCS_ENABLE_PDF_EXPORT=1 #enables generation of pdf document\n

The mkdocs utility allows for live editing of documentation on the developer computer.

You can add and edit the markdown files in the docs/ directory to update the documentation. There is a facility to check the status of your documentation by using:

mkdocs serve --config-file mkdocs.yml\n
"},{"location":"PUBLISH.html#publish-documentation","title":"Publish documentation","text":"

You can compile and place the html version of documentation on the webpage-docs branch of the codebase.

export MKDOCS_ENABLE_PDF_EXPORT=1 #enable generation of pdf document\nsource script/docs.sh [version]\n

The command takes an optional version parameter. This version parameter is needed for making a release. Otherwise, the documentation gets published with the latest version tag. This command makes a new commit on webpage-docs branch. You need to push the branch to upstream.

git push webpage-docs\n

The github pages system serves the project documentation from this branch.

"},{"location":"bugs.html","title":"Few issues in the Software","text":""},{"location":"bugs.html#third-party-software","title":"Third-Party Software","text":"
  • We use third-party software components which have certain known issues. Some of the issues are listed below.
"},{"location":"bugs.html#ml-workspace","title":"ML Workspace","text":"
  • the docker container loses network connectivity after three days. The only known solution is to restart the docker container. You don't need to restart the complete DTaaS platform; restarting the ml-workspace docker container is sufficient (see the example command after this list).
  • the terminal tool doesn't seem to have the ability to refresh itself. If there is an issue, the only solution is to close and reopen the terminal from the "open tools" dropdown of the notebook
  • the terminal app does not show up at all after some time: the terminal always opens when launched from the drop-down menu of the Jupyter Notebook, but not via the direct link.
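
For example, assuming the container follows the ml-workspace-<username> naming used elsewhere in this documentation, the restart can be done with:

docker restart ml-workspace-<username>\n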
"},{"location":"bugs.html#gitlab","title":"Gitlab","text":"
  • The gitlab OAuth authentication service does not have a way to sign out of a third-party application. Even if you sign out of DTaaS, gitlab still shows the user as signed in. The next time you click on the sign in button on the DTaaS page, the user is not shown the login page; instead the user is taken directly to the Library page. So close the browser window after you are done. Another way to overcome this limitation is to open your gitlab instance (https://gitlab.foo.com) and sign out from there. Thus the user needs to sign out of two places, namely DTaaS and gitlab, in order to completely exit the DTaaS application.
"},{"location":"thanks.html","title":"Contributors","text":"

code contributors

"},{"location":"thanks.html#users","title":"Users","text":"

Cl\u00e1udio \u00c2ngelo Gon\u00e7alves Gomes, Dmitri Tcherniak, Elif Ecem Bas, Giuseppe Abbiati, Hao Feng, Henrik Ejersbo, Tanusree Roy, Farshid Naseri

"},{"location":"thanks.html#documentation","title":"Documentation","text":"
  1. Talasila, P., Gomes, C., Mikkelsen, P. H., Arboleda, S. G., Kamburjan, E., & Larsen, P. G. (2023). Digital Twin as a Service (DTaaS): A Platform for Digital Twin Developers and Users arXiv preprint arXiv:2305.07244.
  2. Astitva Sehgal for developer and example documentation.
  3. Tanusree Roy and Farshid Naseri for asking interesting questions that ended up in FAQs.
"},{"location":"admin/host.html","title":"DTaaS on Linux Operating System","text":"

These are installation instructions for running the DTaaS application on an Ubuntu Server 22.04 Operating System. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space.

A dummy foo.com URL has been used for illustration. Please change this to your unique website URL. It is assumed that you are going to serve the application in only HTTPS mode.

A successful installation will create a setup similar to the one shown in the figure.

Please follow these steps to make this work in your local environment. Download the DTaaS.zip from the releases page. Unzip the same into a directory named DTaaS. The rest of the instructions assume that your working directory is DTaaS.

Note

If you only want to test the application and are not setting up a production instance, you can follow the instructions of trial installation.

"},{"location":"admin/host.html#configuration","title":"Configuration","text":"

You need to configure the Traefik gateway, library microservice and react client website.

The first step is to decide on the number of users and their usernames. The traefik gateway configuration has a template for two users. You can modify the usernames in the template to the usernames chosen by you.

"},{"location":"admin/host.html#traefik-gateway-server","title":"Traefik gateway server","text":"

You can run the Traefik gateway server in both HTTP and HTTPS mode to experience the DTaaS application. The installation guide assumes that you can run the application in HTTPS mode.

The Traefik gateway configuration is at deploy/config/gateway/fileConfig.yml. Change foo.com to your local hostname and user1/user2 to the usernames chosen by you.

Tip

Do not use http:// or https:// in deploy/config/gateway/fileConfig.yml.

"},{"location":"admin/host.html#authentication","title":"Authentication","text":"

This step requires htpasswd commandline utility. If it is not available on your system, please install the same by using

sudo apt-get update\nsudo apt-get install -y apache2-utils\n

You can now proceed with update of the gateway authentication setup. The dummy username is foo and the password is bar. Please change this before starting the gateway.

rm deploy/config/gateway/auth\ntouch deploy/config/gateway/auth\nhtpasswd deploy/config/gateway/auth <first_username>\npassword: <your password>\n

The user credentials added in deploy/config/gateway/auth should match the usernames in deploy/config/gateway/fileConfig.yml.
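
If your fileConfig.yml lists more than one username, run htpasswd once per user so that every gateway user has credentials. For example, for a second user:

htpasswd deploy/config/gateway/auth <second_username>\n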

"},{"location":"admin/host.html#lib-microservice","title":"Lib microservice","text":"

The library microservice requires configuration. A template of this configuration file is given in deploy/config/lib file. Please modify this file as per your needs.

The first step in this configuration is to prepare a filesystem for users. An example file system is given in the files/ directory. You can rename the top-level user1/user2 directories to the usernames chosen by you.

Add an environment file named .env in lib for the library microservice. An example .env file is given below. The simplest possibility is to use local mode, as in the following example. The filepath is the absolute path to the files/ directory. You can copy this configuration into the deploy/config/lib file to get started.

PORT='4001'\nMODE='local'\nLOCAL_PATH='filepath'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib'\nGRAPHQL_PLAYGROUND='true'\n
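
Once the microservice is running (after the installation described later on this page), a minimal sanity check is to request the APOLLO_PATH endpoint on the configured port; the exact response depends on the GraphQL playground setting:

curl -s http://localhost:4001/lib | head\n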
"},{"location":"admin/host.html#react-client-website","title":"React Client Website","text":""},{"location":"admin/host.html#gitlab-oauth-application","title":"Gitlab OAuth application","text":"

The DTaaS react website requires Gitlab OAuth provider. If you need more help with this step, please see the Authentication page.

You need the following information from the OAuth application registered on Gitlab:

Gitlab Variable Name Variable name in Client env.js Default Value OAuth Provider REACT_APP_AUTH_AUTHORITY https://gitlab.foo.com/ Application ID REACT_APP_CLIENT_ID Callback URL REACT_APP_REDIRECT_URI https://foo.com/Library Scopes REACT_APP_GITLAB_SCOPES openid, profile, read_user, read_repository, api

You can also see the Gitlab help page for getting the Gitlab OAuth application details. Remember to create gitlab accounts for the usernames chosen by you.

"},{"location":"admin/host.html#update-client-config","title":"Update Client Config","text":"

Change the React website configuration in deploy/config/client/env.js.

window.env = {\nREACT_APP_ENVIRONMENT: \"prod\",\nREACT_APP_URL: \"https://foo.com/\",\nREACT_APP_URL_BASENAME: \"dtaas\",\nREACT_APP_URL_DTLINK: \"/lab\",\nREACT_APP_URL_LIBLINK: \"\",\nREACT_APP_WORKBENCHLINK_TERMINAL: \"/terminals/main\",\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: \"/tools/vnc/?password=vncpassword\",\nREACT_APP_WORKBENCHLINK_VSCODE: \"/tools/vscode/\",\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: \"/lab\",\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: \"\",\nREACT_APP_CLIENT_ID:\n\"934b98f03f1b6f743832b2840bf7cccaed93c3bfe579093dd0942a433691ccc0\",\nREACT_APP_AUTH_AUTHORITY: \"https://gitlab.foo.com/\",\nREACT_APP_REDIRECT_URI: \"https://foo.com/Library\",\nREACT_APP_LOGOUT_REDIRECT_URI: \"https://foo.com/\",\nREACT_APP_GITLAB_SCOPES: \"openid profile read_user read_repository api\",\n};\n
"},{"location":"admin/host.html#update-the-installation-script","title":"Update the installation script","text":"

Open deploy/install.sh and update user1/user2 to usernames chosen by you.

"},{"location":"admin/host.html#perform-the-installation","title":"Perform the Installation","text":"

Go to the DTaaS directory and execute

source deploy/install.sh\n

You can run this script multiple times until the installation is successful.

Note

While installing, you might encounter multiple dialogs asking which services should be restarted. Just click OK in all of them.

"},{"location":"admin/host.html#post-install-check","title":"Post-install Check","text":"

Now you should be able to access the DTaaS application at: https://foo.com.
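
A quick reachability check from the command line, assuming your DNS and reverse proxy are in place, is shown below; a response such as 200 or 401 (from the gateway's basic authentication) indicates the gateway is answering:

curl -I https://foo.com\n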

If you can follow all the screenshots from the user website, everything is correctly set up.

"},{"location":"admin/host.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"admin/overview.html","title":"Overview","text":""},{"location":"admin/overview.html#what-is-the-goal","title":"What is the goal?","text":"

The goal is to set up the DTaaS infrastructure in order to enable your users to use the DTaaS. As an admin you will administer the users and the servers of the system.

"},{"location":"admin/overview.html#what-are-the-requirements","title":"What are the requirements?","text":""},{"location":"admin/overview.html#oauth-provider","title":"OAuth Provider","text":"

You need to have an OAuth Provider running, which the DTaaS can use for authentication. This is described further in the authentication section.

"},{"location":"admin/overview.html#domain-name","title":"Domain name","text":"

The DTaaS software can only be hosted on a server with a domain name like foo.com.

"},{"location":"admin/overview.html#reverse-proxy","title":"Reverse Proxy","text":"

The installation setup assumes that the foo.com server is behind a reverse proxy / load balancer that provides https termination. You can still use the DTaaS software even if you do not have this reverse proxy. If you do not have a reverse proxy, please replace https://foo.com with http://foo.com in client env.js file and in OAuth registration. Other installation configuration remains the same.

"},{"location":"admin/overview.html#what-to-install","title":"What to install?","text":"

The DTaaS can be installed in different ways. Each version is for different purposes:

  • Trial installation on single host
  • Production installation on single host
  • On one or two Vagrant virtual machines
  • Separate Packages: client website and lib microservice

Follow the installation that fits your usecase.

"},{"location":"admin/services.html","title":"Third-party Services","text":"

The DTaaS software platform uses third-party software services to provide enhanced value to users.

InfluxDB, Grafana, RabbitMQ and Mosquitto are default services integrated into the DTaaS software platform.

"},{"location":"admin/services.html#pre-requisites","title":"Pre-requisites","text":"

All these services run on raw TCP/UDP ports. Thus direct network access to these services is required for both the DTs running inside the DTaaS software and the PTs located outside the DTaaS software.

There are two possible choices here:

  • Configure Traefik gateway to permit TCP/UDP traffic
  • Bypass Traefik altogether

Unless you are an informed user of Traefik, we recommend bypassing Traefik and providing raw TCP/UDP access to these services from the Internet.

The InfluxDB service requires a dedicated hostname. The management interface of RabbitMQ service requires a dedicated hostname as well.

The Grafana service can run well behind the Traefik gateway. The default Traefik configuration permits access to Grafana at the URL: http(s)://foo.com/vis.

"},{"location":"admin/services.html#configure-and-install","title":"Configure and Install","text":"

If you have not cloned the DTaaS git repository, cloning would be the first step. In case you already have the codebase, you can skip the cloning step. To clone, do:

git clone https://github.com/into-cps-association/DTaaS.git\ncd DTaaS/deploy/services\n

The next step in the installation is to specify the config of the services. There are two configuration files. The services.yml contains most of the configuration settings. The mqtt-default.conf file contains the MQTT listening port. Update these two config files before proceeding with the installation of the services.

Now continue with the installation of services.

yarn install\nnode services.js\n
"},{"location":"admin/services.html#use","title":"Use","text":"

After the installation is complete, you can see the following services active at the following ports / URLs.

service external url Influx services.foo.com Grafana services.foo.com:3000 RabbitMQ Broker services.foo.com:5672 RabbitMQ Broker Management Website services.foo.com:15672 MQTT Broker services.foo.com:1883 MongoDB database services.foo.com:27017

The firewall and network access settings of corporate / cloud network need to be configured to allow external access to the services. Otherwise the users of DTaaS will not be able to utilize these services from their user workspaces.
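
To verify external reachability after opening the firewall, a simple TCP check from a machine outside the network can be used; this assumes netcat is available and uses the MQTT and MongoDB ports as examples:

nc -zv services.foo.com 1883\nnc -zv services.foo.com 27017\n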

"},{"location":"admin/trial.html","title":"Trial Installation","text":"

To try out the software, you can install it on an Ubuntu Server 22.04 Operating System. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space. A successful installation will create a setup similar to the one shown in the figure.

A one-step installation script is provided on this page. This script sets up the DTaaS software with default credentials and users. You can use it to check a test installation of DTaaS software.

"},{"location":"admin/trial.html#pre-requisites","title":"Pre-requisites","text":""},{"location":"admin/trial.html#1-domain-name","title":"1. Domain name","text":"

You need a domain name to run the application. The install script assumes foo.com to be your domain name. You will change this after running the script.

"},{"location":"admin/trial.html#2-gitlab-oauth-application","title":"2. Gitlab OAuth application","text":"

The DTaaS react website requires Gitlab OAuth provider. If you need more help with this step, please see the Authentication page.

You need the following information from the OAuth application registered on Gitlab:

Gitlab Variable Name Variable name in Client env.js Default Value OAuth Provider REACT_APP_AUTH_AUTHORITY https://gitlab.foo.com/ Application ID REACT_APP_CLIENT_ID Callback URL REACT_APP_REDIRECT_URI https://foo.com/Library Scopes REACT_APP_GITLAB_SCOPES openid, profile, read_user, read_repository, api

You can also see Gitlab help page for getting the Gitlab OAuth application details.

Remember to create gitlab accounts for user1 and user2.

"},{"location":"admin/trial.html#install","title":"Install","text":"

Note

While installing, you might encounter multiple dialogs asking which services should be restarted. Just click OK in all of them.

Run the following scripts.

wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS/feature/distributed-demo/deploy/single-script-install.sh\nbash single-script-install.sh\n

Warning

This test installation has default credentials and is thus highly insecure.

"},{"location":"admin/trial.html#post-install","title":"Post install","text":"

After running the install script, please change foo.com and the Gitlab OAuth details to your local settings in the following files.

~/DTaaS/client/build/env.js\n~/DTaaS/servers/config/gateway/dynamic/fileConfig.yml\n
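
One way to replace the dummy domain in both files is with sed; substitute your.domain with your actual domain name. Note that this also rewrites gitlab.foo.com, so adjust the Gitlab URL separately if your OAuth provider is hosted elsewhere.

sed -i 's|foo.com|your.domain|g' ~/DTaaS/client/build/env.js\nsed -i 's|foo.com|your.domain|g' ~/DTaaS/servers/config/gateway/dynamic/fileConfig.yml\n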
"},{"location":"admin/trial.html#post-install-check","title":"Post-install Check","text":"

Now when you visit your domain, you should be able to log in through your OAuth Provider and access the DTaaS web UI.

If you can follow all the screenshots from the user website, everything is correctly set up.

"},{"location":"admin/trial.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"admin/client/CLIENT.html","title":"Host the DTaaS Client Website","text":"

To host DTaaS client website on your server, follow these steps:

  • Download the DTaaS-client.zip from the releases page.
  • Inside the DTaaS-client directory, there is a site directory. The site directory contains all the optimized static files that are ready for deployment.

  • Set up the OAuth application on your gitlab instance. See the instructions on the authentication page for completing this task.

  • Locate the file site/env.js and replace the example values to match your infrastructure. The constructed links will be \"REACT_APP_URL/REACT_APP_URL_BASENAME/{username}/{Endpoint}\". See the definitions below:
window.env = {\nREACT_APP_ENVIRONMENT: \"prod | dev\",\nREACT_APP_URL: \"URL for the gateway\",\nREACT_APP_URL_BASENAME: \"Base URL for the client website\"(optional),\nREACT_APP_URL_DTLINK: \"Endpoint for the Digital Twin\",\nREACT_APP_URL_LIBLINK: \"Endpoint for the Library Assets\",\nREACT_APP_WORKBENCHLINK_TERMINAL: \"Endpoint for the terminal link\",\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: \"Endpoint for the VNC Desktop link\",\nREACT_APP_WORKBENCHLINK_VSCODE: \"Endpoint for the VS Code link\",\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: \"Endpoint for the Jupyter Lab link\",\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK:\n\"Endpoint for the Jupyter Notebook link\",\nREACT_APP_CLIENT_ID: 'AppID genereated by the gitlab OAuth provider',\nREACT_APP_AUTH_AUTHORITY: 'URL of the private gitlab instance',\nREACT_APP_REDIRECT_URI: 'URL of the homepage for the logged in users of the website',\nREACT_APP_LOGOUT_REDIRECT_URI: 'URL of the homepage for the anonymous users of the website',\nREACT_APP_GITLAB_SCOPES: 'OAuth scopes. These should match with the scopes set in gitlab OAuth provider',\n};\n// Example values with no base URL. Trailing and ending slashes are optional.\nwindow.env = {\nREACT_APP_ENVIRONMENT: 'prod',\nREACT_APP_URL: 'https://foo.com/',\nREACT_APP_URL_BASENAME: '',\nREACT_APP_URL_DTLINK: '/lab',\nREACT_APP_URL_LIBLINK: '',\nREACT_APP_WORKBENCHLINK_TERMINAL: '/terminals/main',\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: '/tools/vnc/?password=vncpassword',\nREACT_APP_WORKBENCHLINK_VSCODE: '/tools/vscode/',\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: '/lab',\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: '',\nREACT_APP_CLIENT_ID: '934b98f03f1b6f743832b2840bf7cccaed93c3bfe579093dd0942a433691ccc0',\nREACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\nREACT_APP_REDIRECT_URI: 'https://foo.com/Library',\nREACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/',\nREACT_APP_GITLAB_SCOPES: 'openid profile read_user read_repository api',\n};\n// Example values with \"bar\" as basename URL.\n//Trailing and ending slashes are optional.\nwindow.env = {\nREACT_APP_ENVIRONMENT: \"dev\",\nREACT_APP_URL: 'https://foo.com/',\nREACT_APP_URL_BASENAME: 'bar',\nREACT_APP_URL_DTLINK: '/lab',\nREACT_APP_URL_LIBLINK: '',\nREACT_APP_WORKBENCHLINK_TERMINAL: '/terminals/main',\nREACT_APP_WORKBENCHLINK_VNCDESKTOP: '/tools/vnc/?password=vncpassword',\nREACT_APP_WORKBENCHLINK_VSCODE: '/tools/vscode/',\nREACT_APP_WORKBENCHLINK_JUPYTERLAB: '/lab',\nREACT_APP_WORKBENCHLINK_JUPYTERNOTEBOOK: '',\nREACT_APP_CLIENT_ID: '934b98f03f1b6f743832b2840bf7cccaed93c3bfe579093dd0942a433691ccc0',\nREACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\nREACT_APP_REDIRECT_URI: 'https://foo.com/bar/Library',\nREACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/bar',\nREACT_APP_GITLAB_SCOPES: 'openid profile read_user read_repository api',\n};\n
  • Copy the entire contents of the site directory to the root directory of your server where you want to deploy the app. You can use FTP, SFTP, or any other file transfer protocol to transfer the files.

  • Make sure your server is configured to serve static files. This can vary depending on the server technology you are using, but typically you will need to configure your server to serve files from a specific directory.

  • Once the files are on your server, you should be able to access your app by visiting your server's IP address or domain name in a web browser.

The website depends on the Traefik gateway and ML Workspace components being available. Otherwise, you only get a skeleton, non-functional website.

"},{"location":"admin/client/CLIENT.html#complementary-components","title":"Complementary Components","text":"

The website requires background services for providing actual functionality. The minimum background service required is at least one ML Workspace serving the following routes.

https://foo.com/<username>/lab\nhttps://foo.com/<username>/terminals/main\nhttps://foo.com/<username>/tools/vnc/?password=vncpassword\nhttps://foo.com/<username>/tools/vscode/\n

The username corresponds to a user workspace created using the ML Workspace docker container. Please follow the instructions in the README. You can create as many user workspaces as you want. If you have two users - alice and bob - on your system, then the following commands will instantiate the required user workspaces.

mkdir -p files/alice files/bob files/common\n\nprintf \"\\n\\n start the user workspaces\"\ndocker run -d \\\n-p 8090:8080 \\\n--name \"ml-workspace-alice\" \\\n-v \"$(pwd)/files/alice:/workspace\" \\\n-v \"$(pwd)/files/common:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"alice\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2\n\ndocker run -d \\\n-p 8091:8080 \\\n--name \"ml-workspace-bob\" \\\n-v \"$(pwd)/files/bob:/workspace\" \\\n-v \"$(pwd)/files/common:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"bob\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2\n
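
You can confirm that both workspace containers are up with:

docker ps --filter name=ml-workspace\n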

Given that multiple services are running at different routes, a reverse proxy is needed to map the background services to external routes. You can use Apache, NGINX, Traefik or any other software to work as reverse proxy.

The website screenshots and usage information are available on the user page.

"},{"location":"admin/client/auth.html","title":"Setting Up OAuth","text":"

To enable user authentication on DTaaS React client website, you will use the OAuth authentication protocol, specifically the PKCE authentication flow. Here are the steps to get started:

1. Choose Your GitLab Server:

  • You need to set up OAuth authentication on a GitLab server. The commercial gitlab.com is not suitable for multi-user authentication (DTaaS requires this), so you'll need an on-premise GitLab instance.
  • You can use GitLab Omnibus Docker for this purpose.
  • Configure the OAuth application as an instance-wide authentication type.

2. Determine Your Website's Hostname:

  • Before setting up OAuth on GitLab, decide on the hostname for your website. It's recommended to use a self-hosted GitLab instance, which you will use in other parts of the DTaaS application.

3. Define Callback and Logout URLs:

  • For the PKCE authentication flow to function correctly, you need two URLs: a callback URL and a logout URL.
  • The callback URL informs the OAuth provider of the page where signed-in users should be redirected. It's different from the landing homepage of the DTaaS application.
  • The logout URL is where users will be directed after logging out.

4. OAuth Application Creation:

  • During the creation of the OAuth application on GitLab, you need to specify the scope. Choose openid, profile, read_user, read_repository, and api scopes.

5. Application ID:

  • After successfully creating the OAuth application, GitLab generates an application ID. This is a long string of HEX values that you will need for your configuration files.

6. Required Information from OAuth Application:

  • You will need the following information from the OAuth application registered on GitLab:
GitLab Variable Name Variable Name in Client env.js Default Value OAuth Provider REACT_APP_AUTH_AUTHORITY https://gitlab.foo.com/ Application ID REACT_APP_CLIENT_ID Callback URL REACT_APP_REDIRECT_URI https://foo.com/Library Scopes REACT_APP_GITLAB_SCOPES openid, profile, read_user, read_repository, api

7. Create User Accounts:

Create user accounts in gitlab for all the usernames chosen during installation. The trial installation script comes with two default usernames - user1 and user2. For all other installation scenarios, accounts with specific usernames need to be created on gitlab.

"},{"location":"admin/client/auth.html#development-environment","title":"Development Environment","text":"

There need to be valid callback and logout URLs for development and testing purposes. You can use the same OAuth application ID for development, testing and deployment scenarios; only the callback and logout URLs change. It is possible to register multiple callback URLs in one OAuth application. In order to use OAuth for development and testing on the developer computer (localhost), you need to add the following to the OAuth callback URLs.

DTaaS application URL: http://localhost:4000\nCallback URL: http://localhost:4000/Library\nLogout URL: http://localhost:4000\n

The port 4000 is the default port for running the client website.

"},{"location":"admin/client/auth.html#multiple-dtaas-applications","title":"Multiple DTaaS applications","text":"

The DTaaS is a regular web application. It is possible to host multiple DTaaS applications on the same server. The only requirement is to have distinct URLs. You can have three DTaaS applications running at the following URLs.

https://foo.com/au\nhttps://foo.com/acme\nhttps://foo.com/bar\n

All of these instances can use the same gitlab instance for authentication.

DTaaS application URL Gitlab Instance URL Callback URL Logout URL Application ID https://foo.com/au https://foo.gitlab.com https://foo.com/au/Library https://foo.com/au autogenerated by gitlab https://foo.com/acme https://foo.gitlab.com https://foo.com/acme/Library https://foo.com/acme autogenerated by gitlab https://foo.com/bar https://foo.gitlab.com https://foo.com/bar/Library https://foo.com/bar autogenerated by gitlab

If you are hosting multiple DTaaS instances on the same server, do not install DTaaS with a null basename on the same server. Even though it works, such a setup is confusing and may lead to maintenance issues.

If you choose to host your DTaaS application with a basename (say bar), then the URLs in env.js change to:

DTaaS application URL: https://foo.com/bar\nGitlab instance URL: https://foo.gitlab.com\nCallback URL: https://foo.com/bar/Library\nLogout URL: https://foo.com/bar\n
"},{"location":"admin/guides/add_service.html","title":"Add other services","text":"

Pre-requisite

You should read the documentation about the already available services

This guide will show you how to add more services. In the following example we will be adding MongoDB as a service, but these steps could be modified to install other services as well.

Adding other services requires more RAM and CPU power. Please make sure the host machine meets the hardware requirements for running all the services.

1. Add the configuration:

Select configuration parameters for the MongoDB service.

Configuration Variable Name Description username the username of the root user in the MongoDB password the password of the root user in the MongoDB port the mapped port on the host machine (default is 27017) datapath path on host machine to mount the data from the MongoDB container

Open the file /deploy/services/services.yml and add the configuration for MongoDB:

services:\n    rabbitmq:\n        username: \"dtaas\"\n        password: \"dtaas\"\n        vhost: \"/\"\n        ports:\n            main: 5672\n            management: 15672\n    ...\n    mongodb:\n        username: <username>\n        password: <password>\n        port: <port>\n        datapath: <datapath>\n    ...\n

2. Add the script:

The next step is to add the script that sets up the MongoDB container with the configuration.

Create new file named /deploy/services/mongodb.js and add the following code:

#!/usr/bin/node\n/* Install the optional platform services for DTaaS */\nimport { $ } from \"execa\";\nimport chalk from \"chalk\";\nimport fs from \"fs\";\nimport yaml from \"js-yaml\";\nconst $$ = $({ stdio: \"inherit\" });\nconst log = console.log;\nlet config;\ntry {\nlog(chalk.blue(\"Load services configuration\"));\nconfig = await yaml.load(fs.readFileSync(\"services.yml\", \"utf8\"));\nlog(\nchalk.green(\n\"configuration loading is successful and config is a valid yaml file\"\n)\n);\n} catch (e) {\nlog(chalk.red(\"configuration is invalid. Please rectify services.yml file\"));\nprocess.exit(1);\n}\nlog(chalk.blue(\"Start MongoDB server\"));\nconst mongodbConfig = config.services.mongodb;\ntry {\nlog(\nchalk.green(\n\"Attempt to delete any existing MongoDB server docker container\"\n)\n);\nawait $$`docker stop mongodb`;\nawait $$`docker rm mongodb`;\n} catch (e) {}\nlog(chalk.green(\"Start new Mongodb server docker container\"));\nawait $$`docker run -d -p ${mongodbConfig.port}:27017 \\\n  --name mongodb \\\n  -v ${mongodbConfig.datapath}:/data/db \\\n  -e MONGO_INITDB_ROOT_USERNAME=${mongodbConfig.username} \\\n  -e MONGO_INITDB_ROOT_PASSWORD=${mongodbConfig.password} \\\n  --restart always \\\n  mongo:7.0.3`;\nlog(chalk.green(\"MongoDB server docker container started successfully\"));\n

3. Run the script:

Go to the directory /deploy/services/ and run services script with the following commands:

yarn install\nnode mongodb.js\n

The MongoDB should now be available on services.foo.com:<port>.
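
As a quick check, assuming the mongosh client is installed on your machine, you can connect with the credentials configured in services.yml:

mongosh \"mongodb://<username>:<password>@services.foo.com:<port>\"\n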

"},{"location":"admin/guides/add_user.html","title":"Add a new user","text":"

This page will guide you on how to add more users to the DTaaS. Please do the following:

Important

Make sure to replace <username> and <port>. Select a port that is not already being used by the system.

1. Add user:

Add the new user on the Gitlab instance.

2. Setup a new workspace:

The following commands create a new workspace for the new user, based on user2.

cd DTaaS/files\ncp -R user2 <username>\ncd ..\ndocker run -d \\\n-p <port>:8080 \\\n--name \"ml-workspace-<username>\" \\\n-v \"${TOP_DIR}/files/<username>:/workspace\" \\\n-v \"${TOP_DIR}/files/<username>:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"<username>\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2\n

3. Add username and password:

The following code adds basic authentication for the new user.

cd DTaaS/servers/config/gateway\nhtpasswd auth <username>\n

4. Add 'route' for new user:

We need to add a new route to the server's ingress.

Open the following file with your preferred editor (e.g. VIM/nano).

vi DTaaS/servers/config/gateway/dynamic/fileConfig.yml\n

Now add the new route and service for the user.

Important

foo.com should be replaced with your own domain.

http:\n  routers:\n    ....\n    <username>:\n      entryPoints:\n        - http\n      rule: 'Host(`foo.com`) && PathPrefix(`/<username>`)'\n      middlewares:\n        - basic-auth\n      service: <username>\n\n  services:\n    ...\n    <username>:\n      loadBalancer:\n        servers:\n          - url: 'http://localhost:<port>'\n

5. Access the new user:

Log into the DTaaS application as new user.

"},{"location":"admin/guides/common_workspace_readonly.html","title":"Make common asset area read only","text":""},{"location":"admin/guides/common_workspace_readonly.html#why","title":"Why","text":"

In some cases you might want to restrict the access rights of some users to the common assets. In order to make the common area read only, you have to change the install script section performing the creation of user workspaces.

"},{"location":"admin/guides/common_workspace_readonly.html#how","title":"How","text":"

To make the common assets read-only for user2, the following changes need to be made to the install script, which is located in one of the following places:

  • trial installation: single-script-install.sh

  • production installation: DTaaS/deploy/install.sh

The line -v \"${TOP_DIR}/files/common:/workspace/common:ro\" is added to make the common workspace read-only for user2.

Here's the updated code:

docker run -d \\\n-p 8091:8080 \\\n--name \"ml-workspace-user2\" \\\n-v \"${TOP_DIR}/files/user2:/workspace\" \\\n-v \"${TOP_DIR}/files/common:/workspace/common:ro\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"user2\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2 || true\n

This ensures that the common area is read-only for user2, while the user's own (private) assets are still writable.
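
If you would like to verify this, a simple check (a sketch, assuming the user2 workspace is running) is to attempt a write inside the common directory from a terminal in the user2 workspace; the write is expected to fail:

touch /workspace/common/write-test.txt\n# expected to fail with a 'Read-only file system' error\n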

"},{"location":"admin/guides/hosting_site_without_https.html","title":"Hosting site without https","text":"

In the default trial or production installation setup, the https connection is provided by the reverse proxy. The DTaaS application by default runs in http mode. So removing the reverse proxy removes the https mode.

"},{"location":"admin/guides/link_service.html","title":"Link services to local ports","text":"

Requirements

  • User needs to have an account on server2.
  • SSH server must be running on server2

To link a port from the services machine (server2) to a local port on the user workspace, you can use the SSH local port forwarding technique.

1. Step:

Go to the user workspace in which you want to map a local port to a port on the services machine.

  • e.g. foo.com/user1

2. Step:

Open a terminal in your user workspace.

3. Step:

Run the following command to map a port:

ssh -fNT -L <local_port>:<destination>:<destination_port> <user>@<services.server.com>\n

Here's an example mapping the RabbitMQ broker service available at 5672 of services.foo.com to localhost port 5672.

ssh -fNT -L 5672:localhost:5672 vagrant@services.foo.com\n

Now the programs in user workspace can treat the RabbitMQ broker service as a local service running within user workspace.
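
As a quick sanity check, assuming the iproute2 utilities are available in the workspace, you can confirm that the forwarded port is listening on localhost:

ss -tln | grep 5672\n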

"},{"location":"admin/guides/update_basepath.html","title":"Update basepath/route for the application","text":"

The updates required to make the application work with basepath (say bar):

1. Change the Gitlab OAuth URLs to include basepath:

  REACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',\n  REACT_APP_REDIRECT_URI: 'https://foo.com/bar/Library',\n  REACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/bar',\n

2. Update traefik gateway config (deploy/config/gateway/fileConfig.yml):

http:\n  routers:\n    dtaas:\n      entryPoints:\n        - http\n      rule: \"Host(`foo.com`)\" #remember, there is no basepath for this rule\n      middlewares:\n        - basic-auth\n      service: dtaas\n\n    user1:\n      entryPoints:\n        - http\n      rule: \"Host(`foo.com`) && PathPrefix(`/bar/user1`)\"\n      middlewares:\n        - basic-auth\n      service: user1\n\n  # Middleware: Basic authentication\n  middlewares:\n    basic-auth:\n      basicAuth:\n        usersFile: \"/etc/traefik/auth\"\n        removeHeader: true\n\n  services:\n    dtaas:\n      loadBalancer:\n        servers:\n          - url: \"http://localhost:4000\"\n\n    user1:\n      loadBalancer:\n        servers:\n          - url: \"http://localhost:8090\"\n

3. Update deploy/config/client/env.js:

See the client documentation for an example.

4. Update install scripts:

Update deploy/install.sh by adding basepath. For example, add WORKSPACE_BASE_URL=\"bar/\" for all user workspaces.

For user1, the docker command changes to:

docker run -d \\\n-p 8090:8080 \\\n--name \"ml-workspace-user1\" \\\n-v \"${TOP_DIR}/files/user1:/workspace\" \\\n-v \"${TOP_DIR}/files/common:/workspace/common\" \\\n--env AUTHENTICATE_VIA_JUPYTER=\"\" \\\n--env WORKSPACE_BASE_URL=\"bar/user1\" \\\n--shm-size 512m \\\n--restart always \\\nmltooling/ml-workspace-minimal:0.13.2 || true\n

5. Proceed with install using deploy/install.sh:

"},{"location":"admin/servers/lib/LIB-MS.html","title":"Host Library Microservice","text":"

The lib microservice is a simplified file manager providing a GraphQL API. It has three features:

  • provide a listing of directory contents.
  • transfer a file to the user.
  • source files can come either from the local file system or from a gitlab instance.

The library microservice is designed to manage and serve files, functions, and models to users, allowing them to access and interact with various resources.

This document provides instructions for running a standalone library microservice.

"},{"location":"admin/servers/lib/LIB-MS.html#setup-the-file-system","title":"Setup the File System","text":"

The users expect the following file system structure for their reusable assets.

There is a skeleton file structure in the DTaaS codebase. You can copy it to create the file system for your users.
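
For instance, a minimal sketch of copying the skeleton, assuming the files are to be served from /Users/<Username>/DTaaS/files (the path later used as LOCAL_PATH), is:

git clone https://github.com/INTO-CPS-Association/DTaaS.git\ncp -R DTaaS/files /Users/<Username>/DTaaS/files\n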

"},{"location":"admin/servers/lib/LIB-MS.html#gitlab-setup-optional","title":"Gitlab setup (optional)","text":"

For this microservice to be functional, a certain directory or gitlab project structure is expected. The microservice expects the gitlab instance to contain one group, DTaaS, and within that group all of the user projects (user1, user2, ...) as well as a commons project. Each project corresponds to the files of one user. A sample file structure can be seen in the gitlab dtaas group. You can visit the gitlab documentation on groups for help on the management of gitlab groups.

You can clone the git repositories from the dtaas group to get a sample file system structure for the lib microservice.
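
For example, assuming the sample projects are hosted under the dtaas group on gitlab.com, the user projects can be cloned with commands such as:

git clone https://gitlab.com/dtaas/user1.git\ngit clone https://gitlab.com/dtaas/user2.git\n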

"},{"location":"admin/servers/lib/LIB-MS.html#install","title":"Install","text":"

The package is available in Github packages registry.

Set the registry and install the package with the following commands

sudo npm config set @into-cps-association:registry https://npm.pkg.github.com\nsudo npm install -g @into-cps-association/libms\n

The npm install command asks for username and password. The username is your Github username and the password is your Github personal access token. In order for npm to download the package, your personal access token needs to have the read:packages scope.
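
As an alternative to the interactive prompt, a small sketch, assuming your personal access token is stored in the GITHUB_TOKEN environment variable, is to put the token in your npm configuration:

echo \"//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}\" >> ~/.npmrc\n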

"},{"location":"admin/servers/lib/LIB-MS.html#configure","title":"Configure","text":"

The microservice requires configuration specified in INI format. The template configuration file is:

PORT='4001'\nMODE='local' or 'gitlab'\nLOCAL_PATH='/Users/<Username>/DTaaS/files'\nGITLAB_GROUP='dtaas'\nGITLAB_URL='https://gitlab.com/api/graphql'\nTOKEN='123-sample-token'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib' or ''\nGRAPHQL_PLAYGROUND='false' or 'true'\n

The LOCAL_PATH variable is the absolute filepath to the location of the local directory which will be served to users by the Library microservice.

The GITLAB_URL, GITLAB_GROUP and TOKEN are only relevant for gitlab mode. The TOKEN should be set to your GitLab Group access API token. For more information on how to create and use your access token, see the gitlab page.

Once you've generated a token, copy it and replace the value of TOKEN with your token for the gitlab group.

Replace the default values with the appropriate values for your setup (an example local-mode configuration is given after the note below).

NOTE:

  1. When MODE=local, only LOCAL_PATH is used. Other environment variables are unused.
  2. When MODE=gitlab, GITLAB_URL, TOKEN, and GITLAB_GROUP are used; LOCAL_PATH is unused.
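
As an illustration, a possible .env for serving a local directory (local mode) could look like this; the values are examples only:

PORT='4001'\nMODE='local'\nLOCAL_PATH='/Users/<Username>/DTaaS/files'\nLOG_LEVEL='debug'\nAPOLLO_PATH='/lib'\nGRAPHQL_PLAYGROUND='true'\n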
"},{"location":"admin/servers/lib/LIB-MS.html#use","title":"Use","text":"

Display help.

libms -h\n

The config is saved in a .env file by convention. The libms looks for a .env file in the working directory from which it is run. If you want to run libms without explicitly specifying the configuration file, run

libms\n

To run libms with a custom config file,

libms -c FILE-PATH\nlibms --config FILE-PATH\n

If the environment file is named something other than .env, for example as .env.development, you can run

libms -c \".env.development\"\n

You can press Ctrl+C to halt the application. If you wish to run the microservice in the background, use

nohup libms [-c FILE-PATH] & disown\n

The lib microservice is now running and ready to serve files, functions, and models.

"},{"location":"admin/servers/lib/LIB-MS.html#service-endpoint","title":"Service Endpoint","text":"

The URL endpoint for this microservice is located at: localhost:PORT/lib
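
A quick way to confirm that the endpoint responds, assuming the default PORT=4001 and APOLLO_PATH='/lib' from the configuration above, is to send a minimal GraphQL query with curl:

curl -X POST -H \"Content-Type: application/json\" -d '{\"query\":\"{ __typename }\"}' http://localhost:4001/lib\n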

The service API documentation is available on the user page.

"},{"location":"admin/vagrant/base-box.html","title":"DTaaS Vagrant Box","text":"

This README provides instructions on creating a custom Operating System virtual disk for running the DTaaS software. The virtual disk is managed by vagrant. The purpose is twofold:

  • Provide cross-platform installation of the DTaaS application. Any operating system supporting use of vagrant software utility can support installation of the DTaaS software.
  • Create a ready to use development environment for code contributors.

There are two scripts in this directory:

  • user.sh - user installation (default)
  • developer.sh - developer installation

If you are installing the DTaaS for users, the default installation caters to your needs. You can skip the next step and continue with the creation of the vagrant box.

If you are a developer and would like additional software installed, you need to modify Vagrantfile. The existing Vagrantfile has two lines:

    config.vm.provision \"shell\", path: \"user.sh\"\n#config.vm.provision \"shell\", path: \"developer.sh\"\n

Uncomment the second line to have more software components installed. If you are not a developer, no changes are required to the Vagrantfile.

This vagrant box installed for users will have the following items:

  1. docker v24.0
  2. nodejs v18.8
  3. yarn v1.22
  4. npm v10.2
  5. containers - ml-workspace-minimal v0.13, traefik v2.10, gitlab-ce v16.4, influxdb v2.7, grafana v10.1, rabbitmq v3-management, eclipse-mosquitto (mqtt) v2, mongodb v7.0

The vagrant box installed for developers will have the following additional items:

  • docker-compose v2.20
  • microk8s v1.27
  • jupyterlab
  • mkdocs
  • container - telegraf v1.28

At the end of installation, the software stack created in vagrant box can be visualised as shown in the following figure.

The upcoming instructions will help with the creation of base vagrant box.

#create a key pair\nssh-keygen -b 4096 -t rsa -f key -q -N \"\"\nmv key vagrant\nmv key.pub vagrant.pub\n\nvagrant up\n\n# let the provisioning be complete\n# replace the vagrant ssh key-pair with personal one\nvagrant ssh\n\n# install the oh-my-zsh\nsh -c \"$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\"\n# install plugins: history, autosuggestions,\ngit clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions\n\n# inside ~/.zshrc, modify the following line\nplugins=(git zsh-autosuggestions history cp tmux)\n# remove the vagrant default public key - first line of\n# /home/vagrant/.ssh/authorized_keys\n# exit vagrant guest machine and then\n# copy own private key to vagrant private key location\ncp vagrant .vagrant/machines/default/virtualbox/private_key\n\n# check\nvagrant ssh #should work\nvagrant halt\n\nvagrant package --base dtaas \\\n--info \"info.json\" --output dtaas.vagrant\n\n# Add box to the vagrant cache in ~/.vagrant.d/boxes directory\nvagrant box add --name dtaas ./dtaas.vagrant\n\n# You can use this box in other vagrant boxes using\n#config.vm.box = \"dtaas\"\n
"},{"location":"admin/vagrant/base-box.html#references","title":"References","text":"

Image sources: Ubuntu logo

"},{"location":"admin/vagrant/single-machine.html","title":"DTaaS on Single Vagrant Machine","text":"

These are installation instructions for running DTaaS software inside one vagrant Virtual Machine. The setup requires a machine which can spare 16GB RAM, 8 vCPUs and 50GB Hard Disk space to the vagrant box.

"},{"location":"admin/vagrant/single-machine.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"

Create the dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant file is the private SSH key and is needed for the next steps. Copy the vagrant SSH private key into the current directory (deploy/vagrant/single-machine). This shall be useful for logging into the vagrant machine created for the single-machine deployment.

"},{"location":"admin/vagrant/single-machine.html#target-installation-setup","title":"Target Installation Setup","text":"

The goal is to use the dtaas Vagrant box to install the DTaaS software on one single vagrant machine. A graphical illustration of a successful installation can be seen here.

There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighted in blue color.

Tip

The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com. The gitlab setup is outside the scope of this installation guide. Please refer to gitlab docker install for gitlab installation.

"},{"location":"admin/vagrant/single-machine.html#configure-server-settings","title":"Configure Server Settings","text":"

A dummy foo.com URL has been used for illustration. Please change this to your unique website URL.

Please follow the next steps to make this installation work in your local environment.

Update the Vagrantfile. Fields to update are:

  1. Hostname (node.vm.hostname = \"foo.com\")
  2. MAC address (:mac => \"xxxxxxxx\"). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged.
  3. Other adjustments are optional.
"},{"location":"admin/vagrant/single-machine.html#installation-steps","title":"Installation Steps","text":"

Execute the following commands from terminal

vagrant up\nvagrant ssh\n

Set a cronjob inside the vagrant virtual machine to remove the conflicting default route.

wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS/feature/distributed-demo/deploy/vagrant/route.sh\nsudo bash route.sh\n

If you only want to test the application and are not setting up a production instance, you can follow the instructions of single script install.

If you are not in a hurry and would rather have a production instance, follow the instructions of regular server installation setup to complete the installation.

"},{"location":"admin/vagrant/single-machine.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"admin/vagrant/two-machines.html","title":"DTaaS on Two Vagrant Machines","text":"

These are installation instructions for running DTaaS application in two vagrant virtual machines (VMs). In this setup, all the user workspaces shall be run on server1 while all the platform services will be run on server2.

The setup requires two server VMs with the following hardware configuration:

server1: 16GB RAM, 8 x64 vCPUs and 50GB Hard Disk space

server2: 6GB RAM, 3 x64 vCPUs and 50GB Hard Disk space

Under the default configuration, two user workspaces are provisioned on server1. The default installation setup also installs InfluxDB, Grafana, RabbitMQ and MQTT services on server2. If you would like to install more services, you can create shell scripts to install the same on server2.

"},{"location":"admin/vagrant/two-machines.html#create-base-vagrant-box","title":"Create Base Vagrant Box","text":"

Create dtaas Vagrant box. You would have created an SSH key pair - vagrant and vagrant.pub. The vagrant is the private SSH key and is needed for the next steps. Copy vagrant SSH private key into the current directory (deploy/vagrant/two-machine). This shall be useful for logging into the vagrant machines created for two-machine deployment.

"},{"location":"admin/vagrant/two-machines.html#target-installation-setup","title":"Target Installation Setup","text":"

The goal is to use this dtaas vagrant box to install the DTaaS software on server1 and the default platform services on server2. Both the servers are vagrant machines.

There are many unused software packages/docker containers within the dtaas base box. The used packages/docker containers are highlighted in blue and red color.

A graphical illustration of a successful installation can be seen here.

In this case, both the vagrant boxes are spawned on one server using two vagrant configuration files, namely boxes.json and Vagrantfile.

Tip

The illustration shows hosting of gitlab on the same vagrant machine with http(s)://gitlab.foo.com. The gitlab setup is outside the scope of this installation guide. Please refer to gitlab docker install for gitlab installation.

"},{"location":"admin/vagrant/two-machines.html#configure-server-settings","title":"Configure Server Settings","text":"

NOTE: Dummy foo.com and services.foo.com URLs have been used for illustration. Please change these to your unique website URLs.

The first step is to define the network identity of the two VMs. For that, you need server name, hostname and MAC address. The hostname is the network URL at which the server can be accessed on the web. Please follow these steps to make this work in your local environment.

Update the boxes.json. There are entries one for each server. The fields to update are:

  1. name - name of server1 (\"name\" = \"dtaas\")
  2. hostname - hostname of server1 (\"hostname\" = \"foo.com\")
  3. MAC address (:mac => \"xxxxxxxx\"). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged.
  4. name - name of server2 (\"name\" = \"services\")
  5. hostname - hostname of server2 (\"hostname\" = \"services.foo.com\")
  6. MAC address (:mac => \"xxxxxxxx\"). This change is required if you have a DHCP server assigning domain names based on MAC address. Otherwise, you can leave this field unchanged.
  7. Other adjustments are optional.
"},{"location":"admin/vagrant/two-machines.html#installation-steps","title":"Installation Steps","text":"

The installation instructions are given separately for each vagrant machine.

"},{"location":"admin/vagrant/two-machines.html#launch-dtaas-platform-default-services","title":"Launch DTaaS Platform Default Services","text":"

Follow the installation guide for services to install the DTaaS platform services.

After the services are up and running, you can see the following services active within server2 (services.foo.com).

  • InfluxDB database - services.foo.com
  • Grafana visualization service - services.foo.com:3000
  • MQTT Broker - services.foo.com:1883
  • RabbitMQ Broker - services.foo.com:5672
  • RabbitMQ Broker management website - services.foo.com:15672
  • MongoDB database - services.foo.com:27017
"},{"location":"admin/vagrant/two-machines.html#install-dtaas-application","title":"Install DTaaS Application","text":"

Execute the following commands from terminal

vagrant up --provision dtaas\nvagrant ssh dtaas\nwget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS/feature/distributed-demo/deploy/vagrant/route.sh\nsudo bash route.sh\n

If you only want to test the application and are not setting up a production instance, you can follow the instructions of single script install.

If you are not in a hurry and would rather have a production instance, follow the instructions of regular server installation setup to complete the installation.

"},{"location":"admin/vagrant/two-machines.html#references","title":"References","text":"

Image sources: Ubuntu logo, Traefik logo, ml-workspace, nodejs, reactjs, nestjs

"},{"location":"developer/index.html","title":"Developers Guide","text":"

This guide is to help developers get familiar with the project. Please see developer-specific Slides, Video, and Research paper.

"},{"location":"developer/index.html#development-environment","title":"Development Environment","text":"

Ideally, developers should work on Ubuntu/Linux. Other operating systems are not supported inherently and may require additional steps.

To start with, install the required software and git-hooks.

bash script/env.sh\nbash script/configure-git-hooks.sh\n

The git-hooks will ensure that your commits are formatted correctly and that the tests pass before you push the commits to remote repositories.

Be aware that the tests may take a long time to run. If you want to skip the tests or formatting, you can use the --no-verify flag on git commit or git push. Please use this option with care.
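
For example, to skip the hooks for a single commit or push:

git commit --no-verify -m \"your commit message\"\ngit push --no-verify\n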

There is a script to download all the docker containers used in the project. You can download them using

bash script/docker.sh\n

The docker images are large and are likely to consume about 5GB of bandwidth and 15GB of space. You will have to download the docker images on a really good network.

"},{"location":"developer/index.html#development-workflow","title":"Development Workflow","text":"

To manage collaboration by multiple developers on the software, a development workflow is in place. Each developer should follow these steps:

  1. Fork the main repository into your github account.
  2. Setup Code Climate and Codecov for your fork. Codecov does not require a secret token for public repositories.
  3. Install git-hooks for the project.
  4. Use the Fork, Branch, PR workflow (see the example git commands after this list).
  5. Work in your fork and open a PR from your working branch to your feature/distributed-demo branch. The PR will run all the github actions, code climate and codecov checks.
  6. Resolve all the issues identified in the previous step.
  7. If you have access to the integration server, try your working branch on the integration server.
  8. Once changes are verified, a PR should be made to the feature/distributed-demo branch of the upstream DTaaS repository.
  9. The PR will be merged after checks by either the project administrators or the maintainers.
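
A sketch of the Fork, Branch, PR workflow in terms of git commands, assuming your fork lives at github.com/<username>/DTaaS, is:

git clone https://github.com/<username>/DTaaS.git\ncd DTaaS\ngit checkout feature/distributed-demo\ngit checkout -b <your-working-branch>\n# make and commit your changes, then push the branch to your fork\ngit push origin <your-working-branch>\n# open a PR from <your-working-branch> to feature/distributed-demo on your fork\n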

Remember that every PR should be meaningful and should either satisfy a well-defined user story or improve the code quality.

"},{"location":"developer/index.html#code-quality","title":"Code Quality","text":"

The project code qualities are measured based on:

  • Linting issues identified by Code Climate
  • Test coverage report collected by Codecov
  • Successful github actions
"},{"location":"developer/index.html#code-climate","title":"Code Climate","text":"

Code Climate performs static analysis, linting and style checks. Quality checks are performed by Code Climate to ensure the best possible quality of the code added to our project.

While any new issues introduced in your code would be shown in the PR page itself, to address any specific issue, you can visit the issues or code section of the codeclimate page.

It is highly recommended that any code you add does not introduce new quality issues. If they are introduced, they should be fixed immediately using the appropriate suggestions from Code Climate, or, in the worst case, by adding an ignore flag (to be used with caution).

"},{"location":"developer/index.html#codecov","title":"Codecov","text":"

Codecov keeps track of the test coverage for the entire project. For information about testing and workflow related to that, please see the testing page.

"},{"location":"developer/index.html#github-actions","title":"Github Actions","text":"

The project has multiple github actions defined. All PRs and direct code commits must have successful status on github actions.

"},{"location":"developer/npm-packages.html","title":"Publish NPM packages","text":"

The DTaaS software is developed as a monorepo with multiple npm packages. Since publishing to npmjs is irrevocable and public, developers are encouraged to setup their own private npm registry for local development.

A private npm registry will help with local publish and unpublish steps.

"},{"location":"developer/npm-packages.html#setup-private-npm-registry","title":"Setup private npm registry","text":"

We recommend using verdaccio for this task. The following commands help you create a working private npm registry for development.

docker run -d --name verdaccio -p 4873:4873 verdaccio/verdaccio\nnpm adduser --registry http://localhost:4873 #create a user on the verdaccio registry\nnpm set registry http://localhost:4873/\nyarn config set registry \"http://localhost:4873\"\nyarn login --registry \"http://localhost:4873\" #login with the credentials for yarn utility\nnpm login #login with the credentials for npm utility\n

You can open http://localhost:4873 in your browser, login with the user credentials to see the packages published.

"},{"location":"developer/npm-packages.html#publish-to-private-npm-registry","title":"Publish to private npm registry","text":"

To publish a package to your local registry, do:

yarn install\nyarn build #the dist/ directory is needed for publishing step\nyarn publish --no-git-tag-version #increments version in package.json, publishes to registry\nyarn publish #increments version in package.json, publishes to registry and adds a git tag\n

The package version in package.json gets updated as well. You can open http://localhost:4873 in your browser, login with the user credentials to see the packages published. Please see verdaccio docs for more information.

If there is a need to unpublish a package, ex: @dtaas/runner@0.0.2, do:

npm unpublish  --registry http://localhost:4873/ @dtaas/runner@0.0.2\n

To install / uninstall this utility for all users, do:

sudo npm install  --registry http://localhost:4873 -g @dtaas/runner\nsudo npm list -g # should list @dtaas/runner in the packages\nsudo npm remove --global @dtaas/runner\n
"},{"location":"developer/npm-packages.html#use-the-packages","title":"Use the packages","text":"

The packages available in private npm registry can be used like the regular npm packages installed from npmjs.

For example, to use @dtaas/runner@0.0.2 package, do:

sudo npm install  --registry http://localhost:4873 -g @dtaas/runner\nrunner # launch the digital twin runner\n
"},{"location":"developer/client/client.html","title":"React Website","text":"

The Website is how the end-users interact with the software platform. The website is being developed as a React single page web application.

A dependency graph for the entire codebase of the react application is:

"},{"location":"developer/client/client.html#dependency-graphs","title":"Dependency Graphs","text":"

The figures are the dependency graphs generated from the code.

"},{"location":"developer/client/client.html#src-directory","title":"src directory","text":""},{"location":"developer/client/client.html#test-directory","title":"test directory","text":""},{"location":"developer/servers/lib/lib-ms.html","title":"Library Microservice","text":"

The Library Microservice provides users with access to files in user workspaces via an API. This microservice will interface with the local file system and Gitlab to provide uniform Gitlab-compliant API access to files.

Warning

This microservice is still under heavy development. It is not yet a good replacement for the file server we are using now.

"},{"location":"developer/servers/lib/lib-ms.html#architecture-and-design","title":"Architecture and Design","text":"

The C4 level 2 diagram of this microservice is:

The GraphQL API provided by the library microservice shall be compliant with the Gitlab GraphQL service.

"},{"location":"developer/servers/lib/lib-ms.html#uml-diagrams","title":"UML Diagrams","text":""},{"location":"developer/servers/lib/lib-ms.html#class-diagram","title":"Class Diagram","text":"
classDiagram\n    class FilesResolver {\n    -filesService: IFilesService\n    +listDirectory(path: string): Promise<Project>\n    +readFile(path: string): Promise<Project>\n    }\n\n    class FilesServiceFactory {\n    -configService: ConfigService\n    -gitlabFilesService: GitlabFilesService\n    -localFilesService: LocalFilesService\n    +create(): IFilesService\n    }\n\n    class GitlabFilesService {\n    -configService: ConfigService\n    -parseArguments(path: string): Promise<domain: string; parsedPath: string>\n    -sendRequest(query: string): Promise<Project>\n    -executeQuery(path: string, getQuery: QueryFunction): Promise<Project>\n    +listDirectory(path: string): Promise<Project>\n    +readFile(path: string): Promise<Project>\n    }\n\n    class LocalFilesService {\n    -configService: ConfigService\n    -getFileStats(fullPath: string, file: string): Promise<Project>\n    +listDirectory(path: string): Promise<Project>\n    +readFile(path: string): Promise<Project>\n    }\n\n    class ConfigService {\n    +get(propertyPath: string): any\n    }\n\n    class IFilesService{\n    listDirectory(path: string): Promise<Project>\n    readFile(path: string): Promise<Project>\n    }\n\n    IFilesService <|-- FilesResolver: uses\n    IFilesService <|.. GitlabFilesService: implements\n    IFilesService <|.. LocalFilesService: implements\n    IFilesService <|-- FilesServiceFactory: creates\n    ConfigService <|-- FilesServiceFactory: uses\n    ConfigService <|-- GitlabFilesService: uses\n    ConfigService <|-- LocalFilesService: uses
"},{"location":"developer/servers/lib/lib-ms.html#sequence-diagram","title":"Sequence Diagram","text":"
sequenceDiagram\n    actor Client\n    actor Traefik\n\n    box LightGreen Library Microservice\n    participant FR as FilesResolver\n    participant FSF as FilesServiceFactory\n    participant CS as ConfigService\n    participant IFS as IFilesService\n    participant LFS as LocalFilesService\n    participant GFS as GitlabFilesService\n    end\n\n    participant FS as Local File System DB\n    participant GAPI as GitLab API DB\n\n    Client ->> Traefik : HTTP request\n    Traefik ->> FR : GraphQL query\n    activate FR\n\n    FR ->> FSF : create()\n    activate FSF\n\n    FSF ->> CS : getConfiguration(\"MODE\")\n    activate CS\n\n    CS -->> FSF : return configuration value\n    deactivate CS\n\n    alt MODE = Local\n    FSF ->> FR : return filesService (LFS)\n    deactivate FSF\n\n    FR ->> IFS : listDirectory(path) or readFile(path)\n    activate IFS\n\n    IFS ->> LFS : listDirectory(path) or readFile(path)\n    activate LFS\n\n    LFS ->> CS : getConfiguration(\"LOCAL_PATH\")\n    activate CS\n\n    CS -->> LFS : return local path\n    deactivate CS\n\n    LFS ->> FS : Access filesystem\n    alt Filesystem error\n        FS -->> LFS : Filesystem error\n        LFS ->> LFS : Throw new InternalServerErrorException\n        LFS -->> IFS : Error\n    else Successful file operation\n        FS -->> LFS : Return filesystem data\n        LFS ->> IFS : return Promise<Project>\n    end\n    deactivate LFS\n    else MODE = GitLab\n        FSF ->> FR : return filesService (GFS)\n        %%deactivate FSF\n\n    FR ->> IFS : listDirectory(path) or readFile(path)\n    activate IFS\n\n    IFS ->> GFS : listDirectory(path) or readFile(path)\n    activate GFS\n\n    GFS ->> GFS : parseArguments(path)\n    GFS ->> GFS : executeQuery()\n\n    GFS ->> CS : getConfiguration(\"GITLAB_API_URL\", \"GITLAB_TOKEN\")\n    activate CS\n\n    CS -->> GFS : return GitLab API URL and Token\n    deactivate CS\n\n    GFS ->> GAPI : sendRequest()\n    alt GitLab API error\n        GAPI -->> GFS : API error\n        GFS ->> GFS : Throw new Error(\"Invalid query\")\n        GFS -->> IFS : Error\n    else Successful GitLab API operation\n        GAPI -->> GFS : Return API response\n        GFS ->> IFS : return Promise<Project>\n    end\n    deactivate GFS\n    end\n\n    alt Error thrown\n    IFS ->> FR : return Error\n    deactivate IFS\n    FR ->> Traefik : return Error\n    Traefik ->> Client : HTTP error response\n    else Successful operation\n    IFS ->> FR : return Promise<Project>\n    deactivate IFS\n    FR ->> Traefik : return Promise<Project>\n    Traefik ->> Client : HTTP response\n    end\n\n    deactivate FR\n
"},{"location":"developer/servers/lib/lib-ms.html#dependency-graphs","title":"Dependency Graphs","text":"

The figures are the dependency graphs generated from the code.

"},{"location":"developer/servers/lib/lib-ms.html#src-directory","title":"src directory","text":""},{"location":"developer/servers/lib/lib-ms.html#test-directory","title":"test directory","text":""},{"location":"developer/system/architecture.html","title":"System Overview","text":""},{"location":"developer/system/architecture.html#user-requirements","title":"User Requirements","text":"

The DTaaS software platform users expect a single platform to support the complete DT lifecycle. To be more precise, the platform users expect the following features:

  1. Author \u2013 create different assets of the DT on the platform itself. This step requires use of some software frameworks and tools whose sole purpose is to author DT assets.
  2. Consolidate \u2013 consolidate the list of available DT assets and authoring tools so that user can navigate the library of reusable assets. This functionality requires support for discovery of available assets.
  3. Configure \u2013 support selection and configuration of DTs. This functionality also requires support for validation of a given configuration.
  4. Execute \u2013 provision computing infrastructure on demand to support execution of a DT.
  5. Explore \u2013 interact with a DT and explore the results stored both inside and outside the platform. Exploration may lead to analytical insights.
  6. Save \u2013 save the state of a DT that\u2019s already in the execution phase. This functionality is required for on demand saving and re-spawning of DTs.
  7. What-if analysis \u2013 explore alternative scenarios to (i) plan for an optimal next step, (ii) recalibrate new DT assets, (iii) automated creation of new DTs or their assets; these newly created DT assets may be used to perform scientifically valid experiments.
  8. Share \u2013 share a DT with other users of their organisation.
"},{"location":"developer/system/architecture.html#system-architecture","title":"System Architecture","text":"

The figure shows the system architecture of the DTaaS software platform.

"},{"location":"developer/system/architecture.html#system-components","title":"System Components","text":"

The users interact with the software platform using a website. The gateway is a single point of entry for direct access to the platform services. The gateway is responsible for controlling user access to the microservice components. The service mesh enables discovery of microservices, load balancing and authentication functionalities.

In addition, there are microservices for catering to author, store, explore, configure, execute and scenario analysis requirements. The microservices are complementary and composable; they fulfil core requirements of the system.

The microservices responsible for satisfying the user requirements are:

  1. The security microservice implements role-based access control (RBAC) in the platform.
  2. The accounting microservice is responsible for keeping track of the platform, DT asset and infrastructure usage. Any licensing, usage restrictions need to be enforced by the accounting microservice. Accounting is a pre-requisite to commercialisation of the platform. Due to significant use of external infrastructure and resources via the platform, the accounting microservice needs to interface with accounting systems of the external services.

  3. The data microservice is a frontend to all the databases integrated into the platform. A time-series database and a graph database are essential. These two databases store timeseries data from PT, events on PT/DT, and commands sent by DT to PT. The PTs use these databases even when their respective DTs are not in the execute phase.

  4. The visualisation microservice is again a frontend to visualisation software that is natively supported inside the platform. Any visualisation software running either on external systems or on client browsers does not need to interact with this microservice. It can directly use the data provided by the data microservice.
"},{"location":"developer/system/architecture.html#c4-architectural-diagrams","title":"C4 Architectural Diagrams","text":"

The C4 architectural diagrams of the DTaaS software are presented here.

"},{"location":"developer/system/architecture.html#level-1","title":"Level 1","text":"

This Level 1 diagram only shows the users and the roles they play in the DTaaS software.

"},{"location":"developer/system/architecture.html#level-2","title":"Level 2","text":"

This simplified version of Level 2 diagram shows the software containers of the DTaaS software.

If you are interested, please take a look at the detailed diagram.

Please note that the given diagram only covers DT Lifecycle, Reusable Assets and Execution Manager.

"},{"location":"developer/system/architecture.html#mapping","title":"Mapping","text":"

A mapping of the C4 level 2 containers to components identified in the system architecture is also available in the table.

  • Gateway - Traefik Gateway
  • Unified Interface - React Webapplication
  • Reusable Assets - Library Microservice
  • Data - MQTT, InfluxDB, and RabbitMQ (not shown in the C4 Level 2 diagram)
  • Visualization - InfluxDB (not shown in the C4 Level 2 diagram)
  • DT Lifecycle - DT Lifecycle Manager and DT Configuration Validator
  • Security - Gitlab OAuth
  • Accounting - None
  • Execution Manager - Execution Manager
"},{"location":"developer/system/current-status.html","title":"Current Status","text":"

The DTaaS software platform is currently under development. Crucial system components are in place with ongoing development work focusing on increased automation and feature enhancement. The figure below shows the current status of the development work.

"},{"location":"developer/system/current-status.html#user-security","title":"User Security","text":"

There are authentication mechanisms in place for the react website and the Traefik gateway.

The react website component uses Gitlab for user authentication using OAuth protocol.

"},{"location":"developer/system/current-status.html#gateway-authentication","title":"Gateway Authentication","text":"

The Traefik gateway has HTTP basic authentication enabled by default. This authentication on top of HTTPS connection can provide a good protection against unauthorized use.

Warning

Please note that HTTP basic authentication over an insecure non-TLS connection is insecure.

There is also a possibility of using self-signed mTLS certificates. The current security functionality is based on signed Transport Layer Security (TLS) certificates issued to users. The TLS certificate based mutual TLS (mTLS) authentication protocol provides better security than the usual username and password combination. The mTLS authentication takes place between the user's browser and the platform gateway. The gateway federates all the backend services. The service discovery, load balancing, and health checks are carried out by the gateway based on a dynamic reconfiguration mechanism.

Note

The mTLS is not enabled in the default install. Please use the scripts in ssl/ directory to generate the required certificates for users and Traefik gateway.

"},{"location":"developer/system/current-status.html#user-workspaces","title":"User Workspaces","text":"

All users have dedicated dockerized workspaces. These docker images are based on container images published by the mltooling group.

Thus DT experts can develop DTs from existing DT components and share them with other users. A file server has been setup to act as a DT asset repository. Each user gets space to store private DT assets and also gets access to shared DT assets. Users can synchronize their private DT assets with external git repositories. In addition, the asset repository transparently gets mapped to user workspaces within which users can perform DT lifecycle operations. There is also a library microservice which in the long-run will replace the file server.

Users can run DTs in their workspaces and also permit remote access to other users. There is already shared access to internal and external services. With these two provisions, users can treat live DTs as service components in their own software systems.

"},{"location":"developer/system/current-status.html#platform-services","title":"Platform Services","text":"

There are four external services integrated with the DTaaS software platform. They are: InfluxDB, Grafana, RabbitMQ and MQTT.

These services can be used by DTs and PTs for communication, storing and visualization of data. There can also be monitoring services setup based on these services.

"},{"location":"developer/system/current-status.html#development-priorities","title":"Development Priorities","text":"

The development priorities for the DTaaS software development team are:

  • DT Runner (API Interface to DT)
  • Multi-user and microservice security
  • Increased automation of installation procedures
  • DT Configuration DSL in the form of a YAML schema
  • UI for DT creation
  • DT examples

Your contributions and collaboration are highly welcome.

"},{"location":"developer/testing/intro.html","title":"Testing","text":""},{"location":"developer/testing/intro.html#common-questions-on-testing","title":"Common Questions on Testing","text":""},{"location":"developer/testing/intro.html#what-is-software-testing","title":"What is Software Testing","text":"

Software testing is a procedure to investigate the quality of a software product in different scenarios. It can also be stated as the process of verifying and validating that a software program or application works as expected and meets the business and technical requirements that guided design and development.

"},{"location":"developer/testing/intro.html#why-software-testing","title":"Why Software Testing","text":"

Software testing is required to point out the defects and errors that were made during different development phases. Software testing also ensures that the product under test works as expected in all different cases \u2013 stronger the test suite, stronger is our confidence in the product that we have built. One important benefit of software testing is that it facilitates the developers to make incremental changes to source code and make sure that the current changes are not breaking the functionality of the previously existing code.

"},{"location":"developer/testing/intro.html#what-is-tdd","title":"What is TDD","text":"

TDD stands for Test Driven Development. It is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, and finally refactors the new code to acceptable standards. The goal of TDD can be viewed as specification and not validation. In other words, it\u2019s one way to think through your requirements or design before you write your functional code.

"},{"location":"developer/testing/intro.html#what-is-bdd","title":"What is BDD","text":"

BDD stands for \u201cBehaviour Driven Development\u201d. It is a software development process that emerged from TDD. It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. This provides software development and management teams with shared tools and a shared process to collaborate on software development. BDD is largely facilitated through the use of a simple domain-specific language (DSL) using natural language constructs (e.g., English-like sentences) that can express the behavior and the expected outcomes. Mocha and Cucumber testing libraries are built around the concepts of BDD.

"},{"location":"developer/testing/intro.html#testing-workflow","title":"Testing workflow","text":"

(Ref: Ham Vocke, The Practical Test Pyramid)

We follow a testing workflow in accordance with the test pyramid diagram given above, starting with isolated tests and moving towards complete integration for any new feature changes. The different types of tests (in the order that they should be performed) are explained below:

"},{"location":"developer/testing/intro.html#unit-tests","title":"Unit Tests","text":"

Unit testing is a level of software testing where individual units/ components of a software are tested. The objective of Unit Testing is to isolate a section of code and verify its correctness.

Ideally, each test case is independent from the others. Substitutes such as method stubs, mock objects, and spies can be used to assist testing a module in isolation.

"},{"location":"developer/testing/intro.html#benefits-of-unit-testing","title":"Benefits of Unit Testing","text":"
  • Unit testing increases confidence in changing/ maintaining code. If good unit tests are written and if they are run every time any code is changed, we will be able to promptly catch any defects introduced due to the change.
  • If codes are already made less interdependent to make unit testing possible, the unintended impact of changes to any code is less.
  • The cost, in terms of time, effort and money, of fixing a defect detected during unit testing is lesser in comparison to that of defects detected at higher levels.
"},{"location":"developer/testing/intro.html#unit-tests-in-dtaas","title":"Unit Tests in DTaaS","text":"

Each component of the DTaaS project uses a unique technology stack. Thus the packages used for unit tests are different. Please check the test/ directory of a component to figure out the unit test packages used.

"},{"location":"developer/testing/intro.html#integration-tests","title":"Integration tests","text":"

Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. In DTaaS, we use an integration server for software development as well as such tests.

The existing integration tests are done at the component level. There are no integration tests between the components. This task has been postponed to future.

"},{"location":"developer/testing/intro.html#end-to-end-tests","title":"End-to-End tests","text":"

Testing any code changes through the end user interface of your software is essential to verify if your code has the desired effect for the user. End-to-End tests in DTaaS require a functional setup. For more information visit here.

There are no end-to-end tests in the DTaaS yet. This task has been postponed to the future.

"},{"location":"developer/testing/intro.html#feature-tests","title":"Feature Tests","text":"

A software feature can be defined as a change made in the system to add new functionality or modify existing functionality. Each feature is designed to be useful, intuitive and effective. It is important to test a new feature when it has been added. We also need to make sure that it does not break the functionality of already existing features. Hence feature tests prove to be useful.

The DTaaS project does not have any feature tests yet. Cucumber shall be used in future to implement feature tests.

"},{"location":"developer/testing/intro.html#references","title":"References","text":"

Justin Searls and Kevin Buchanan, Contributing Tests wiki. This wiki has a good explanation of TDD and test doubles.

"},{"location":"user/features.html","title":"Overview","text":""},{"location":"user/features.html#advantages","title":"Advantages","text":"

The DTaaS software platform provides certain advantages to users:

  • Support for different kinds of Digital Twins
  • CFD, Simulink, co-simulation, FEM, ROM, ML etc.
  • Integrates with other Digital Twin frameworks
  • Facilitate availability of Digital Twin as a Service
  • Collaboration and reuse
  • Private workspaces for verification of reusable assets, trial run DTs
  • Cost effectiveness
"},{"location":"user/features.html#software-features","title":"Software Features","text":"

Each installation of DTaaS platform comes with the features highlighted in the following picture.

All the users have dedicated workspaces. These workspaces are dockerized versions of Linux Desktops. The user desktops are isolated so the installations and customizations done in one user workspace do not affect the other user workspaces.

Each user workspace comes with some development tools pre-installed. These tools are directly accessible from web browser. The following tools are available at present:

  • Jupyter Lab - provides flexible creation and use of digital twins and their components from the web browser. All the native Jupyterlab usecases are supported here.
  • Jupyter Notebook - useful for web-based management of their files (library assets)
  • VS Code in the browser - a popular IDE for software development. Users can develop their digital twin-related assets here.
  • ungit - an interactive git client. Users can work with git repositories from the web browser

In addition, users have access to an xfce-based remote desktop via a VNC client. The VNC client is available right in the web browser. Desktop software supported by xfce can also be run in the workspace.

The DTaaS software platform has some pre-installed services available. The currently available services are:

  • InfluxDB - time-series database, primarily for storing time-series data from physical twins. The digital twins can use already existing data. Users can also create visualization dashboards for their digital twins.
  • RabbitMQ - communication broker for communication between physical and digital twins
  • Grafana - visualization dashboards for digital twins
  • MQTT - lightweight data transfer broker for IoT devices / physical twins feeding data into digital twins

In addition, the workspaces are connected to the Internet so all the Digital Twins running in the workspace can interact with both the internal and external services.

The users can publish and reuse the digital twin assets available on the platform. In addition, users can run their digital twins and make these live digital twins available as services to their clients. The clients need not be users of the DTaaS software installation.

"},{"location":"user/motivation.html","title":"Motivation","text":"

How can DT software platforms enable users collaborate to:

  • Build digital twins (DTs)
  • Use DTs themselves
  • Share DTs with other users
  • Provide the existing DTs as Service to other users

In addition, how can the DT software platforms:

  • Support DT lifecycle
  • Scale up rather than scale down (flexible convention over configuration)
"},{"location":"user/motivation.html#existing-approaches","title":"Existing Approaches","text":"

There are quite a few solutions proposed in the recent past to solve this problem. Some of them are:

  • Focus on data from Physical Twins (PTs) to perform analysis, diagnosis, planning etc\u2026
  • Share DT assets across the upstream, downstream etc\u2026.
  • Evaluate different models of PT
  • DevOps for Cyber Physical Systems (CPS)
  • Scale DT / execution of DT / ensemble of related DTs
  • Support for PT product lifecycle
"},{"location":"user/motivation.html#our-approach","title":"Our Approach","text":"
  • Support for transition from existing workflows to DT frameworks
  • Create DTs from reusable assets
  • Enable users to share DT assets
  • Offer DTs as a Service
  • Integrate the DTs with external software systems
  • Separate configurations of independent DT components
"},{"location":"user/digital-twins/create.html","title":"Create a Digital Twin","text":"

The first step in digital twin creation is to use the available assets in your workspace. If you have assets / files in your computer that need to be available in the DTaaS workspace, then please follow the instructions provided in library assets.

There are dependencies among the library assets. These dependencies are shown below.

A digital twin can only be created by linking the assets in a meaningful way. This relationship can be expressed using a mathematical equation:

DT = C_DT(D*, M*, T+, F*)

where D denotes data, M denotes models, F denotes functions, T denotes tools, C_DT denotes the DT configuration and DT is a symbolic notation for a digital twin itself. The expression C_DT(D*, M*, T+, F*) denotes composition of a DT from the D, M, T and F assets. The asterisk (*) indicates zero or more instances of an asset and the plus (+) indicates one or more instances of an asset.

The DT configuration specifies the relevant assets to use, the potential parameters to be set for these assets. If a DT needs to use RabbitMQ, InfluxDB like services supported by the platform, the DT configuration needs to have access credentials for these services.

This kind of generic DT definition is based on the DT examples seen in the wild. You are at liberty to deviate from this definition of DT. The only requirement is the ability to run the DT from either commandline or desktop.

Tip

If you are stepping into the world of Digital Twins, you might not have distinct digital twin assets. You are likely to have one directory of everything in which you run your digital twin. In such a case we recommend that you upload this monolithic digital twin into digital_twin/your_digital_twin_name directory.

"},{"location":"user/digital-twins/create.html#example","title":"Example","text":"

The Examples repository contains a co-simulation setup for mass spring damper. This example illustrates the potential of using co-simulation for digital twins.

The file system contents for this example are:

workspace/\n  data/\n    mass-spring-damper\n        input/\n        output/\n\n  digital_twins/\n    mass-spring-damper/\n      cosim.json\n      time.json\n      lifecycle/\n        analyze\n        clean\n        evolve\n        execute\n        save\n        terminate\n      README.md\n\n  functions/\n  models/\n    MassSpringDamper1.fmu\n    MassSpringDamper2.fmu\n\n  tools/\n  common/\n    data/\n    functions/\n    models/\n    tools/\n        maestro-2.3.0-jar-with-dependencies.jar\n

The workspace/data/mass-spring-damper/ contains input and output data for the mass-spring-damper digital twin.

The two FMU models needed for this digital twin are in models/ directory.

The co-simulation digital twin needs Maestro co-simulation orchestrator. Since this is a reusable asset for all the co-simulation based DTs, the tool has been placed in common/tools/ directory.

The actual digital twin configuration is specified in the digital_twins/mass-spring-damper directory. The co-simulation configuration is specified in two json files, namely cosim.json and time.json. A small explanation of the digital twin for its users can be placed in digital_twins/mass-spring-damper/README.md.

The launch program for this digital twin is in digital_twins/mass-spring-damper/lifecycle/execute. This launch program runs the co-simulation digital twin. The co-simulation runs till completion and then ends. The programs in digital_twins/mass-spring-damper/lifecycle are responsible for lifecycle management of this digital twin. The lifecycle page provides more explanation on these programs.

Execution of a Digital Twin

A frequent question arises about the run-time characteristics of a digital twin. The natural intuition is to say that a digital twin must operate as long as its physical twin is in operation. If a digital twin runs for a finite time and then ends, can it be called a digital twin? The answer is a resounding YES. The Industry 4.0 use cases seen among SMEs have digital twins that run for a finite time. These digital twins are often run at the discretion of the user.

You can run this digital twin as follows:

  1. Go to Workbench tools page of the DTaaS website and open VNC Desktop. This opens a new tab in your browser
  2. A page with VNC Desktop and a connect button comes up. Click on Connect. You are now connected to the Linux Desktop of your workspace.
  3. Open a Terminal (black rectangular icon in the top left region of your tab) and type the following commands.
  4. Download the example files by following the instructions given on examples overview.

  5. Go to the digital twin directory and run

cd /workspace/examples/digital_twins/mass-spring-damper\nlifecycle/execute\n

The last command executes the mass-spring-damper digital twin and stores the co-simulation output in data/mass-spring-damper/output.

"},{"location":"user/digital-twins/lifecycle.html","title":"Digital Twin Lifecycle","text":"

Physical products in the real world have a product lifecycle. A simplified four-stage product lifecycle is illustrated here.

A digital twin tracking a physical product (twin) needs to track and evolve in conjunction with the corresponding physical twin.

The possible activities undertaken in each lifecycle phase are illustrated in the figure.

(Ref: Minerva, R, Lee, GM and Crespi, N (2020) Digital Twin in the IoT context: a survey on technical features, scenarios and architectural models. Proceedings of the IEEE, 108 (10). pp. 1785-1824. ISSN 0018-9219.)

"},{"location":"user/digital-twins/lifecycle.html#lifecycle-phases","title":"Lifecycle Phases","text":"

The four phase lifecycle has been extended to a lifecycle with eight phases. The new phase names and the typical activities undertaken in each phase are outlined in this section.

A DT lifecycle consists of explore, create, execute, save, analyse, evolve and terminate phases.

The phases and their main activities are:

  • explore: selection of suitable assets based on the user needs and checking their compatibility for the purposes of creating a DT.
  • create: specification of the DT configuration. If the DT already exists, there is no creation phase at the time of reuse.
  • execute: automated / manual execution of a DT based on its configuration. The DT configuration must be checked before starting the execution phase.
  • analyse: checking the outputs of a DT and making a decision. The outputs can be text files or visual dashboards.
  • evolve: reconfigure the DT, primarily based on analysis.
  • save: saving the state of the DT to enable future recovery.
  • terminate: stop the execution of the DT.

A digital twin faithfully tracking the physical twin lifecycle will have to support all of these phases. It is also possible for digital twin engineers to add more phases to the digital twins they are developing. Thus the DTaaS software platform needs to accommodate the needs of different DTs.

A potential linear representation of the tasks undertaken in a digital twin lifecycle is shown here.

Again, this is only one possible pathway. Users are at liberty to alter the sequence of steps.

It is possible to map the lifecycle phases identified so far with the Build-Use-Share approach of the DTaaS software platform.

Even though it is not mandatory, having a matching directory structure makes it easy for users to create and manage their DTs within the DTaaS. It is recommended to have the following structure:

workspace/\n  digital_twins/\n    digital-twin-1/\n      lifecycle/\n        analyze\n        clean\n        evolve\n        execute\n        save\n        terminate\n

A dedicated program exists for each phase of DT lifecycle. Each program can be as simple as a script that launches other programs or sends messages to a live digital twin.

"},{"location":"user/digital-twins/lifecycle.html#example-lifecycle-scripts","title":"Example Lifecycle Scripts","text":"

Here are the example programs / scripts to manage three phases in the lifecycle of the mass-spring-damper DT.

lifecycle/execute
#!/bin/bash\nmkdir -p /workspace/data/mass-spring-damper/output\n#cd ..\njava -jar /workspace/common/tools/maestro-2.3.0-jar-with-dependencies.jar \\\nimport -output /workspace/data/mass-spring-damper/output \\\n--dump-intermediate sg1 cosim.json time.json -i -vi FMI2 \\\noutput-dir>debug.log 2>&1\n

The execute phase uses the DT configuration, FMU models and the Maestro tool to execute the digital twin. The script also stores the co-simulation output in /workspace/data/mass-spring-damper/output.

It is possible for a DT not to support a specific lifecycle phase. This intention can be specified with an empty script that prints a helpful message, if deemed necessary.

lifecycle/analyze
#!/bin/bash\nprintf \"operation is not supported on this digital twin\"\n

The lifecycle programs can call other programs in the code base. In the case of the lifecycle/terminate program, it calls another script to do the necessary job.

lifecycle/terminate
#!/bin/bash\nlifecycle/clean\n
"},{"location":"user/examples/index.html","title":"DTaaS Examples","text":"

There are some example digital twins created for the DTaaS software. Use these examples and follow the steps given in the Examples section to experience features of the DTaaS software platform and understand best practices for managing digital twins within the platform.

"},{"location":"user/examples/index.html#copy-examples","title":"Copy Examples","text":"

The first step is to copy all the example code into your user workspace within the DTaaS. Use the given shell script to copy all the examples into the /workspace/examples directory.

wget https://raw.githubusercontent.com/INTO-CPS-Association/DTaaS-examples/main/getExamples.sh\nbash getExamples.sh\n
"},{"location":"user/examples/index.html#example-list","title":"Example List","text":"

The digital twins provided in the examples vary in their complexity. It is best to use the examples in the following order.

  1. Mass Spring Damper
  2. Water Tank Fault Injection
  3. Water Tank Model Swap
  4. Desktop Robotti and RabbitMQ

DTaaS examples

"},{"location":"user/examples/drobotti-rmqfmu/index.html","title":"Desktop Robotti with RabbitMQ","text":""},{"location":"user/examples/drobotti-rmqfmu/index.html#overview","title":"Overview","text":"

This example demonstrates bidirectional communication between a mock physical twin and a digital twin of a mobile robot (Desktop Robotti). The communication is enabled by RabbitMQ Broker.

"},{"location":"user/examples/drobotti-rmqfmu/index.html#example-structure","title":"Example Structure","text":"

The mock physical twin of the mobile robot is created using two Python scripts:

  1. data/drobotti_rmqfmu/rmq-publisher.py
  2. data/drobotti_rmqfmu/consume.py

The mock physical twin sends its physical location in (x,y) coordinates and expects the Cartesian distance calculated by the digital twin.

The rmq-publisher.py script reads the recorded (x,y) physical coordinates of the mobile robot from a data file. These (x,y) values are published to the RabbitMQ Broker. The published (x,y) values are consumed by the digital twin.

The consume.py script subscribes to the RabbitMQ Broker and waits for the calculated distance value from the digital twin.

The digital twin consists of an FMI-based co-simulation, where Maestro is used as the co-orchestration engine. In this case, the co-simulation is created by using two FMUs: the RMQ FMU (rabbitmq-vhost.fmu) and the distance FMU (distance-from-zero.fmu). The RMQ FMU receives the (x,y) coordinates from rmq-publisher.py and sends the calculated distance value to consume.py. The RMQ FMU uses the RabbitMQ broker for communication with the mock mobile robot, i.e., rmq-publisher.py and consume.py. The distance FMU is responsible for calculating the distance between (0,0) and (x,y). The RMQ FMU and distance FMU exchange values during the co-simulation.

"},{"location":"user/examples/drobotti-rmqfmu/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"

This example uses two models, one tool, one data asset, and two scripts to create the mock physical twin. The specific assets used are:

Asset Type | Names of Assets | Visibility | Reuse in Other Examples
Models | distance-from-zero.fmu | Private | No
Models | rmq-vhost.fmu | Private | Yes
Tool | maestro-2.3.0-jar-with-dependencies.jar | Common | Yes
Data | drobotti_playback_data.csv | Private | No
Mock PT | rmq-publisher.py | Private | No
Mock PT | consume.py | Private | No

This DT has many configuration files. The coe.json and multimodel.json are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.

The RabbitMQ access credentials need to be provided in multimodel.json. The rabbitMQ-credentials.json provides the RabbitMQ access credentials for the mock PT Python scripts. Please add your credentials to both of these files.
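
The exact keys in these two files are defined by the example itself, but as a minimal sketch (the field names and values below are placeholders, not the authoritative format), the credentials typically consist of the broker hostname, port, username and password:

# Sketch only: write placeholder RabbitMQ credentials; keep the keys already present\n# in the rabbitMQ-credentials.json file shipped with the example.\ncat > rabbitMQ-credentials.json <<'EOF'\n{\n  \"hostname\": \"services.foo.com\",\n  \"port\": 5672,\n  \"username\": \"dtaas\",\n  \"password\": \"dtaas\"\n}\nEOF\n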

"},{"location":"user/examples/drobotti-rmqfmu/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool and pip packages for python scripts Execute Runs both DT and mock PT Clean Clears run logs and outputs"},{"location":"user/examples/drobotti-rmqfmu/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/drobotti_rmqfmu\n

If required, change the execute permission of lifecycle scripts you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/drobotti-rmqfmu/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 in the workspace. Also installs the required Python pip packages for the rmq-publisher.py and consume.py scripts.

lifecycle/create\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#execute","title":"Execute","text":"

Runs the Python scripts to start the mock physical twin. Also runs the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes the co-simulation using the two FMU models.

lifecycle/execute\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/digital_twins/drobotti_rmqfmu directory.

"},{"location":"user/examples/drobotti-rmqfmu/index.html#terminate-phase","title":"Terminate phase","text":"

Terminate to clean up the debug files and co-simulation output files.

lifecycle/terminate\n
"},{"location":"user/examples/drobotti-rmqfmu/index.html#references","title":"References","text":"

The RabbitMQ FMU github repository contains complete documentation and source code of the rmq-vhost.fmu.

More information about the case study is available in:

Frasheri, Mirgita, et al. \"Addressing time discrepancy between digital\nand physical twins.\" Robotics and Autonomous Systems 161 (2023): 104347.\n
"},{"location":"user/examples/incubator/index.html","title":"Incubator Demo","text":"

Install the required Python packages for the Incubator demo:

pip install pyhocon\npip install influxdb_client\npip install scipy\npip install pandas\npip install pika\npip install oomodelling\npip install control\npip install filterpy\npip install sympy\npip install docker\n

Start the RabbitMQ server and create a RabbitMQ account with:

name: incubator\npassword: incubator\nwith access to the virtual host \"/\"\n
docker run -d --name rabbitmq-server \\\n--restart always \\\n-p 15672:15672 -p 5672:5672 rabbitmq:3-management\ndocker exec rabbitmq-server rabbitmqctl add_user incubator incubator\ndocker exec rabbitmq-server rabbitmqctl set_permissions -p \"/\" incubator \".*\" \".*\" \".*\"\n
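
To verify that the broker is up and the account has been created, the following optional checks can be run (assuming the container started above is still running):

# List the users known to the RabbitMQ broker; the incubator user should appear.\ndocker exec rabbitmq-server rabbitmqctl list_users\n# Show the permissions granted on the default virtual host \"/\".\ndocker exec rabbitmq-server rabbitmqctl list_permissions -p \"/\"\n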

Access InfluxDB running on another machine. Remember that InfluxDB works only on a distinct sub-domain name like influx.foo.com, but not on foo.com/influx.

ssh -i /vagrant/vagrant -fNT -L 40000:localhost:80 vagrant@influx.server2.com\n

Update the rabbitmq-server and influxdb configuration in

/home/vagrant/dt/1/incubator/example_digital-twin_incubator/software/startup.conf\n

Select (comment / uncomment) the functions in

/home/vagrant/dt/1/incubator/example_digital-twin_incubator/software/startup/start_all_services.py\n

Start the program

export PYTHONPATH=\"${PYTHONPATH}:/home/vagrant/dt/1/incubator/example_digital-twin_incubator/software/incubator\"\ncd /home/vagrant/dt/1/incubator/example_digital-twin_incubator/software\npython3 -m startup.start_all_services\n
"},{"location":"user/examples/mass-spring-damper/index.html","title":"Mass Spring Damper","text":""},{"location":"user/examples/mass-spring-damper/index.html#overview","title":"Overview","text":"

The mass spring damper digital twin (DT) comprises two mass spring dampers and demonstrates how a co-simulation based DT can be used within DTaaS.

"},{"location":"user/examples/mass-spring-damper/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/mass-spring-damper/index.html#example-structure","title":"Example Structure","text":"

There are two simulators included in the study, each representing a mass spring damper system. The first simulator calculates the displacement and speed of a mass for a given force acting on that mass. The second simulator calculates the force, given the displacement and speed of the mass. By coupling these simulators, the evolution of the position of the two masses is computed.

"},{"location":"user/examples/mass-spring-damper/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"

This example uses two models and one tool. The specific assets used are:

Asset Type | Names of Assets | Visibility | Reuse in Other Examples
Models | MassSpringDamper1.fmu | Private | Yes
Models | MassSpringDamper2.fmu | Private | Yes
Tool | maestro-2.3.0-jar-with-dependencies.jar | Common | Yes

The cosim.json and time.json are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.

"},{"location":"user/examples/mass-spring-damper/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/mass-spring-damper/output directory Clean Clears run logs and outputs"},{"location":"user/examples/mass-spring-damper/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/mass-spring-damper\n

If required, change the execute permission of lifecycle scripts you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/mass-spring-damper/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 in the workspace.

lifecycle/create\n
"},{"location":"user/examples/mass-spring-damper/index.html#execute","title":"Execute","text":"

Runs the Digital Twin. Since this is a co-simulation based digital twin, the Maestro co-simulation tool executes the co-simulation using the two FMU models.

lifecycle/execute\n
"},{"location":"user/examples/mass-spring-damper/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/data/mass-spring-damper/output directory.

You can also view run logs in the /workspace/examples/digital_twins/mass-spring-damper.

"},{"location":"user/examples/mass-spring-damper/index.html#terminate-phase","title":"Terminate phase","text":"

Terminate to clean up the debug files and co-simulation output files.

lifecycle/terminate\n
"},{"location":"user/examples/mass-spring-damper/index.html#references","title":"References","text":"

More information about co-simulation techniques and mass spring damper case study are available in:

Gomes, Cl\u00e1udio, et al. \"Co-simulation: State of the art.\"\narXiv preprint arXiv:1702.00686 (2017).\n

The source code for the models used in this DT is available in the mass spring damper github repository.

"},{"location":"user/examples/water_tank_FI/index.html","title":"Water Tank Fault Injection","text":""},{"location":"user/examples/water_tank_FI/index.html#overview","title":"Overview","text":"

This example shows a fault injection (FI) enabled digital twin (DT). A live DT is subjected to simulated faults received from the environment. The simulated faults are specified as part of the DT configuration and can be changed for new instances of DTs.

In this co-simulation based DT, a water tank case study is used. The co-simulation consists of a tank and a controller, whose goal is to keep the water level in the tank between Level-1 and Level-2. The faults are injected into the output of the water tank controller (Watertankcontroller-c.fmu) from 12 to 20 time units, such that the tank output is closed for a period of time, leading to the water level in the tank rising beyond the desired level (Level-2).

"},{"location":"user/examples/water_tank_FI/index.html#example-diagram","title":"Example Diagram","text":""},{"location":"user/examples/water_tank_FI/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_FI/index.html#digital-twin-configuration","title":"Digital Twin Configuration","text":"

This example uses two models and one tool. The specific assets used are:

Asset Type | Names of Assets | Visibility | Reuse in Other Examples
Models | watertankcontroller-c.fmu | Private | Yes
Models | singlewatertank-20sim.fmu | Private | Yes
Tool | maestro-2.3.0-jar-with-dependencies.jar | Common | Yes

The multimodelFI.json and simulation-config.json are two DT configuration files used for executing the digital twin. You can change these two files to customize the DT to your needs.

The faults are defined in wt_fault.xml.

"},{"location":"user/examples/water_tank_FI/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_FI/output directory Clean Clears run logs and outputs"},{"location":"user/examples/water_tank_FI/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/water_tank_FI\n

If required, change the execute permission of lifecycle scripts you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/water_tank_FI/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 and pip dependencies. The pandas and matplotlib pip packages are installed. A sketch of what this script might do is shown after the command below.

lifecycle/create\n
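
This is a minimal sketch only; the actual script in the example may differ, and the package names are assumptions based on the description above.

#!/bin/bash\n# Sketch only: install the Java runtime needed by the Maestro tool and the\n# pip packages (pandas, matplotlib) used by the analyze phase.\nsudo apt-get update\nsudo apt-get install -y openjdk-17-jdk-headless\npip install pandas matplotlib\n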
"},{"location":"user/examples/water_tank_FI/index.html#execute","title":"Execute","text":"

Runs the co-simulation and generates the co-simulation output.csv file at /workspace/examples/data/water_tank_FI/output.

lifecycle/execute\n
"},{"location":"user/examples/water_tank_FI/index.html#analyze-phase","title":"Analyze phase","text":"

Processes the output of the co-simulation to produce a plot at /workspace/examples/data/water_tank_FI/output/plots/.

lifecycle/analyze\n
"},{"location":"user/examples/water_tank_FI/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/data/water_tank_FI/output directory.

You can also view run logs in the /workspace/examples/digital_twins/water_tank_FI.

"},{"location":"user/examples/water_tank_FI/index.html#terminate-phase","title":"Terminate phase","text":"

Cleans up the temporary files and deletes the output plot.

lifecycle/terminate\n
"},{"location":"user/examples/water_tank_FI/index.html#references","title":"References","text":"

More details on this case-study can be found in the paper:

M. Frasheri, C. Thule, H. D. Macedo, K. Lausdahl, P. G. Larsen and\nL. Esterle, \"Fault Injecting Co-simulations for Safety,\"\n2021 5th International Conference on System Reliability and Safety (ICSRS),\nPalermo, Italy, 2021.\n

The fault-injection plugin is an extension to the Maestro co-orchestration engine that enables injecting tampered values into the inputs and outputs of FMUs in an FMI-based co-simulation. More details on the plugin can be found in the fault injection git repository. The source code for this example is also in the same github repository, in an example directory.

"},{"location":"user/examples/water_tank_swap/index.html","title":"Water Tank Model Swap","text":""},{"location":"user/examples/water_tank_swap/index.html#overview","title":"Overview","text":"

This example shows multi-stage execution and dynamic reconfiguration of a digital twin (DT). Two features of DTs are demonstrated here:

  • Fault injection into live DT
  • Dynamic auto-reconfiguration of live DT

The co-simulation methodology is used to construct this DT.

"},{"location":"user/examples/water_tank_swap/index.html#example-structure","title":"Example Structure","text":""},{"location":"user/examples/water_tank_swap/index.html#configuration-of-assets","title":"Configuration of assets","text":"

This example uses four models and one tool. The specific assets used are:

Asset Type | Names of Assets | Visibility | Reuse in Other Examples
Models | Watertankcontroller-c.fmu | Private | Yes
Models | Singlewatertank-20sim.fmu | Private | Yes
Models | Leak_detector.fmu | Private | No
Models | Leak_controller.fmu | Private | No
Tool | maestro-2.3.0-jar-with-dependencies.jar | Common | Yes

This DT has many configuration files. The DT is executed in two stages. There exist separate DT configuration files for each stage. The following table shows the configuration files and their purpose.

Configuration file name | Execution Stage | Purpose
mm1.json | stage-1 | DT configuration
wt_fault.xml, FaultInject.mabl | stage-1 | faults injected into DT during stage-1
mm2.json | stage-2 | DT configuration
simulation-config.json | Both stages | Configuration for specifying DT execution time and output logs
"},{"location":"user/examples/water_tank_swap/index.html#lifecycle-phases","title":"Lifecycle Phases","text":"Lifecycle Phase Completed Tasks Create Installs Java Development Kit for Maestro tool Execute Produces and stores output in data/water_tank_swap/output directory Analyze Process the co-simulation output and produce plots Clean Clears run logs, outputs and plots"},{"location":"user/examples/water_tank_swap/index.html#run-the-example","title":"Run the example","text":"

To run the example, change your present directory.

cd /workspace/examples/digital_twins/water_tank_swap\n

If required, change the permission of files you need to execute, for example:

chmod +x lifecycle/create\n

Now, run the following scripts:

"},{"location":"user/examples/water_tank_swap/index.html#create","title":"Create","text":"

Installs Open Java Development Kit 17 and pip dependencies. The matplotlib pip package is also installed.

lifecycle/create\n
"},{"location":"user/examples/water_tank_swap/index.html#execute","title":"Execute","text":"

This DT has a two-stage execution. In the first stage, a co-simulation is executed using the Watertankcontroller-c.fmu and Singlewatertank-20sim.fmu models. During this stage, faults are injected into one of the models (Watertankcontroller-c.fmu) and the system performance is checked.

In the second stage, another co-simulation is run in which three FMUs are used: watertankcontroller, singlewatertank-20sim, and leak_detector. The Maestro tool has an in-built monitor. This monitor is enabled during this stage and a swap condition is set at the beginning of the second stage. When the swap condition is satisfied, Maestro swaps out the Watertankcontroller-c.fmu model and swaps in the Leakcontroller.fmu model. This swapping of FMU models demonstrates the dynamic reconfiguration of a DT.

The end of the execution phase generates the co-simulation output.csv file at /workspace/examples/data/water_tank_swap/output.

lifecycle/execute\n
"},{"location":"user/examples/water_tank_swap/index.html#analyze-phase","title":"Analyze phase","text":"

Processes the output of the co-simulation to produce a plot at /workspace/examples/data/water_tank_swap/output/plots/.

lifecycle/analyze\n
"},{"location":"user/examples/water_tank_swap/index.html#examine-the-results","title":"Examine the results","text":"

The results can be found in the /workspace/examples/data/water_tank_swap/output directory.

You can also view the run logs in /workspace/examples/digital_twins/water_tank_swap.

"},{"location":"user/examples/water_tank_swap/index.html#terminate-phase","title":"Terminate phase","text":"

Cleans up the temporary files and deletes the output plot.

lifecycle/terminate\n
"},{"location":"user/examples/water_tank_swap/index.html#references","title":"References","text":"

The complete source of this example is available in the model swap github repository.

The runtime model (FMU) swap mechanism demonstrated by the experiment is detailed in the paper:

Ejersbo, Henrik, et al. \"fmiSwap: Run-time Swapping of Models for\nCo-simulation and Digital Twins.\" arXiv preprint arXiv:2304.07328 (2023).\n

The runtime reconfiguration of co-simulation by modifying the Functional Mockup Units (FMUs) used is further detailed in the paper:

Ejersbo, Henrik, et al. \"Dynamic Runtime Integration of\nNew Models in Digital Twins.\" 2023 IEEE/ACM 18th Symposium on\nSoftware Engineering for Adaptive and Self-Managing Systems\n(SEAMS). IEEE, 2023.\n
"},{"location":"user/servers/lib/LIB-MS.html","title":"Library Microservice","text":"

The library microservice provides an API interface to the reusable assets library. This is only for expert users who need to integrate the DTaaS with their own IT systems. Regular users can safely skip this page.

The lib microservice is responsible for handling and serving the contents of library assets of the DTaaS platform. It provides API endpoints for clients to query and fetch these assets.

This document provides instructions for using the library microservice.

Please see assets for suggested storage conventions for your library assets.

Once the assets are stored in the library, you can access the server's endpoint by typing in the following URL: http://foo.com/lib.

The URL opens a GraphQL playground. You can check the query schema and try sample queries here. You can also send GraphQL queries as HTTP POST requests and get responses.
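
For instance, a query can be sent from the command line with curl. This is a sketch that lists only the sub-directories of user1; replace foo.com and user1 with your own hostname and username.

# Send a GraphQL query as an HTTP POST request to the library microservice.\ncurl -X POST https://foo.com/lib \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"query\":\"query { listDirectory(path: \\\"user1\\\") { repository { tree { trees { edges { node { name type } } } } } } }\"}'\n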

"},{"location":"user/servers/lib/LIB-MS.html#api-queries","title":"API Queries","text":"

The library microservice serves two API calls:

  • Provide a list of contents for a directory
  • Fetch a file from the available files

The API calls are accepted over GraphQL and HTTP API endpoints. The formats of the accepted queries are:

"},{"location":"user/servers/lib/LIB-MS.html#provide-list-of-contents-for-a-directory","title":"Provide list of contents for a directory","text":"

To retrieve a list of files in a directory, use the following GraphQL query.

Replace path with the desired directory path.

send requests to: https://foo.com/lib

GraphQL Query | GraphQL Response | HTTP Request | HTTP Response
query {\n  listDirectory(path: \"user1\") {\n    repository {\n      tree {\n        blobs {\n          edges {\n            node {\n              name\n              type\n            }\n          }\n        }\n        trees {\n          edges {\n            node {\n              name\n              type\n            }\n          }\n        }\n      }\n    }\n  }\n}\n
{\n  \"data\": {\n    \"listDirectory\": {\n      \"repository\": {\n        \"tree\": {\n          \"blobs\": {\n            \"edges\": []\n          },\n          \"trees\": {\n            \"edges\": [\n              {\n                \"node\": {\n                  \"name\": \"common\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"data\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"digital twins\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"functions\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"models\",\n                  \"type\": \"tree\"\n                }\n              },\n              {\n                \"node\": {\n                  \"name\": \"tools\",\n                  \"type\": \"tree\"\n                }\n              }\n            ]\n          }\n        }\n      }\n    }\n  }\n}\n
POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 388\n\n{\n  \"query\":\"query {\\n  listDirectory(path: \\\"user1\\\") {\\n    repository {\\n      tree {\\n        blobs {\\n          edges {\\n            node {\\n              name\\n              type\\n            }\\n          }\\n        }\\n        trees {\\n          edges {\\n            node {\\n              name\\n              type\\n            }\\n          }\\n        }\\n      }\\n    }\\n  }\\n}\"\n}\n
HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 306\nContent-Type: application/json; charset=utf-8\nDate: Tue, 26 Sep 2023 20:26:49 GMT\nX-Powered-By: Express\n{\"data\":{\"listDirectory\":{\"repository\":{\"tree\":{\"blobs\":{\"edges\":[]},\"trees\":{\"edges\":[{\"node\":{\"name\":\"data\",\"type\":\"tree\"}},{\"node\":{\"name\":\"digital twins\",\"type\":\"tree\"}},{\"node\":{\"name\":\"functions\",\"type\":\"tree\"}},{\"node\":{\"name\":\"models\",\"type\":\"tree\"}},{\"node\":{\"name\":\"tools\",\"type\":\"tree\"}}]}}}}}}\n
"},{"location":"user/servers/lib/LIB-MS.html#fetch-a-file-from-the-available-files","title":"Fetch a file from the available files","text":"

This query receives a file path and sends the file contents to the user in response.

To check this query, create a file files/user2/data/welcome.txt with the content hello world.

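For example, the file can be created from a terminal with the following commands (a sketch; it assumes the library files are stored under a files/ directory on the machine hosting the library microservice):

# Create the sample file that the readFile query below will fetch.\nmkdir -p files/user2/data\nprintf 'hello world' > files/user2/data/welcome.txt\n
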
GraphQL Request | GraphQL Response | HTTP Request | HTTP Response
query {\n  readFile(path: \"user2/data/welcome.txt\") {\n    repository {\n      blobs {\n        nodes {\n          name\n          rawBlob\n          rawTextBlob\n        }\n      }\n    }\n  }\n}\n
{\n  \"data\": {\n    \"readFile\": {\n      \"repository\": {\n        \"blobs\": {\n          \"nodes\": [\n            {\n              \"name\": \"welcome.txt\",\n              \"rawBlob\": \"hello world\",\n              \"rawTextBlob\": \"hello world\"\n            }\n          ]\n        }\n      }\n    }\n  }\n}\n
POST /lib HTTP/1.1\nHost: foo.com\nContent-Type: application/json\nContent-Length: 217\n{\n  \"query\":\"query {\\n  readFile(path: \\\"user2/data/welcome.txt\\\") {\\n    repository {\\n      blobs {\\n        nodes {\\n          name\\n          rawBlob\\n          rawTextBlob\\n        }\\n      }\\n    }\\n  }\\n}\"\n}\n
HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nConnection: close\nContent-Length: 134\nContent-Type: application/json; charset=utf-8\nDate: Wed, 27 Sep 2023 09:17:18 GMT\nX-Powered-By: Express\n{\"data\":{\"readFile\":{\"repository\":{\"blobs\":{\"nodes\":[{\"name\":\"welcome.txt\",\"rawBlob\":\"hello world\",\"rawTextBlob\":\"hello world\"}]}}}}}\n

The path refers to the file path to look at. For example, user1 looks at the files of user1; user1/functions looks at the contents of the functions/ directory.
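
The same path convention applies when fetching files over HTTP. As a sketch (adjust the host and the path to your own setup):

# Fetch the contents of user2/data/welcome.txt through the readFile query.\ncurl -X POST https://foo.com/lib \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"query\":\"query { readFile(path: \\\"user2/data/welcome.txt\\\") { repository { blobs { nodes { name rawTextBlob } } } } }\"}'\n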

"},{"location":"user/servers/lib/assets.html","title":"Reusable Assets","text":"

The reusability of digital twin assets makes it easy for users to work with the digital twins. The reusability of assets is a fundamental feature of the platform.

"},{"location":"user/servers/lib/assets.html#kinds-of-reusable-assets","title":"Kinds of Reusable Assets","text":"

The DTaaS software categorizes all the reusable library assets into five categories:

"},{"location":"user/servers/lib/assets.html#functions","title":"Functions","text":"

The functions responsible for pre- and post-processing of data inputs, data outputs and control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with the right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.

"},{"location":"user/servers/lib/assets.html#data","title":"Data","text":"

The data sources and sinks available to the digital twins. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events and commands which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.

"},{"location":"user/servers/lib/assets.html#models","title":"Models","text":"

The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.

"},{"location":"user/servers/lib/assets.html#tools","title":"Tools","text":"

The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platform, i.e., an operating system, a virtual machine like the Java virtual machine, or inside docker containers. The tools tend to be platform specific, making them less reusable than models. A tool can be packaged to run in local or distributed virtual machine environments, thus allowing selection of the most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages is a pre-packaged combination of models and tools put together to create a ready-to-use Digital Twin.

"},{"location":"user/servers/lib/assets.html#digital-twins","title":"Digital Twins","text":"

These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.

"},{"location":"user/servers/lib/assets.html#file-system-structure","title":"File System Structure","text":"

Each user has their assets organized into the five directories named above. In addition, there are also common library assets that all users have access to. A simplified example of the structure is as follows:

workspace/\n  data/\n    data1/ (ex: sensor)\n      filename (ex: sensor.csv)\n      README.md\n    data2/ (ex: turbine)\n      README.md (remote source; no local file)\n    ...\n  digital_twins/\n    digital_twin-1/ (ex: incubator)\n      code and config\n      README.md (usage instructions)\n    digital_twin-2/ (ex: mass spring damper)\n      code and config\n      README.md (usage instructions)\n    digital_twin-3/ (ex: model swap)\n      code and config\n      README.md (usage instructions)\n    ...\n  functions/\n    function1/ (ex: graphs)\n      filename (ex: graphs.py)\n      README.md\n    function2/ (ex: statistics)\n      filename (ex: statistics.py)\n      README.md\n    ...\n  models/\n    model1/ (ex: spring)\n      filename (ex: spring.fmu)\n      README.md\n    model2/ (ex: building)\n      filename (ex: building.skp)\n      README.md\n    model3/ (ex: rabbitmq)\n      filename (ex: rabbitmq.fmu)\n      README.md\n    ...\n  tools/\n    tool1/ (ex: maestro)\n      filename (ex: maestro.jar)\n      README.md\n    ...\n  common/\n    data/\n    functions/\n    models/\n    tools/\n

Tip

The DTaaS is agnostic to the format of your assets. The only requirement is that they are files which can be uploaded on the Library page. Any directories can be compressed as one file and uploaded. You can decompress the file into a directory from a Terminal or xfce Desktop available on the Workbench page.
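
For example, a directory can be compressed into a single file before upload and decompressed again from a Terminal in the workspace. This is a sketch assuming the zip and unzip utilities; any archive format available in the workspace works equally well.

# On your computer: compress the directory into a single file before uploading it on the Library page.\nzip -r my-digital-twin.zip my-digital-twin/\n# In the DTaaS workspace Terminal: decompress the uploaded file back into a directory.\nunzip my-digital-twin.zip -d /workspace/digital_twins/\n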

A recommended file system structure for storing assets is also available in DTaaS examples.

"},{"location":"user/servers/lib/assets.html#upload-assets","title":"Upload Assets","text":"

Users can upload assets into their workspace using the Library page of the website.

You can go into a directory and click on the upload button to upload a file or a directory into your workspace. This asset is then available in all the workbench tools you can use. You can also create new assets on the page by clicking on the new drop-down menu. This is a simple web interface which allows you to create text-based files. You need to upload other files using the upload button.

The user workbench has the following services:

  • Jupyter Notebook and Lab
  • VS Code
  • XFCE Desktop Environment available via VNC
  • Terminal

Users can also bring their DT assets into user workspaces from outside using any of the above-mentioned services. Developers using git repositories can clone from and push to remote git servers. Users can also use widely used file transfer protocols such as FTP and SCP to bring the required DT assets into their workspaces.
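
For instance, assets on a remote machine can be pulled into the workspace over SCP. This is a sketch; the hostname, username and paths are placeholders for your own setup.

# Run from a Terminal inside your DTaaS workspace: copy model files from a remote machine.\nscp -r yourname@your-server.example.com:/path/to/assets/ /workspace/models/\n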

"},{"location":"user/website/index.html","title":"DTaaS Website Screenshots","text":"

This page contains a screenshot driven preview of the website serving the DTaaS software platform.

"},{"location":"user/website/index.html#login-to-enter-the-dtaas-software-platform","title":"Login to enter the DTaaS software platform","text":"

The screen presents an HTTP authentication form. You can enter your user credentials. If the DTaaS is served over the HTTPS secure communication protocol, the username and password are secure.

"},{"location":"user/website/index.html#start-the-authentication","title":"Start the Authentication","text":"

You are now logged into the DTaaS server. The DTaaS uses a third-party authentication protocol known as OAuth. This protocol provides secure access to a DTaaS installation if users have working, active accounts at the selected OAuth service provider. The DTaaS uses Gitlab as the OAuth provider.

You can see the Gitlab sign-in button. A click on this button takes you to the Gitlab instance providing authentication for DTaaS.

"},{"location":"user/website/index.html#authenticate-at-gitlab","title":"Authenticate at Gitlab","text":"

The username and password authentication takes place on the gitlab website. Enter your username and password in the login form.

"},{"location":"user/website/index.html#permit-dtaas-to-use-gitlab","title":"Permit DTaaS to Use Gitlab","text":"

The DTaaS application needs your permission to use your Gitlab account for authentication. Click on the Authorize button.

After successful authentication, you will be redirected to the Library page of the DTaaS website.

"},{"location":"user/website/index.html#overview-of-menu-items","title":"Overview of menu items","text":"

The menu is hidden by default. Only the icons of menu items are visible. You can click on the icon in the top-left corner of the page to see the menu.

There are three menu items:

Library: for management of reusable library assets. You can upload, download, create and modify new files on this page.

Digital Twins: for management of digital twins. You are presented with the Jupyter Lab page from which you can run the digital twins.

Workbench: Not all digital twins can be managed within Jupyter Lab. You have more tools at your disposal on this page.

"},{"location":"user/website/index.html#library-tabs-and-their-help-text","title":"Library tabs and their help text","text":"

You can see the file manager and five tabs above the library manager. Each tab provides help text to guide users in the use of different directories in their workspace.

Functions

The functions responsible for pre- and post-processing of data inputs, data outputs and control outputs. The data science libraries and functions can be used to create useful function assets for the platform. In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with the right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.

Data

The data sources and sinks available to the digital twins. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs such as events and commands which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.

Models

The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have a behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have a functional model(s) describing the part manufacturing capabilities of the robot.

Tools

The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of a computing platform, i.e., an operating system, a virtual machine like the Java virtual machine, or inside docker containers. The tools tend to be platform specific, making them less reusable than models. A tool can be packaged to run in local or distributed virtual machine environments, thus allowing selection of the most suitable execution environment for a Digital Twin. Most models require tools to evaluate them in the context of data inputs. There exist cases where executable packages are run as binaries in a computing environment. Each of these packages is a pre-packaged combination of models and tools put together to create a ready-to-use Digital Twin.

Digital

These are ready to use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.

In addition to the five directories, there is also a common directory in which five sub-directories exist. These sub-directories are: data, functions, models, tools and digital twins.

Common

The common directory again has five sub-directories: data, functions, models, tools and digital twins. The assets common to all users are placed in common.

The items used by more than one user are placed in common. The items in the common directory are available to all users. Further explanation of the directory structure and the placement of reusable assets within the directory structure is on the assets page.

The file manager is based on Jupyter notebook and all the tasks you can perform in the Jupyter Notebook can be undertaken here.

"},{"location":"user/website/index.html#digital-twins-page","title":"Digital Twins page","text":"

The digital twins page has three tabs and the central pane opens Jupyter Lab. The tabs provide helpful instructions on the suggested tasks you can undertake in the Create - Execute - Analyze lifecycle phases of a digital twin. You can see more explanation on the lifecycle phases of a digital twin.

Create

Create digital twins from the tools provided within user workspaces. Each digital twin will have one directory. It is suggested that users provide one bash shell script to run their digital twin. Users can create the required scripts and other files using the tools provided on the Workbench page.

Execute

Digital twins are executed from within user workspaces. The given bash script gets executed from the digital twin directory. Terminal-based digital twins can be executed from VS Code and graphical digital twins can be executed from the VNC GUI. The results of execution can be placed in the data directory.

Analyze

The analysis of digital twins requires running the digital twin analysis scripts from the user workspace. The execution results placed within the data directory are processed by the analysis scripts and the results are placed back in the data directory. These scripts can be executed either from VS Code or from the VNC GUI.

The reusable assets (files) seen in the file manager are available in the Jupyter Lab. In addition, there is a git plugin installed in the Jupyter Lab with which you can link your files with external git repositories.

"},{"location":"user/website/index.html#workbench","title":"Workbench","text":"

The workbench page provides links to four integrated tools.

The hyperlinks open in a new browser tab. The screenshots of the pages opened in new browser tabs are:

Bug

The Terminal hyperlink does not always work reliably. If you want a terminal, please use the tools dropdown in the Jupyter Notebook.

"},{"location":"user/website/index.html#finally-logout","title":"Finally logout","text":"

You have to close the browser in order to completely exit the DTaaS software platform.

"}]} \ No newline at end of file diff --git a/development/sitemap.xml.gz b/development/sitemap.xml.gz index f2a421329..c23602f24 100644 Binary files a/development/sitemap.xml.gz and b/development/sitemap.xml.gz differ diff --git a/development/user/examples/incubator/index.html b/development/user/examples/incubator/index.html index 6fa9dbfd0..b47af24d3 100644 --- a/development/user/examples/incubator/index.html +++ b/development/user/examples/incubator/index.html @@ -1368,9 +1368,13 @@

Incubator Demo

1
 2
-3
docker run -d --name rabbitmq-server -p 15672:15672 -p 5672:5672 rabbitmq:3-management
-docker exec rabbitmq-server rabbitmqctl add_user incubator incubator
-docker exec rabbitmq-server rabbitmqctl set_permissions -p "/" incubator ".*" ".*" ".*"
+3
+4
+5
docker run -d --name rabbitmq-server \
+  --restart always \
+  -p 15672:15672 -p 5672:5672 rabbitmq:3-management
+docker exec rabbitmq-server rabbitmqctl add_user incubator incubator
+docker exec rabbitmq-server rabbitmqctl set_permissions -p "/" incubator ".*" ".*" ".*"
 

Access InfluxDB running on another machine. Remember that InfluxDB works only on a distinct sub-domain