diff --git a/docs/installation.rst b/docs/installation.rst
index 7924672e..9e7a24ba 100644
--- a/docs/installation.rst
+++ b/docs/installation.rst
@@ -130,7 +130,7 @@
 between you and the director, but also to ensure that only authorized parties
 such as yourself and the various components of ConPaaS can interact with the
 system.
-It is therefore crucial that the SSL certificate of your director contains the
+It is, therefore, crucial that the SSL certificate of your director contains the
 proper information. In particular, the `commonName` field of the certificate
 should carry the **public hostname of your director**, and it should match
 the *hostname* part of :envvar:`DIRECTOR_URL` in
@@ -255,9 +255,9 @@ Installing and configuring cps-tools
 The command line ``cps-tools`` is a command line client to interact
 with ConPaaS. It has essentially a modular internal architecture that is easier
-to extend. It has also "object-oriented" arguments where "ConPaaS" objects
-are services, users, clouds and applications. The argument consists in
-stating the "object" first and then calling a sub-command on it. It also
+to extend. It also has *object-oriented* arguments where ConPaaS objects
+are services, users, clouds, and applications. The arguments consist of
+stating the object first and then calling a sub-command on it. It also
 replaces the command line tool ``cpsadduser.py``.

 ``cps-tools`` requires:
@@ -341,7 +341,7 @@ Activate ``virtualenv``::

   Python 2.7.2
   (ve)$

-Install python argparse and argcomplete modules::
+Install the Python ``argparse`` and ``argcomplete`` modules::

   (ve)$ pip install argparse
   (ve)$ pip install argcomplete
@@ -433,7 +433,7 @@ to EC2 is `Getting Started with Amazon EC2 Linux Instances
 Pre-built Amazon Machine Images
 -------------------------------
 ConPaaS requires the usage of an Amazon Machine Image (AMI) to contain the
-dependencies of its processes. For your convenience we provide a pre-built
+dependencies of its processes. For your convenience, we provide a pre-built
 public AMI, already configured and ready to be used on Amazon EC2, for each
 availability zone supported by ConPaaS. The AMI IDs of said images are:
@@ -485,15 +485,15 @@ S3-backed AMIs are usually more cost-efficient, but if you plan to use
 *t1.micro (free tier) your VM image should be hosted on EBS.

 For an EBS-backed AMI, you should either create your ``conpaas.img`` on an Amazon
-EC2 instance, or transfer the image to one. Once ``conpaas.img`` is there, you
+EC2 instance or transfer the image to one. Once ``conpaas.img`` is there, you
 should execute ``register-image-ec2-ebs.sh`` as root on the EC2 instance to
 register your AMI. The script requires your **EC2_ACCESS_KEY** and
 **EC2_SECRET_KEY** to proceed. At the end, the script will output your new AMI
 ID. You can check this in your Amazon dashboard in the AMI section.

-For a S3-backed AMI, you do not need to register your image from an EC2
+For an S3-backed AMI, you do not need to register your image from an EC2
 instance. Simply run ``register-image-ec2-s3.sh`` where you have created your
-``conpaas.img``. Note that you need an EC2 certificate with private key to be
+``conpaas.img``. Note that you need an EC2 certificate with a private key to be
 able to do so. Registering an S3-backed AMI requires administrator privileges.
 More information on Amazon credentials can be found at `About AWS Security
 Credentials `_.
@@ -508,7 +508,7 @@ inbound traffic.
 Therefore, one needs to specify a whitelist of protocols and destination ports
 that are accessible from the outside. The following ports should be open for
 all running instances:

-- TCP ports 443 and 5555 used by the ConPaaS system (director, managers
+- TCP ports 443 and 5555 – used by the ConPaaS system (director, managers,
   and agents)

 - TCP ports 80, 8000, 8080 and 9000 – used by the Web Hosting service
@@ -549,7 +549,7 @@ set the environment variables that authenticate the user is to source the
 Registering your ConPaaS image to OpenStack
 --------------------------------------------
 The prebuilt ConPaaS images suitable to be used with OpenStack can be downloaded
-from the following links, depending on the virtualization tehnology and
+from the following links, depending on the virtualization technology and
 system architecture you are using:

 **ConPaaS VM image for OpenStack with KVM (x86_64):**
@@ -613,7 +613,7 @@ section of the nova configuration file (``/etc/nova/nova.conf``)::
 Security Group
 --------------
 As in the case of Amazon Web Services deployments, OpenStack deployments use
-security groups to limit the the network connections allowed to an instance.
+security groups to limit the network connections allowed to an instance.
 The list of ports that should be opened for every instance is the same as in
 the case of Amazon Web Services and can be consulted here:
 :ref:`security-group-ec2`.
@@ -626,7 +626,7 @@ Using the command line, the security groups can be listed using::

   $ nova secgroup-list

 You can use the ``default`` security group that is automatically created in every
-project. However note that, unless the its default settings are changed, this
+project. However, note that unless its default settings are changed, this
 security group denies all incoming traffic.

 For more details on creating and editing a security group, please refer to the
@@ -655,7 +655,7 @@ ConPaaS needs to know which instance type it can use, called *flavor* in OpenSta
 terminology. There are quite a few flavors configured by default, which can also
 be customized if needed.

-The list of available flavors can obtained in Horizon by navigating to the
+The list of available flavors can be obtained in Horizon by navigating to the
 *Admin* > *System* > *Flavors* menu.

 Using the command line, the same result can be obtained using::
@@ -688,7 +688,7 @@ for VirtualBox. This can be done from the following link:

 .. warning::
   It is always a good idea to check the integrity of a downloaded image before continuing
-  with the next step, as a corrupted image can lead to unexpected behaviour. You can do
+  with the next step, as a corrupted image can lead to unexpected behavior. You can do
   this by comparing its MD5 hash with the one shown above. To obtain the MD5 hash,
   you can use the ``md5sum`` command.
@@ -717,7 +717,7 @@ The recommended system requirements for optimal performance::

 .. warning::
   It is highly advised to run the Nutshell on a system that meets the recommended
-  system requirements, or else the its performance may be severely impacted. For
+  system requirements, or else its performance may be severely impacted. For
   systems that do not meet the recommended requirements (but still meet the minimum
   requirements), a very careful split of the resources between the VM and the host
   system needs to be performed.
@@ -773,11 +773,11 @@ The recommended system requirements for optimal performance::
    then following the menu: *Settings* > *System* > *Motherboard* / *Processor*.
    We recommend allocating at least 4 GB of RAM for the Nutshell to function properly.
    Make sure that enough memory remains for the host system to operate properly and
-   never allocate more CPUs than what is available in your host computer.
+   never allocate more CPUs than are available on your host computer.

 #. It is also a very good idea to create a snapshot of the initial state of the
    Nutshell VM, immediately after it was imported. This allows the possibility to
-   quickly revert to the initial state without importing the VM again, when something
+   quickly revert to the initial state without importing the VM again when something
    goes wrong.

 For more information regarding the usage of the Nutshell please consult the
@@ -817,7 +817,7 @@ The two images can be downloaded from the following links:

 .. warning::
   It is always a good idea to check the integrity of a downloaded image before continuing
-  with the next steps, as a corrupted image can lead to unexpected behaviour. You can do
+  with the next steps, as a corrupted image can lead to unexpected behavior. You can do
   this by comparing its MD5 hash with the ones shown above. To obtain the MD5 hash,
   you can use the ``md5sum`` command.

diff --git a/docs/internals.rst b/docs/internals.rst
index 65404664..a1ae4f20 100644
--- a/docs/internals.rst
+++ b/docs/internals.rst
@@ -9,10 +9,10 @@
 A ConPaaS application represents a collection of ConPaaS services working
 together. The application manager is a process that resides in the first VM
 that is created when the application is started and is in charge of managing
 the entire application. The application manager represents the
-single control point over the entire application.
+single control point for the entire application.

 A ConPaaS service consists of three main entities: the service manager,
-the service agent and the web frontend. The service manager is a component
+the service agent, and the web frontend. The service manager is a component
 that supplements the application manager with service-specific functionality.
 Its role is to manage the service by providing supporting agents, maintaining
 a stable configuration at any time and by permanently monitoring the
@@ -23,10 +23,10 @@
 To implement a new ConPaaS service, you must provide a new service manager,
 a new service agent and a new service frontend (we assume that each ConPaaS
 service can be mapped on the three entities architecture).
 To ease the process of adding a new ConPaaS service, we propose a
-framework which implements common functionality of the ConPaaS services.
-So far, the framework provides abstraction for the IaaS layer (adding
+framework which implements the common functionality of the ConPaaS services.
+So far, the framework provides abstractions for the IaaS layer (adding
 support for a new cloud provider should not require modifications in any
-ConPaaS service implementation) and it also provides abstraction for the
+ConPaaS service implementation) and it also provides abstractions for the
 HTTP communication (we assume that HTTP is the preferred protocol for the
 communication between the three entities).
@@ -78,13 +78,13 @@ A new service should be added in a new python module under the
     │── sbin
     │── scripts

-In the next paragraphs we describe how to add the new ConPaaS service.
+In the next paragraphs, we describe how to add a new ConPaaS service.
Implementing a new ConPaaS service
==================================

-In this section we describe how to implement a new ConPaaS service by
+In this section, we describe how to implement a new ConPaaS service by
 providing an example which can be used as a starting point. The new
 service is called *helloworld* and will just generate helloworld
 strings. Thus, the manager will provide a method, called get\_helloworld
@@ -105,7 +105,7 @@ filled in behind the scenes. This dictionary is used by the built-in
 server in the conpaas.core package to dispatch the HTTP requests.
 The module conpaas.core.http contains some useful methods, like
 HttpJsonResponse and HttpErrorResponse that are used to respond to the
-HTTP request dispatched to the corresponding method. In this class we
+HTTP request dispatched to the corresponding method. In this class, we
 also implemented a method called startup, which only prints a line of
 text in the agent's log file. This method could be used, for example,
 to make some initializations in the agent.
@@ -153,7 +153,7 @@ Next, we will implement the service manager in the same manner: we
 will write the *HelloWorldManager* class and place it in the file
 *conpaas/services/helloworld/manager/manager.py*. (See
 |lst:helloworldmanagermanager|) A service manager supplements the
-application manager with service specific functionality. It does so by
+application manager with service-specific functionality. It does so by
 overriding the methods inherited from the base manager class. These
 methods will be called by the application manager when the corresponding
 event occurs. For example, *on\_start* is called immediately after the
@@ -182,8 +182,8 @@ agent services, called *web* which is used by both webservices.
 Integrating the new service with the frontend
 =============================================

-So far there is no easy way to add a new frontend service. Each service
-may require distinct graphical elements. In this section we explain how
+So far, there is no easy way to add a new frontend service. Each service
+may require distinct graphical elements. In this section, we explain how
 to create the web frontend page for a service.

 Manager states
@@ -311,8 +311,8 @@ might not always reset your system to its original state. To undo
 everything the script has done, follow these instructions:

 #. The image has been mounted as a separate file system. Find the
-   mounted directory using command ``df -h``. The directory should be in
-   the form of ``/tmp/tmp.X``.
+   mounted directory using the ``df -h`` command. The directory should be
+   in the form of ``/tmp/tmp.X``.

 #. There may be a ``dev`` and a ``proc`` directories mounted inside it.
    Unmount everything using::
@@ -364,16 +364,16 @@ configuration file, *nutshell* and *container* which control the kind of image
 that is going to be generated. Since these two flags can take either value
 *true* of *false*, we distinguish four cases:

-#. *nutshell = false*, *container = false*: In this case a standard ConPaaS VM
+#. *nutshell = false*, *container = false*: In this case, a standard ConPaaS VM
    image is generated and the nutshell configurations are not taken into
    consideration. This is the default configuration which should be used when
    ConPaaS is deployed on a standard cloud.

-#. *nutshell = false*, *container = true*: In this case the user indicates that
+#. *nutshell = false*, *container = true*: In this case, the user indicates that
    the image that will be generated will be a LXC container image.
    This image is similar to a standard VM one, but it does not contain a kernel
    installation.

-#. *nutshell = true*, *container = false*. In this case a Nutshell image is
+#. *nutshell = true*, *container = false*: In this case, a Nutshell image is
    generated and a standard ConPaaS VM image will be embedded in it. This
    configuration should be used for deploying ConPaaS in nested standard VMs
    within a single VM.
@@ -428,7 +428,7 @@ Preinstalling an application into a ConPaaS Services Image
 A ConPaaS Services Image contains all the necessary components needed in order
 to run the ConPaaS services. For deploying arbitrary applications using ConPaaS,
-the :ref:`the-generic-service` provides a mechanism to install and run the application,
+:ref:`the-generic-service` provides a mechanism to install and run the application,
 along with its dependencies. The installation, however, has to happen during the
 initialization of every new node that is started, for example in the ``init.sh``
 script of the Generic Service. If installing the application with its dependencies
@@ -462,7 +462,7 @@ ConPaaS Services Image. The current section describes this process.
 .. warning::
   If you choose to use one of the images above, it is always a good idea to check
   its integrity before continuing to the next step. A corrupt image may result in
-  unexpected behaviour which may be hard to trace. You can check the integrity by
+  unexpected behavior which may be hard to trace. You can check the integrity by
   verifying the MD5 hash with the ``md5sum`` command.

 Alternatively, you can also create one such image using the instructions provided
@@ -474,10 +474,10 @@ ConPaaS Services Image. The current section describes this process.
 .. warning::
   The following steps need to be performed on a machine with the same architecture
-  and a similar operating system. For the regular images, this means the 64 bit
+  and a similar operating system. For the regular images, this means the 64-bit
   version of a Debian or Ubuntu system. For the Raspberry PI image, the steps need
   to be performed on the Raspberry PI itself (with a Raspbian installation, arm
-  architecture). Trying to customize the Raspberry PI image on a x86 system will not
+  architecture). Trying to customize the Raspberry PI image on an x86 system will not
   work!

 #. Log in as **root** and change to the directory where you downloaded the image.
@@ -512,7 +512,7 @@ ConPaaS Services Image. The current section describes this process.
    correct device in the following commands.

 #. If you increased the size of the image in step 3, you now need to also expand the
-   file system. First, check the integrity of the file system with the following
+   filesystem. First, check the integrity of the filesystem with the following
    command::

      root@raspberrypi:/home/pi# e2fsck -f /dev/loop0
@@ -560,7 +560,7 @@ ConPaaS Services Image. The current section describes this process.

      root@raspberrypi:/# echo "nameserver 8.8.8.8" > /etc/resolv.conf

-   This example uses the Google Public DNS, you may however use any DNS server you
+   This example uses the Google Public DNS; you may, however, use any DNS server you
    prefer.

    Check that the Internet works in this new environment::

diff --git a/docs/manifest.rst b/docs/manifest.rst
index 16067a3a..f81cb2ae 100644
--- a/docs/manifest.rst
+++ b/docs/manifest.rst
@@ -3,7 +3,7 @@ Manifest Guide
 ==============

 A manifest is a JSON file that describes a ConPaaS application. It can be
-written with your favourite text editor.
+written with your favorite text editor.
 ---------------------------------------
 Creating an application from a manifest

@@ -31,7 +31,7 @@ sudoku PHP program). File ``sudoku.mnf``::
 This simple example states the application name and the service list which is
 here a single PHP service. It gives the name of the service, its type, whether
 it should be automatically started (1 for autostart, 0 otherwise), and it gives
-the path to the PHP program that will be uploaded into the created PHP service.
+the path to the PHP program that will be uploaded to the newly created PHP service.

 To create an application from a manifest, you can use either the web client or
 the command line client.
@@ -46,7 +46,8 @@ the command line client.
 In this example, once the application has been created, you will have to start
 the PHP service either with the web client (button start on the PHP service
-page) or with command line client (``cps-service start ``).
+page) or with the command line client
+(``cps-service start ``).

 MediaWiki example
@@ -87,8 +88,8 @@ MediaWiki application as the one provided by the ConPaaS system::
     }

 Even if the application is more complicated than the sudoku, the manifest
-file is not very different. In this case the file specifies three different
-services: PHP, MySQL and XtreemFS.
+file is not very different. In this case, the file specifies three different
+services: PHP, MySQL, and XtreemFS.

 -------------------------------------------
@@ -129,7 +130,7 @@ The following fields are optional and are available in all the services.
   the startup of the agents

 It is not required to define how many instances the service needs. By
-default if the user starts a service, one instance will be created. If the
+default, if the user starts a service, one instance will be created. If the
 user wants to create more instances, then the user can use this option in
 the manifest.

 - *StartupInstances*: Specify how many instances of each type needs to
@@ -159,19 +160,19 @@ service.
 php
 ---

-- *Archive*: Specify an URL where the service should fetch the source
+- *Archive*: Specify a URL where the service should fetch the source
   archive.

 java
 ----

-- *Archive*: Specify an URL where the service should fetch the source
+- *Archive*: Specify a URL where the service should fetch the source
   archive.

 mysql
 -----

-- *Dump*: Specify an URL where the service should fetch the dump
+- *Dump*: Specify a URL where the service should fetch the dump.

 xtreemfs
 --------
@@ -192,7 +193,7 @@ file (see the full example in the end) are the following:
 - *Application*: Specify the application name on which your services
   will start. It can be a new application or an existing one. If it is
-  omitted, a default application name will be choosen.
+  omitted, a default application name will be chosen.

 Full specification file
 =======================
diff --git a/docs/userguide.rst b/docs/userguide.rst
index 19f55f9b..2a11a1ec 100644
--- a/docs/userguide.rst
+++ b/docs/userguide.rst
@@ -61,7 +61,7 @@
 Add a service.
   Click on “add new service”, then select the service you want
   to add. This operation adds extra functionalities to the application
   manager which are specific to a certain service. These functionalities
-  enable the application manager to be charge of taking care of the
+  enable the application manager to be in charge of the
   service, but it does not host applications itself. Other instances
   in charge of running the actual application are called “agent” instances.
@@ -71,7 +71,7 @@ Start a service.
   depending on the type of service.

 Rename the service.
-  By default all new services are named “New service”. To give a
+  By default, all new services are named “New service”. To give a
   meaningful name to a service, click on this name in the
   service-specific page and enter a new name.

@@ -82,7 +82,7 @@ Check the list of virtual instances.
   service. Certain services use a single role for all instances,
   while other services specialize different instances to take
   different roles. For example, the PHP Web hosting service distinguishes three
-  roles: load balancers, web servers and PHP servers.
+  roles: load balancers, web servers, and PHP servers.

 Scale the service up and down.
   When a service is started it uses a single “agent” instance. To add
@@ -96,7 +96,7 @@ Stop the service.
   button to stop the service. This stops all instances of the service.

 Remove the service.
-  Click “remove” to delete the service. At this point all the state of
+  Click “remove” to delete the service. At this point, all the state of
   the service manager is lost.

 Stop the application.
@@ -343,10 +343,10 @@ action is necessary to use PHP sessions in ConPaaS.
 Debug mode
 ----------

-By default the PHP service does not display anything in case PHP errors
+By default, the PHP service does not display anything when PHP errors
 occur while executing the application. This setting is useful for
 production, when you do not want to reveal internal information to
-external users. While developing an application it is however useful to
+external users. While developing an application, it is, however, useful to
 let PHP display errors.

 ::
@@ -373,7 +373,7 @@ for a PHP page.
 If your PHP service has a slow response time, increase the number of
 backend nodes.

-On the command line, the ``add_nodes`` sub-command can be used to add
+On the command line, the ``add_nodes`` subcommand can be used to add
 additional nodes to a service. It takes as arguments the number of
 backend nodes, web nodes and proxy nodes to add::
@@ -463,15 +463,15 @@ The MySQL service offers the capability to instantiate multiple
 instances of database nodes, which can be used to increase the
 throughput and to improve features of fault tolerance through
 replication. The multi-master structure allows any database node to
-process incoming updates, because the replication system is
+process incoming updates, the replication system being
 responsible for propagating the data modifications made by each
 member to the rest of the group and resolving any conflicts that
 might arise between concurrent changes made by different members.
 These features can be used to increase the throughput of the cluster.

-To obtain the better performance from a cluster, it is a best
-practice to use it in balanced fashion, so that each node has
-approximatively the same load of the others. To achieve this, the
+To obtain better performance from a cluster, it is a best
+practice to use it in a balanced fashion, so that each node has
+approximately the same load as the others. To achieve this, the
 service allows users to allocate special load balancer
 nodes (``glb``) which implement load balancing. Load balancer
 nodes are designed to receive all incoming database queries and
@@ -518,7 +518,7 @@ Performance Monitoring
 The MySQL service interface provides a sophisticated mechanism to monitor the
 service. The user interface, in the frontend, shows a monitoring control,
 called "Performance Monitor", that can be used to monitor a large cluster's
-behaviour. It interacts with "Ganglia", "Galera" and "MySQL" to obtain various
+behavior. It interacts with "Ganglia", "Galera" and "MySQL" to obtain various
It interacts with "Ganglia", "Galera" and "MySQL" to obtain various kinds of information. Thus, "Performance Monitor" provides a solution for maintaining control and visibility of all nodes, with a monitoring dynamic data every few seconds. @@ -533,18 +533,18 @@ It consists of three main components. - The second control highlights the cluster’s performance, with a table detailing the load, memory usage, CPU utilization, and network - traffic for each node of the cluster. Users can use these - informations in order to detect problems in their applications. The + traffic for each node of the cluster. Users can use this + information in order to detect problems in their applications. The table displays the resource utilization across all nodes, and - highlight the parameters which suggest an abnormality. For example - if CPU utilization is high, or free memory is very low this is shown + highlight the parameters which suggest an abnormality. For example, + if CPU utilization is high or free memory is very low, this is shown clearly. This may mean that processes on this node will start to - slow down, and that it may be time to add additional nodes to the - cluster. On the other hand this may indicate a malfunction of the + slow down and that it may be time to add additional nodes to the + cluster. On the other hand, this may indicate a malfunction of the specific node. - "Galera Mean Misalignment" draws a real-time measure of the mean - misalignment across the nodes. This information is derived by + misalignment across the nodes. This information is derived from Galera metrics about the average length of the receive queue since the most recent status query. If this value is noticeably larger than zero, the nodes are likely to be overloaded, and cannot apply @@ -559,7 +559,7 @@ The XtreemFS service provides POSIX compatible storage for ConPaaS. Users can create volumes that can be mounted remotely or used by other ConPaaS services, or inside applications. An XtreemFS instance consists of multiple DIR, MRC and OSD servers. The OSDs contain the actual storage, while the DIR is a directory -service and the MRC contains meta data. By default, one instance of each runs +service and the MRC contains metadata. By default, one instance of each runs inside the first agent virtual machine and the service can be scaled up and down by adding and removing additional OSD nodes. The XtreemFS documentation can be found at http://xtreemfs.org/userguide.php. @@ -570,7 +570,7 @@ SSL Certificates The XtreemFS service uses SSL certificates for authorization and authentication. There are two types of certificates, user-certificates and client-certificates. Both certificates can additionally be flagged as administrator certificates which -allows performing administrative file-systems tasks when using them to access +allow performing administrative file-systems tasks when used to access XtreemFS. Certificates are only valid for the service that was used to create them. The generated certificates are in P12-format. @@ -580,7 +580,7 @@ take the user and group with whom an XtreemFS command is called, or a mounted Xt volume is accessed. So multiple users might share a single client-certificate. On the other hand, user-certificates contain a user and group inside the certificate. So usually, each user has her personal user-certificate. Both kinds of certificate can -be used in parallel. Client-certificates are less secure, since the user and group with +be used in parallel. 
 whom files are accessed can be arbitrarily changed if the mounting user has
 local superuser rights. So client-certificates should only be used in trusted
 environments.
@@ -595,7 +595,7 @@ Accessing volumes directly
 Once a volume has been created, it can be directly mounted on a remote site by
 using the ``mount.xtreemfs`` command. A mounted volume can be used like any local
-POSIX-compatible filesystem. You need a certificate for mounting (see last section).
+POSIX-compatible filesystem. You need a certificate for mounting (see the previous section).
 The command looks like this, where
 is the IP of an agent running an XtreemFS directory service (usually the
 first agent)::
@@ -611,7 +611,7 @@ Policies
 --------

 Different aspects of XtreemFS (e.g. replica- and OSD-selection) can be
-customised by setting certain policies. Those policies can be set via the
+customized by setting certain policies. Those policies can be set via the
 ConPaaS command line client (recommended) or directly via ``xtfsutil`` (see
 the XtreemFS user guide). The commands are like follows, were is
 ``osd_sel``, ``replica_sel``, or ``replication``::
@@ -624,7 +624,7 @@ Important notes

 When a service is scaled down by removing OSDs, the data of those OSDs is
 migrated to the remaining OSDs. Always make sure there is enough free space
-for this operation to succeed. Otherwise you risk data loss.
+for this operation to succeed. Otherwise, you risk data loss.

 .. _the-generic-service:
@@ -922,7 +922,7 @@ execution is completed.
 In the web frontend, the ``run``, ``interrupt`` and ``cleanup`` buttons are
 conveniently located on the top of the page, above the instances view.
 Pressing such a button will execute the corresponding script in all the agents.
-Above the buttons there is also a parameters field which allow the user to
+Above the buttons, there is also a parameters field which allows the user to
 specify parameters which will be forwarded to the script during the execution.

 On the command line, the following commands may be used::
@@ -1013,7 +1013,7 @@ you may want to check these instructions first: :ref:`conpaas-in-a-nutshell`.
    that may appear in the VM window at this stage are usually harmless debug
    messages which can be ignored.

-#. When the the login prompt appears, the Nutshell VM is ready to be used.
+#. When the login prompt appears, the Nutshell VM is ready to be used.

 Using the Nutshell via the graphical frontend
 ---------------------------------------------
@@ -1042,7 +1042,7 @@ You can now use the frontend in the same way as any ConPaaS system, creating
 applications, services etc. Note that the services are also only accessible
 from your local machine.

-Note that also *Horizon* (the Openstack dashboard) is running on it as
+Note that *Horizon* (the OpenStack dashboard) is running on it as
 well. In case you are curious and want to have a look under the hood,
 Horizon can be reached (using HTTP, not HTTPS) at the same IP address::
@@ -1088,7 +1088,7 @@ lists all the active instances and::

 lists all the existing storage volumes.

-The Nutshell contains a *Devstack* installation of Openstack,
+The Nutshell contains a *Devstack* installation of OpenStack,
 therefore different services run and log on different tabs of a
 *screen* session. In order to stop, start or consult the logs of these
 services, connect to the screen session by executing::