Merge pull request CanonicalLtd#754 from degville/741-1-menu
Updated table of contents to improve navigability
degville committed Jan 26, 2018
1 parent 44213e2 commit 58329f9
Showing 6 changed files with 201 additions and 274 deletions.
142 changes: 32 additions & 110 deletions en/index.md
@@ -49,11 +49,36 @@ MAAS works with any configuration system, and is recommended by the teams
behind both [Chef][about-chef] and [Juju][about-juju] as a physical
provisioning system.

The [web UI][webui] provides a responsive interface to the majority of MAAS
functionality, while the [CLI][manage-cli] and [REST API][api] facilitate
configuration and large-scale automation.
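
As a brief illustration of the CLI workflow, the following sketch registers a
CLI profile and lists the machines MAAS manages. The profile name `admin` and
the region controller address are placeholders; substitute values from your
own installation.

```bash
# Retrieve the API key for the 'admin' user (run on the region controller)
APIKEY=$(sudo maas apikey --username=admin)

# Register a CLI profile named 'admin' against the region API
maas login admin http://<region-controller>:5240/MAAS/ "$APIKEY"

# List the machines that MAAS currently manages
maas admin machines read
```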

![web UI showing node view][img__webui]

!!! Note:
Windows, RHEL and SUSE images require
[Ubuntu Advantage][ubuntu-advantage] to work properly with MAAS.


## Key components and colocation of all services

The key components of a MAAS installation are the region controller and the
rack controller. See [Concepts and terms][concepts-controllers] for how each
is defined.

Unless there is a specific reason not to, it is recommended to run both
controllers on the same system. A no-fuss way to achieve this is to install
the `maas` metapackage, or to install from the Ubuntu Server ISO.
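
For example, on Ubuntu the colocated (all-in-one) setup can be installed from
packages along the lines of the sketch below; the admin username and email are
arbitrary examples.

```bash
# Install the region and rack controllers together via the metapackage
sudo apt update
sudo apt install maas

# Create the first administrative user for the web UI, CLI and API
# (prompts for a password)
sudo maas createadmin --username=admin --email=admin@example.com
```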

Multiple region and rack controllers are required if
[high availability][manage-ha] and/or load balancing is desired (see the HA
page for details).

Note that the all-in-one solution provides a DHCP service. Review your
existing network design to determine whether this will conflict with any
DHCP service already running. See [DHCP][dhcp] for more on this subject.
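
One rough way to check whether another DHCP server is already answering on the
network is nmap's broadcast discovery script, sketched below; this assumes nmap
is installed and that you are permitted to probe the network in question.

```bash
# Broadcast a DHCP discover and report any servers that respond
sudo nmap --script broadcast-dhcp-discover -e <interface>
```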

![intro-arch-overview][img__arch-overview]

## How MAAS works

MAAS manages a pool of nodes. After registering ("Enlisting" state) a new
@@ -93,116 +118,6 @@ Juju terminology. However, everything that was stated earlier still applies.
For instance, if Juju removes a machine then MAAS will, in turn, release that
machine to the pool.
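
For instance, releasing a machine back into the pool can also be done manually
with the CLI; in this sketch `admin` is the CLI profile name and the system ID
identifies the machine within MAAS.

```bash
# Release an allocated machine back into the MAAS pool
maas admin machine release <system-id>
```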


## Key components and colocation of all services

The key components of a MAAS installation are the region controller and the
rack controller. See [Concepts and terms][concepts-controllers] for how each
is defined.

Unless there is a specific reason not to, it is recommended to run both
controllers on the same system. A no-fuss way to achieve this is to install
the `maas` metapackage, or to install from the Ubuntu Server ISO.

Multiple region and rack controllers are required if
[high availability][manage-ha] and/or load balancing is desired (see the HA
page for details).

Note that the all-in-one solution provides a DHCP service. Review your
existing network design to determine whether this will conflict with any
DHCP service already running. See [DHCP][dhcp] for more on this subject.


## Installation methods

There are three ways to install MAAS:

- From the Ubuntu Server ISO
- From software packages ("debs")
- As a self-contained LXD environment

These methods, and their respective advantages, are fleshed out on the
[Installation][maas-install] page.


## Minimum requirements

The minimum requirements for the machines that run MAAS vary widely depending
on local implementation and usage.

Below, resource estimates are provided based on MAAS components and operating
system (Ubuntu Server). A test (or proof of concept) and a production
environment are considered.


^# Test environment

This is a proof of concept scenario where all MAAS components are installed
on a single host. *Two* complete sets of images (latest two Ubuntu
LTS releases) for a *single* architecture (amd64) have been assumed.

| Component                                            | Memory (MB) | CPU (GHz) | Disk (GB) |
| --------------------------------------------------- | ----------- | --------- | --------- |
| [Region controller][concepts-controllers] (minus PostgreSQL) | 512 | 0.5 | 5 |
| PostgreSQL | 512 | 0.5 | 5 |
| [Rack controller][concepts-controllers] | 512 | 0.5 | 5 |
| Ubuntu Server (including logs) | 512 | 0.5 | 5 |

Therefore, the approximate requirements for this scenario are: 2 GB memory,
2 GHz CPU, and 20 GB of disk space.


^# Production environment

This is a production scenario that is designed to handle a high number of
sustained client connections. Both high availability (region and rack) and load
balancing (region) have been implemented.

Even though extra space has been reserved for images (database and rack
controller), some images, such as those for Microsoft Windows, may require a
lot more space (plan accordingly).

| Component                                            | Memory (MB) | CPU (GHz) | Disk (GB) |
| --------------------------------------------------- | ----------- | --------- | --------- |
| [Region controller][concepts-controllers] (minus PostgreSQL) | 2048 | 2.0 | 5 |
| PostgreSQL | 2048 | 2.0 | 20 |
| [Rack controller][concepts-controllers] | 2048 | 2.0 | 20 |
| Ubuntu Server (including logs) | 512 | 0.5 | 20 |

Therefore, the approximate requirements for this scenario are:

- A region controller (including PostgreSQL) is installed on one host: 4.5 GB
memory, 4.5 GHz CPU, and 45 GB of disk space.
- A region controller (including PostgreSQL) is duplicated on a second
host: 4.5 GB memory, 4.5 GHz CPU, and 45 GB of disk space.
- A rack controller is installed on a third host: 2.5 GB memory, 2.5 GHz CPU,
and 40 GB of disk space.
- A rack controller is duplicated on a fourth host: 2.5 GB memory, 2.5 GHz CPU,
and 40 GB of disk space.

!!! Note:
Figures in the above two tables are for the MAAS infrastructure only.
That is, they do not cover resources needed on the nodes that will subsequently
be added to MAAS. That said, node machines should have IPMI-based BMC
controllers for power cycling; see [BMC power types][power-types].

Examples of factors that influence hardware specifications include:

- the number of connecting clients (client activity)
- the manner in which services are distributed
- whether [high availability][manage-ha] is used
- whether [load balancing][load-balancing] is used
- the number of images that are stored (disk space affecting PostgreSQL and
the rack controller)

Equally, these estimates do not take into account a possible
[local image mirror][mirror], which would be a large consumer of disk space.

One rack controller should not be used to service more than 1000 nodes (whether
on the same or multiple subnets). There is no load balancing at the rack level,
so further independent rack controllers will be needed, with each one servicing
its own subnet(s).


<!-- LINKS -->

[about-chef]: https://www.chef.io/chef
@@ -216,3 +131,10 @@
[power-types]: nodes-power-types.md
[load-balancing]: manage-ha.md#load-balancing-(optional)
[mirror]: installconfig-images-mirror.md
[webui]: installconfig-webui.md
[manage-cli]: manage-cli.md
[api]: api.md

<!-- IMAGES -->
[img__webui]: ../media/intro__2.3_webui.png
[img__arch-overview]: ../media/intro-arch-overview.png
13 changes: 0 additions & 13 deletions en/intro-architecture.md

This file was deleted.

43 changes: 0 additions & 43 deletions en/intro-management.md

This file was deleted.

84 changes: 84 additions & 0 deletions en/intro-requirements.md
@@ -0,0 +1,84 @@
Title: Requirements
table_of_contents: True

# Requirements

The minimum requirements for the machines that run MAAS vary widely depending
on local implementation and usage.

Below, resource estimates are provided based on MAAS components and operating
system (Ubuntu Server). A test (or proof of concept) and a production
environment are considered.


## Test environment

This is a proof of concept scenario where all MAAS components are installed
on a single host. *Two* complete sets of images (latest two Ubuntu
LTS releases) for a *single* architecture (amd64) have been assumed.

| Component                                            | Memory (MB) | CPU (GHz) | Disk (GB) |
| --------------------------------------------------- | ----------- | --------- | --------- |
| [Region controller][concepts-controllers] (minus PostgreSQL) | 512 | 0.5 | 5 |
| PostgreSQL | 512 | 0.5 | 5 |
| [Rack controller][concepts-controllers] | 512 | 0.5 | 5 |
| Ubuntu Server (including logs) | 512 | 0.5 | 5 |

Therefore, the approximate requirements for this scenario are: 2 GB memory,
2 GHz CPU, and 20 GB of disk space.
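
A quick, informal way to check whether a candidate test host meets these
totals is to inspect its resources from the shell with standard utilities:

```bash
free -m    # total memory in MB (roughly 2048 or more for this scenario)
nproc      # number of CPU cores available
df -h /    # free disk space on the root filesystem (roughly 20 GB needed)
```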


## Production environment

This is a production scenario that is designed to handle a high number of
sustained client connections. Both high availability (region and rack) and load
balancing (region) have been implemented.

Even though extra space has been reserved for images (database and rack
controller), some images, such as those for Microsoft Windows, may require a
lot more space (plan accordingly).

| Component                                            | Memory (MB) | CPU (GHz) | Disk (GB) |
| --------------------------------------------------- | ----------- | --------- | --------- |
| [Region controller][concepts-controllers] (minus PostgreSQL) | 2048 | 2.0 | 5 |
| PostgreSQL | 2048 | 2.0 | 20 |
| [Rack controller][concepts-controllers] | 2048 | 2.0 | 20 |
| Ubuntu Server (including logs) | 512 | 0.5 | 20 |

Therefore, the approximate requirements for this scenario are:

- A region controller (including PostgreSQL) is installed on one host: 4.5 GB
memory, 4.5 GHz CPU, and 45 GB of disk space.
- A region controller (including PostgreSQL) is duplicated on a second
host: 4.5 GB memory, 4.5 GHz CPU, and 45 GB of disk space.
- A rack controller is installed on a third host: 2.5 GB memory, 2.5 GHz CPU,
and 40 GB of disk space.
- A rack controller is duplicated on a fourth host: 2.5 GB memory, 2.5 GHz CPU,
and 40 GB of disk space.

!!! Note:
Figures in the above two tables are for the MAAS infrastructure only.
That is, they do not cover resources needed on the nodes that will subsequently
be added to MAAS. That said, node machines should have IPMI-based BMC
controllers for power cycling; see [BMC power types][power-types].
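
As an illustration of what IPMI-based power control involves, a node's BMC can
typically be queried and cycled with `ipmitool`; the address and credentials
below are placeholders, and MAAS performs these operations automatically once
a node's power type is configured.

```bash
# Query and cycle node power via its BMC (values are placeholders)
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> power status
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> power cycle
```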

Examples of factors that influence hardware specifications include:

- the number of connecting clients (client activity)
- the manner in which services are distributed
- whether [high availability][manage-ha] is used
- whether [load balancing][load-balancing] is used
- the number of images that are stored (disk space affecting PostgreSQL and
the rack controller)

Equally, these estimates do not take into account a possible
[local image mirror][mirror], which would be a large consumer of disk space.

One rack controller should not be used to service more than 1000 nodes (whether
on the same or multiple subnets). There is no load balancing at the rack level,
so further independent rack controllers will be needed, with each one servicing
its own subnet(s).

[concepts-controllers]: intro-concepts.md#controllers
[manage-ha]: manage-ha.md
[load-balancing]: manage-ha.md#load-balancing-(optional)
