include::modules/con_common-deployment-scenarios.adoc[]

include::modules/con_single-location.adoc[leveloffset=+1]

include::modules/con_single-location-with-segregated-subnets.adoc[leveloffset=+1]

include::modules/con_multiple-locations.adoc[leveloffset=+1]

ifdef::katello[]
include::modules/con_content-view-scenarios.adoc[leveloffset=+1]
endif::[]

ifdef::satellite[]
include::modules/con_projectserver-with-multiple-manifests.adoc[leveloffset=+1]
endif::[]

include::modules/con_host-group-structures.adoc[leveloffset=+1]

[id="common-deployment-scenarios_{context}"]
= Common deployment scenarios

This section provides a brief overview of common deployment scenarios for {ProjectName}.
Note that many variations and combinations of the following layouts are possible.

[id="content-view-scenarios_{context}"]
= Content view scenarios

The following section provides general scenarios for deploying content views and lifecycle environments.

The default lifecycle environment called *Library* gathers content from all connected sources.
Associating hosts directly with the Library is not recommended because it prevents any testing of content before making it available to hosts.
Instead, create a lifecycle environment path that suits your content workflow.
The following scenarios are common; a command-line sketch of creating such a path follows the list:

* *A single lifecycle environment* – content from the Library is promoted directly to the production stage.
This approach limits complexity but still allows for testing the content within the Library before making it available to hosts.
+
image::lifecycle-path-basic-satellite.png[A single lifecycle environment]

* *A single lifecycle environment path* – both operating system and application content is promoted through the same path.
The path can consist of several stages (for example, *Development*, *QA*, *Production*), which enables thorough testing but requires additional effort.
+
image::lifecycle-path-simple-satellite.png[A single lifecycle environment path]

* *Application-specific lifecycle environment paths* – each application has a separate path, which allows for individual application release cycles.
You can associate specific compute resources with application lifecycle stages to facilitate testing.
On the other hand, this scenario increases maintenance complexity.
+
image::lifecycle-path-diverged-satellite.png[Application-specific lifecycle environment paths]
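
For example, a minimal Hammer sketch of building a three-stage path such as the one above, assuming an organization named `My_Organization` (option names can vary between Hammer versions):

----
# hammer lifecycle-environment create --name "Development" \
--prior "Library" --organization "My_Organization"
# hammer lifecycle-environment create --name "QA" \
--prior "Development" --organization "My_Organization"
# hammer lifecycle-environment create --name "Production" \
--prior "QA" --organization "My_Organization"
----

Each environment names its predecessor with the `--prior` option, which links the stages into a single promotion path.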

The following content view scenarios are common; a sketch of assembling a composite content view follows the list:

* *All-in-one content view* – a content view that contains all necessary content for the majority of your hosts.
Reducing the number of content views is an advantage in deployments with constrained resources (time, storage space) or with uniform host types.
However, this scenario limits content view capabilities such as time-based snapshots or intelligent filtering.
Any change in content sources affects a large proportion of hosts.

* *Host-specific content view* – a dedicated content view for each host type.
This approach can be useful in deployments with a small number of host types (up to 30).
However, it prevents sharing content across host types as well as separation based on criteria other than the host type (for example, between operating system and applications).
When a critical update arrives, every content view has to be updated, which increases maintenance effort.

* *Host-specific composite content view* – a dedicated combination of content views for each host type.
This approach enables separating host-specific and shared content; for example, you can have dedicated content views for the operating system and application content.
By using a composite, you can manage your operating system and applications separately and at different frequencies.

* *Component-based content view* – a dedicated content view for a specific application.
For example, a database content view can be included in several composite content views.
This approach allows for greater standardization, but it leads to an increased number of content views.
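
As an illustration of the composite approach, the following minimal sketch assembles a composite content view from two existing, already published content views. The names `cv-rhel9`, `cv-database`, and `ccv-db-server` are hypothetical, and the `content-view component` subcommand assumes a recent Hammer version:

----
# hammer content-view create --composite --name "ccv-db-server" \
--organization "My_Organization"
# hammer content-view component add --composite-content-view "ccv-db-server" \
--component-content-view "cv-rhel9" --latest --organization "My_Organization"
# hammer content-view component add --composite-content-view "ccv-db-server" \
--component-content-view "cv-database" --latest --organization "My_Organization"
# hammer content-view publish --name "ccv-db-server" \
--organization "My_Organization"
----

The `--latest` flag tracks the newest published version of each component, so republishing `cv-database` on its own cycle does not disturb the operating system content.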

The optimal solution depends on the nature of your host environment.
Avoid creating a large number of content views, but keep in mind that the size of a content view affects the speed of related operations (publishing, promoting).

Also make sure that when creating a subset of packages for the content view, all dependencies are included as well.
Note that Kickstart repositories should not be added to content views, as they are used for host provisioning only.

[id="host-group-structures_{context}"]
= Host group structures

Host groups can be nested to inherit parameters from each other, which allows you to design host group hierarchies that fit particular workflows.
A well-planned host group structure can help to simplify the maintenance of host settings.
This section outlines four approaches to organizing host groups.

.Host group structuring examples
image::host-group-structures-satellite.png[Host group structuring examples]

*Flat structure*

The advantage of a flat structure is limited complexity because inheritance is avoided.
In a deployment with few host types, this scenario is the best option.
However, without inheritance there is a risk of duplicating many settings across host groups.
*Lifecycle environment based structure*

In this hierarchy, the first host group level is reserved for parameters specific to a lifecycle environment.
The second level contains operating system related definitions, and the third level contains application specific settings.
Such a structure is useful in scenarios where responsibilities are divided among lifecycle environments, for example, when the *Development*, *QA*, and *Production* lifecycle stages each have a dedicated owner.
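
As a sketch of this lifecycle-based hierarchy in Hammer, you can nest host groups with the `--parent` option. The group names below are hypothetical, and parent groups are referenced by name, which assumes the names are unique:

----
# hammer hostgroup create --name "development"
# hammer hostgroup create --name "rhel9" --parent "development"
# hammer hostgroup create --name "webserver" --parent "rhel9"
----

Hosts assigned to `webserver` inherit parameters from both `rhel9` and `development`, so a setting defined once at a higher level does not have to be repeated.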

*Application based structure*

This hierarchy is based on the roles of hosts in a specific application.
For example, it enables defining network settings for groups of back-end and front-end servers.
The selected characteristics of hosts are segregated, which supports Puppet-focused management of complex configurations.
However, content views can only be assigned to host groups at the bottom level of this hierarchy.

*Location based structure*

In this hierarchy, the distribution of locations is aligned with the host group structure.
This approach is the best option in a scenario where the location ({SmartProxyServer}) topology determines many other attributes.
On the other hand, this structure complicates sharing parameters across locations.
In complex environments with a large number of applications, the number of host group changes required for each configuration change therefore increases significantly.

[id="multiple-locations_{context}"]
= Multiple locations

It is recommended to create at least one {SmartProxyServer} per geographic location.
This practice can save bandwidth because hosts obtain content from a local {SmartProxyServer}.
Synchronization of content from remote repositories is done only by the {SmartProxy}, not by each host in a location.
In addition, this layout makes the provisioning infrastructure more reliable and easier to configure.

ifndef::satellite[]
include::snip_red-hat-images.adoc[]
endif::[]

image::common/system-architecture-satellite.png[Content Flow in {ProjectName}]
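
To make content available from a location-local {SmartProxyServer}, you assign lifecycle environments to it and synchronize it. A minimal sketch, assuming a {SmartProxyServer} named `capsule.example.com` (option names can vary between versions):

----
# hammer capsule content add-lifecycle-environment \
--name "capsule.example.com" --lifecycle-environment "Production" \
--organization "My_Organization"
# hammer capsule content synchronize --name "capsule.example.com"
----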

[id="projectserver-with-multiple-manifests_{context}"]
= {ProjectServer} with multiple manifests

If you plan to have more than one Red{nbsp}Hat{nbsp}Network account, or if you want to manage systems belonging to another entity that is also a Red{nbsp}Hat{nbsp}Network account holder, then you and the other account holder can assign subscriptions, as required, to manifests.
A customer that does not have a {Project} subscription but has other valid subscriptions can create a Subscription Asset Manager manifest, which can be used with {Project}.
You can then use the multiple manifests in one {ProjectServer} to manage multiple organizations.

If you must manage systems but do not have access to the subscriptions for the RPMs, you must use the {RHEL} {Project} Add-On.
For more information, see https://www.redhat.com/en/technologies/management/satellite[{Project} Add-On].

The following diagram shows two Red{nbsp}Hat{nbsp}Network account holders who want their systems to be managed by the same {Project} installation.
In this scenario, Example Corporation 1 can allocate any subset of their 60 subscriptions to a manifest; in this example, they have allocated 30.
The manifest can be imported into {Project} as a distinct organization.
This allows system administrators to manage Example Corporation 1's systems using {Project} completely independently of Example Corporation 2's organizations (R&D, Operations, and Engineering).

.{ProjectServer} with multiple manifests
image::server-multiple-manifests-satellite.png[{ProjectServer} with multiple manifests]

When creating a Red{nbsp}Hat Subscription Manifest:

* Add the subscription for {ProjectServer} to the manifest if you plan a disconnected or self-registered {ProjectServer}.
This is not necessary for a connected {ProjectServer} that is subscribed by using the Subscription Manager utility on the base system.

* Add subscriptions for all {SmartProxyServers} you want to create.

* Add subscriptions for all Red{nbsp}Hat products you want to manage with {Project}.

* Note the date when the subscriptions are due to expire and plan for their renewal before the expiry date.

* Create one manifest per organization.
You can use multiple manifests, and they can be from different Red{nbsp}Hat subscriptions.

{ProjectName} allows the use of future-dated subscriptions in a manifest.
This enables uninterrupted access to repositories when you add future-dated subscriptions to a manifest before the expiry date of the existing subscriptions.

Note that you can modify the Red{nbsp}Hat Subscription Manifest and reload it to {ProjectServer} when your infrastructure changes or when you add more subscriptions.

Do not delete manifests.
If you delete a manifest from the Red{nbsp}Hat Customer Portal or in the {ProjectWebUI}, all of your content hosts will be unregistered.
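
As an illustration of managing a per-organization manifest from the command line, the following sketch uploads a manifest and later refreshes it after changes; the file path and organization name are hypothetical:

----
# hammer subscription upload --file ~/manifest_corp1.zip \
--organization "Example Corporation 1"
# hammer subscription refresh-manifest \
--organization "Example Corporation 1"
----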

[id="single-location-with-segregated-subnets_{context}"]
= Single location with segregated subnets

Your infrastructure might require multiple isolated subnets even if {ProjectName} is deployed in a single geographic location.
You can achieve this by deploying multiple {SmartProxyServers} with DHCP and DNS services, but the recommended way is to create segregated subnets using a single {SmartProxy}.
This {SmartProxy} is then used to manage hosts and compute resources in those segregated networks, ensuring that they only have to access the {SmartProxy} for provisioning, configuration, errata, and general management.
For more information on configuring subnets, see {ManagingHostsDocURL}[_Managing Hosts_].
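
For example, a minimal sketch of defining one segregated subnet and pointing its DHCP, DNS, and TFTP services at that single {SmartProxy} (the network values and the {SmartProxy} ID `2` are hypothetical):

----
# hammer subnet create --name "segregated-1" \
--network "192.168.10.0" --mask "255.255.255.0" \
--gateway "192.168.10.1" \
--dhcp-id 2 --dns-id 2 --tftp-id 2 \
--organizations "My_Organization" --locations "My_Location"
----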

[id="single-location_{context}"]
= Single location

An _integrated {SmartProxy}_ is a virtual {SmartProxyServer} that is created by default in {ProjectServer} during the installation process.
This means that {ProjectServer} can be used to provision directly connected hosts for a {Project} deployment in a single geographic location, so only one physical server is needed.
The base systems of isolated {SmartProxies} can be managed directly by {ProjectServer}; however, it is not recommended to use this layout to manage other hosts in remote locations.