Move Deployment Scenarios sections into separate modules (#3525)
* Move Deployment Scenarios sections into separate modules
* Drop an obsolete file
* Drop {context} from deployment scenario IDs

---------

Co-authored-by: Maximilian Kolb <[email protected]>
asteflova and maximiliankolb authored Jan 7, 2025
1 parent f17af8b commit c87daa0
Showing 11 changed files with 162 additions and 172 deletions.
13 changes: 13 additions & 0 deletions guides/common/assembly_common-deployment-scenarios.adoc
@@ -0,0 +1,13 @@
include::modules/con_common-deployment-scenarios.adoc[]

include::modules/con_single-location-with-segregated-subnets.adoc[leveloffset=+1]

include::modules/con_multiple-locations.adoc[leveloffset=+1]

include::modules/con_content-view-scenarios.adoc[leveloffset=+1]

ifdef::katello,orcharhino,satellite[]
include::modules/con_projectserver-with-multiple-manifests.adoc[leveloffset=+1]
endif::[]

include::modules/con_host-group-structures.adoc[leveloffset=+1]
5 changes: 5 additions & 0 deletions guides/common/modules/con_common-deployment-scenarios.adoc
@@ -0,0 +1,5 @@
[id="common-deployment-scenarios"]
= Common deployment scenarios

This section provides a brief overview of common deployment scenarios for {ProjectName}.
Note that many variations and combinations of the following layouts are possible.
51 changes: 51 additions & 0 deletions guides/common/modules/con_content-view-scenarios.adoc
@@ -0,0 +1,51 @@
[id="content-view-scenarios"]
= Content view scenarios

This section provides general scenarios for deploying content views together with lifecycle environments.

The default lifecycle environment, called *Library*, gathers content from all connected sources.
Associating hosts directly with Library is not recommended because it makes content available to hosts without any prior testing.
Instead, create a lifecycle environment path that suits your content workflow.
The following scenarios are common (see the example after this list):

* *A single lifecycle environment* – content from Library is promoted directly to the production stage.
This approach limits complexity but still allows you to test content within Library before making it available to hosts.
+
image::lifecycle-path-basic-satellite.png[A single lifecycle environment]

* *A single lifecycle environment path* – both operating system and application content are promoted through the same path.
The path can consist of several stages (for example, *Development*, *QA*, and *Production*), which enables thorough testing but requires additional effort.
+
image::lifecycle-path-simple-satellite.png[A single lifecycle environment path]

* *Application-specific lifecycle environment paths* – each application has a separate path, which allows for individual application release cycles.
You can associate specific compute resources with application lifecycle stages to facilitate testing.
On the other hand, this scenario increases maintenance complexity.
+
image::lifecycle-path-diverged-satellite.png[Application-specific lifecycle environment paths]
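
For illustration, the following Hammer CLI sketch creates a multi-stage lifecycle environment path.
The organization and stage names are examples; adapt them to your workflow.

----
# Build the path Library -> Development -> QA -> Production.
# Each new environment names its predecessor with --prior.
$ hammer lifecycle-environment create --name "Development" \
--prior "Library" --organization "Example Org"
$ hammer lifecycle-environment create --name "QA" \
--prior "Development" --organization "Example Org"
$ hammer lifecycle-environment create --name "Production" \
--prior "QA" --organization "Example Org"
----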


The following content view scenarios are common (see the example after this list):

* *All-in-one content view* – a content view that contains all content required by the majority of your hosts.
Reducing the number of content views is an advantage in deployments with constrained resources (time, storage space) or with uniform host types.
However, this scenario limits content view capabilities such as time-based snapshots or intelligent filtering.
Any change in content sources affects a large proportion of your hosts.

* *Host-specific content view* – a dedicated content view for each host type.
This approach can be useful in deployments with a small number of host types (up to 30).
However, it prevents sharing content across host types, as well as separation based on criteria other than host type (for example, between operating system and applications).
When a critical update arrives, every content view has to be updated, which increases the maintenance effort.

* *Host-specific composite content view* – a dedicated combination of content views for each host type.
This approach enables separating host-specific and shared content; for example, you can have dedicated content views for operating system and application content.
By using a composite content view, you can manage your operating system and applications separately and at different frequencies.

* *Component-based content view* – a dedicated content view for a specific application.
For example, a database content view can be included in several composite content views.
This approach allows for greater standardization, but it leads to an increased number of content views.
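
As a minimal sketch of the composite approach, the following Hammer CLI commands publish a component content view and include it in a composite content view.
The names are examples, and option names can vary between versions:

----
# Create and publish a component content view for shared OS content
$ hammer content-view create --name "RHEL-OS" --organization "Example Org"
$ hammer content-view publish --name "RHEL-OS" --organization "Example Org"

# Create a composite content view and add the component to it
$ hammer content-view create --composite --name "WebServer" \
--organization "Example Org"
$ hammer content-view component add \
--composite-content-view "WebServer" \
--component-content-view "RHEL-OS" --latest \
--organization "Example Org"
----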

The optimal solution depends on the nature of your host environment.
Avoid creating a large number of content views, but keep in mind that the size of a content view affects the speed of related operations, such as publishing and promoting.
Also make sure that when you create a subset of packages for a content view, all dependencies are included as well.
Note that Kickstart repositories should not be added to content views because they are used for host provisioning only.
@@ -17,4 +17,4 @@ In smaller deployments or when you do not require content versioning and environ

.Additional resources
* For more information, see {ContentManagementDocURL}Managing_Application_Lifecycles_content-management[Managing application lifecycles], {ContentManagementDocURL}Managing_Content_Views_content-management[Managing content views], and {ContentManagementDocURL}Restricting_Hosts_Access_to_Content_content-management[Restricting hosts' access to content] in _{ContentManagementDocTitle}_.
* For examples of content view deployments, see xref:chap-Architecture_Guide-Deployment_Scenarios[].
* For examples of content view deployments, see xref:common-deployment-scenarios[].
32 changes: 32 additions & 0 deletions guides/common/modules/con_host-group-structures.adoc
@@ -0,0 +1,32 @@
[id="host-group-structures"]
= Host group structures

Because host groups can be nested and inherit parameters from each other, you can design host group hierarchies that fit particular workflows.
A well-planned host group structure can help to simplify the maintenance of host settings.
This section outlines four approaches to organizing host groups and closes with a brief example.

[[figu-Life_Cycle_Environment_Based_Structure]]
.Host group structuring examples

image::host-group-structures-satellite.png[Host group structuring examples]

Flat structure::
The advantage of a flat structure is its limited complexity because inheritance is avoided.
In a deployment with only a few host types, this scenario is the best option.
However, without inheritance, there is a risk of duplicating many settings between host groups.

Lifecycle environment based structure::
In this hierarchy, the first host group level is reserved for parameters specific to a lifecycle environment.
The second level contains operating system related definitions, and the third level contains application-specific settings.
Such a structure is useful in scenarios where responsibilities are divided among lifecycle environments, for example, with a dedicated owner for each of the *Development*, *QA*, and *Production* stages.

Application based structure::
This hierarchy is based on the roles of hosts in a specific application.
For example, it enables you to define network settings for groups of back-end and front-end servers.
The selected characteristics of hosts are segregated by role, which supports Puppet-focused management of complex configurations.
However, content views can only be assigned to host groups at the bottom level of this hierarchy.

Location based structure::
In this hierarchy, the distribution of locations is aligned with the host group structure.
This approach is the best option in a scenario where the location ({SmartProxyServer}) topology determines many other attributes.
On the other hand, this structure complicates sharing parameters across locations.
Therefore, in complex environments with a large number of applications, the number of host group changes required for each configuration change increases significantly.
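
For illustration, a minimal Hammer CLI sketch of a lifecycle environment based hierarchy; the group names are examples:

----
# Nested host groups inherit parameters from their parents:
# production -> rhel9 -> postgresql
$ hammer hostgroup create --name "production"
$ hammer hostgroup create --name "rhel9" --parent "production"
$ hammer hostgroup create --name "postgresql" --parent "rhel9"
----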
13 changes: 13 additions & 0 deletions guides/common/modules/con_multiple-locations.adoc
@@ -0,0 +1,13 @@
[id="multiple-locations"]
= Multiple locations

{Team} recommends creating at least one {SmartProxyServer} per geographic location.
This practice saves bandwidth because hosts obtain content from a local {SmartProxyServer}.
Only the {SmartProxy} synchronizes content from remote repositories, rather than each host in a location.
In addition, this layout makes the provisioning infrastructure more reliable and easier to configure.

ifndef::satellite[]
include::snip_red-hat-images.adoc[]
endif::[]

image::common/system-architecture-satellite.png[Content Flow in {ProjectName}]
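
As an illustration, in a Katello-based installation where Hammer exposes Smart Proxy content operations under `hammer capsule`, you can push synchronized content to a {SmartProxyServer} at a remote location; the host name is an example:

----
# List Smart Proxies, then synchronize content to one of them
$ hammer capsule list
$ hammer capsule content synchronize --name "proxy.emea.example.com"
----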
@@ -18,7 +18,7 @@ endif::[]

ifeval::["{context}" == "planning"]
.Additional resources
* For examples of deployment scenarios, see xref:chap-Architecture_Guide-Deployment_Scenarios[].
* For examples of deployment scenarios, see xref:common-deployment-scenarios[].
ifdef::katello[]
* For information about managing organizations and locations, see {ManagingOrganizationsLocationsDocURL}[_{ManagingOrganizationsLocationsDocTitle}_].
endif::[]
@@ -0,0 +1,38 @@
[id="projectserver-with-multiple-manifests"]
= {ProjectServer} with multiple manifests

If you plan to have more than one Red{nbsp}Hat{nbsp}Network account, or if you want to manage systems belonging to another entity that is also a Red{nbsp}Hat{nbsp}Network account holder, you and the other account holder can assign subscriptions, as required, to manifests.
A customer that does not have a {Project} subscription but holds other valid subscriptions can create a Subscription Asset Manager manifest and use it with {Project}.
You can then use the multiple manifests in one {ProjectServer} to manage multiple organizations.

If you must manage systems but do not have access to the subscriptions for the RPMs, you must use {RHEL} {Project} Add-On.
For more information, see https://www.redhat.com/en/technologies/management/satellite[{Project} Add-On].

The following diagram shows two Red{nbsp}Hat{nbsp}Network account holders who want their systems to be managed by the same {Project} installation.
In this scenario, Example Corporation 1 can allocate any subset of their 60 subscriptions to a manifest; in this example, they have allocated 30.
The manifest can be imported into {Project} as a distinct organization.
This allows system administrators to manage Example Corporation 1's systems using {Project} completely independently of Example Corporation 2's organizations (R&D, Operations, and Engineering).

.{ProjectServer} with multiple manifests
image::server-multiple-manifests-satellite.png[{ProjectServer} with multiple manifests]

When creating a Red{nbsp}Hat subscription manifest:

* Add the subscription for {ProjectServer} to the manifest if you plan a disconnected or self-registered {ProjectServer}.
This is not necessary for a connected {ProjectServer} that is subscribed by using the Subscription Manager utility on the base system.

* Add subscriptions for all {SmartProxyServers} you want to create.

* Add subscriptions for all Red{nbsp}Hat products you want to manage with {Project}.

* Note the date when the subscriptions are due to expire and plan for their renewal before the expiry date.

* Create one manifest per organization.
You can use multiple manifests, and they can be from different Red{nbsp}Hat subscriptions.

{ProjectName} allows the use of future-dated subscriptions in the manifest.
This enables uninterrupted access to repositories when future-dated subscriptions are added to a manifest before the expiry date of existing subscriptions.

Note that you can modify the Red{nbsp}Hat subscription manifest and reload it to {ProjectServer} in case of any changes in your infrastructure or when adding more subscriptions.
Do not delete manifests.
If you delete a manifest from the Red{nbsp}Hat Customer Portal or in the {ProjectWebUI}, all of your content hosts are unregistered.
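
For illustration, a sketch of importing one manifest per organization by using the Hammer CLI; the organization name and file path are examples:

----
# Create an organization and import its subscription manifest
$ hammer organization create --name "Example Corporation 1"
$ hammer subscription upload --file ./manifest_corp1.zip \
--organization "Example Corporation 1"
----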
@@ -0,0 +1,7 @@
[id="single-locations-with-segregated-subnets"]
= Single location with segregated subnets

Your infrastructure might require multiple isolated subnets even if {ProjectName} is deployed in a single geographic location.
You can achieve this, for example, by deploying multiple {SmartProxyServers} with DHCP and DNS services, but the recommended way is to create segregated subnets by using a single {SmartProxy}.
This {SmartProxy} is then used to manage hosts and compute resources in those segregated networks, which ensures that they only have to access the {SmartProxy} for provisioning, configuration, errata, and general management.
For more information on configuring subnets, see {ManagingHostsDocURL}[_{ManagingHostsDocTitle}_].
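
For illustration, a minimal Hammer CLI sketch that defines a segregated subnet and points its provisioning services at a single {SmartProxy}; the names, addresses, and IDs are examples:

----
# Create a subnet that uses Smart Proxy ID 2 for DHCP, DNS, and TFTP
$ hammer subnet create --name "isolated-subnet-1" \
--network "192.168.100.0" --mask "255.255.255.0" \
--dhcp-id 2 --dns-id 2 --tftp-id 2
----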
2 changes: 1 addition & 1 deletion guides/doc-Planning_for_Project/master.adoc
@@ -37,7 +37,7 @@ include::common/assembly_supported-usage-and-versions-of-project-components.adoc

include::common/assembly_deployment-path.adoc[leveloffset=+1]

include::topics/Deployment_Scenarios.adoc[]
include::common/assembly_common-deployment-scenarios.adoc[leveloffset=+1]

include::topics/Provisioning_Concepts.adoc[]
