Improve team label error and its documentation (#35)
* use better error for lack of the team label

* add prerequisites

* update readme and logo in csv

* use correct version of ceph in testing dockerfile

* add ceph version to readme

* update csv description

* update icon in csv

* Update README.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

hoptical and coderabbitai[bot] authored Mar 20, 2024
1 parent 9c465d4 commit 3ac4d69
Showing 6 changed files with 25 additions and 50 deletions.
46 changes: 8 additions & 38 deletions README.md
@@ -22,6 +22,13 @@ The Ceph S3 Operator, an open-source endeavor, is crafted to streamline the mana

## Installation

+### Prerequisites
+
+- Kubernetes v1.23.0+
+- Ceph v14.2.10+
+> Note: earlier Ceph versions [don't support the subuser bucket policy](https://github.com/ceph/ceph/pull/33714), but other features are expected to work on those releases.
+- ClusterResourceQuota CRD: `kubectl apply -f config/external-crd`

### Using Makefile

Deploy using a simple command:
@@ -40,44 +47,7 @@ helm upgrade --install ceph-s3-operator oci://ghcr.io/snapp-incubator/ceph-s3-op

### Using OLM

-All the operator releases are bundled and pushed to the [Snappcloud hub](https://github.com/snapp-incubator/snappcloud-hub) which is a hub for the catalog sources. Install using Operator Lifecycle Manager (OLM) by following these steps:
-
-1. Install [snappcloud hub catalog-source](https://github.com/snapp-incubator/snappcloud-hub/blob/main/catalog-source.yml)
-
-2. Override the `ceph-s3-operator-controller-manager-config-override` with your operator configuration.
-3. Apply the subscription manifest as shown below:
-
-```yaml
-apiVersion: operators.coreos.com/v1alpha1
-kind: Subscription
-metadata:
-  name: ceph-s3-operator
-  namespace: operators
-spec:
-  channel: stable-v0
-  installPlanApproval: Automatic
-  name: ceph-s3-operator
-  source: snappcloud-hub-catalog
-  sourceNamespace: openshift-marketplace
-  config:
-    resources:
-      limits:
-        cpu: 2
-        memory: 2Gi
-      requests:
-        cpu: 1
-        memory: 1Gi
-    volumes:
-      - name: config
-        secret:
-          items:
-            - key: config.yaml
-              path: config.yaml
-          secretName: ceph-s3-operator-controller-manager-config-override
-    volumeMounts:
-      - mountPath: /ceph-s3-operator/config/
-        name: config
-```
+You can find the operator on [OperatorHub](https://operatorhub.io/operator/ceph-s3-operator) and install it using OLM.

## Usage and Documentation

2 changes: 1 addition & 1 deletion api/v1alpha1/quota_handler.go
@@ -85,7 +85,7 @@ func findTeam(ctx context.Context, runtimeClient client.Client, suc *S3UserClaim

team, ok := ns.ObjectMeta.Labels[consts.LabelTeam]
if !ok {
-return "", fmt.Errorf("namespace %s doesn't have team label", ns.ObjectMeta.Name)
+return "", fmt.Errorf("namespace %s doesn't have the team label: %s", ns.ObjectMeta.Name, consts.LabelTeam)
}

return team, nil
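The improved message can be exercised outside the cluster. A minimal sketch, assuming `LabelTeam` as a local stand-in for `consts.LabelTeam` (the operator's actual label key may differ):

```go
package main

import "fmt"

// LabelTeam is a local stand-in for consts.LabelTeam; the operator's actual
// label key may differ.
const LabelTeam = "snappcloud.io/team"

// findTeam mirrors the patched lookup: when the namespace lacks the team
// label, the error now names the exact key the user must add.
func findTeam(nsName string, labels map[string]string) (string, error) {
	team, ok := labels[LabelTeam]
	if !ok {
		return "", fmt.Errorf("namespace %s doesn't have the team label: %s", nsName, LabelTeam)
	}
	return team, nil
}

func main() {
	if _, err := findTeam("my-ns", nil); err != nil {
		fmt.Println(err) // namespace my-ns doesn't have the team label: snappcloud.io/team
	}
}
```

Naming the missing key in the error saves users a trip to the documentation when the admission of their claim fails.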
6 changes: 2 additions & 4 deletions api/v1alpha1/s3userclaim_webhook.go
@@ -152,8 +152,7 @@ func validateQuota(suc *S3UserClaim, allErrs field.ErrorList) field.ErrorList {
case err == consts.ErrExceededNamespaceQuota:
allErrs = append(allErrs, field.Forbidden(quotaFieldPath, err.Error()))
case err != nil:
-s3userclaimlog.Error(err, "failed to validate against cluster quota")
-allErrs = append(allErrs, field.InternalError(quotaFieldPath, fmt.Errorf(consts.ContactCloudTeamErrMessage)))
+allErrs = append(allErrs, field.InternalError(quotaFieldPath, fmt.Errorf("failed to validate against cluster quota, %w", err)))
}

switch err := validateAgainstClusterQuota(ctx, suc); {
@@ -162,8 +161,7 @@ func validateQuota(suc *S3UserClaim, allErrs field.ErrorList) field.ErrorList {
case goerrors.Is(err, consts.ErrClusterQuotaNotDefined):
allErrs = append(allErrs, field.Forbidden(quotaFieldPath, err.Error()))
case err != nil:
-s3userclaimlog.Error(err, "failed to validate against cluster quota")
-allErrs = append(allErrs, field.InternalError(quotaFieldPath, fmt.Errorf(consts.ContactCloudTeamErrMessage)))
+allErrs = append(allErrs, field.InternalError(quotaFieldPath, fmt.Errorf("failed to validate against cluster quota, %w", err)))
}
return allErrs
}
18 changes: 13 additions & 5 deletions config/manifests/bases/ceph-s3-operator.clusterserviceversion.yaml

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion pkg/consts/consts.go
@@ -26,7 +26,6 @@ const (
S3UserClassImmutableErrMessage = "s3UserClass is immutable"
S3UserRefImmutableErrMessage = "s3UserRef is immutable"
S3UserRefNotFoundErrMessage = "there is no s3UserClaim regarding the defined s3UserRef"
-ContactCloudTeamErrMessage = "please contact the cloud team"

FinalizerPrefix = "s3.snappcloud.io/"
S3UserClaimCleanupFinalizer = FinalizerPrefix + "cleanup-s3userclaim"
2 changes: 1 addition & 1 deletion testing/Dockerfile
@@ -1,6 +1,6 @@
# Source(with modifications): https://github.com/ceph/go-ceph/blob/master/testing/containers/ceph/Dockerfile
ARG CEPH_IMG=quay.io/ceph/ceph
-ARG CEPH_TAG=v14.2.6
+ARG CEPH_TAG=v14.2.22
FROM ${CEPH_IMG}:${CEPH_TAG}

RUN true && \
