
failed to determine if the following GVK is namespaced if CRD is created in the same run #3176

Open
ecerulm opened this issue Aug 19, 2024 · 3 comments
Labels
kind/bug Some behavior is incorrect or out of spec

Comments

@ecerulm

ecerulm commented Aug 19, 2024

What happened?

I'm deploying the cert-manager Helm release plus a ClusterIssuer resource.

The cert-manager Helm release creates the ClusterIssuer CRD, but the kubernetes.yaml.v2.ConfigGroup that creates the ClusterIssuer resource appears to check properties of the ClusterIssuer CRD before it actually exists, even though the ConfigGroup has a depends_on on the cert-manager release.

    Exception: marshaling properties: awaiting input property "resources": failed to determine if the following GVK is namespaced: cert-manager.io/v1, Kind=ClusterIssuer

If I rerun pulumi up -y --skip-preview afterwards, the ClusterIssuer is created fine. That's why I think this is a timing issue between the creation of the ClusterIssuer CRD and the creation of the ClusterIssuer resource itself.

Example

import pulumi
import pulumi_kubernetes as kubernetes
from pulumi_kubernetes.helm import v3 as helmv3

# namespace, cert_manager_helm_values, kubernetes_provider, domain_name,
# region, and zone are defined elsewhere in the program.
cert_manager = helmv3.Release(
    "cert_manager",
    helmv3.ReleaseArgs(
        # https://cert-manager.io/docs/installation/helm/
        # https://artifacthub.io/packages/helm/cert-manager/cert-manager
        # https://github.com/cert-manager/cert-manager
        name="cert-manager",
        chart="cert-manager",
        namespace=namespace.id,
        version="1.15.2",
        repository_opts=helmv3.RepositoryOptsArgs(
            repo="https://charts.jetstack.io",
        ),
        values=cert_manager_helm_values,
    ),
    opts=pulumi.ResourceOptions(
        provider=kubernetes_provider,
    ),
)


def generate_clusterissuer_manifest(name, server):
    def func(args):
        # env is a jinja2.Environment configured elsewhere in the program
        template = env.get_template("letsencrypt-clusterissuer.j2.yaml")
        rendered = template.render(
            domain_name=args["domain_name"],
            region=args["region"],
            zone_id=args["zone_id"],
            name=name,
            server=server,
        )
        return rendered

    return pulumi.Output.all(
        domain_name=domain_name,
        region=region,
        zone_id=zone.id,
    ).apply(func)


# https://www.pulumi.com/registry/packages/kubernetes/api-docs/yaml/configgroup/
letsencrypt_staging_cluster_issuer_cg = kubernetes.yaml.v2.ConfigGroup(
    "letsencrypt-staging",
    yaml=generate_clusterissuer_manifest(
        name="letsencrypt-staging",
        server="https://acme-staging-v02.api.letsencrypt.org/directory",
    ),
    opts=pulumi.ResourceOptions(
        depends_on=[
            cert_manager,
        ],
        provider=kubernetes_provider,
    ),
)
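The template itself isn't included here; below is a hypothetical sketch of letsencrypt-clusterissuer.j2.yaml, assuming a typical Route53 DNS-01 setup (the spec fields are illustrative, not my actual template):

# Hypothetical sketch only; the real template is not part of this report.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer   # the cluster-scoped CR whose GVK lookup fails
metadata:
  name: {{ name }}
spec:
  acme:
    server: {{ server }}
    privateKeySecretRef:
      name: {{ name }}-account-key
    solvers:
      - dns01:
          route53:
            region: {{ region }}
            hostedZoneID: {{ zone_id }}
        selector:
          dnsZones:
            - "{{ domain_name }}"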


Output of pulumi about

pulumi about
CLI          
Version      3.129.0
Go Version   go1.22.6
Go Compiler  gc

Plugins
KIND      NAME        VERSION
resource  aws         6.49.1
resource  eks         2.7.8
resource  kubernetes  4.17.1
language  python      unknown
resource  random      4.16.3

Host     
OS       darwin
Version  14.6.1
Arch     x86_64

This project is written in python: executable='/Users/xxx/git/pulumi-aws-ecerulm/venv/bin/python' version='3.12.5'


...

Backend        
Name           xxxxx
URL            file://~
User           xxxx
Organizations  
Token type     personal

Dependencies:
NAME           VERSION
Jinja2         3.1.4
pip            24.2
pulumi_eks     2.7.8
pulumi_random  4.16.3
setuptools     72.2.0
wheel          0.44.0

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

@ecerulm ecerulm added kind/bug Some behavior is incorrect or out of spec needs-triage Needs attention from the triage team labels Aug 19, 2024
@ecerulm
Author

ecerulm commented Aug 19, 2024

The workaround I use now is to create the CRDs myself with a separate kubernetes.yaml.v2.ConfigFile that contains just the CRDs:

# https://www.pulumi.com/registry/packages/kubernetes/api-docs/yaml/configfile/
crds = kubernetes.yaml.v2.ConfigFile(
    "letsencrypt-prod",
    file="files/cert-manager.crds.yaml",
    opts=pulumi.ResourceOptions(
        provider=kubernetes_provider,
    ),
)

Then I set installCRDs: false in the cert-manager Helm values and make the ClusterIssuer depend on both the crds ConfigFile and cert-manager's Release, as sketched below.
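A minimal sketch of that wiring, reusing the variable names from the snippets above (the values mutation and comments are illustrative, not my exact code):

cert_manager_helm_values["installCRDs"] = False  # CRDs now come from the crds ConfigFile

letsencrypt_staging_cluster_issuer_cg = kubernetes.yaml.v2.ConfigGroup(
    "letsencrypt-staging",
    yaml=generate_clusterissuer_manifest(
        name="letsencrypt-staging",
        server="https://acme-staging-v02.api.letsencrypt.org/directory",
    ),
    opts=pulumi.ResourceOptions(
        depends_on=[
            crds,          # the ConfigFile that installs the CRDs
            cert_manager,  # the Helm release that installs the controller
        ],
        provider=kubernetes_provider,
    ),
)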

blampe added a commit that referenced this issue Aug 22, 2024
This changes our error handling in `Normalize` to degrade gracefully in
the case when we can't determine namespaced-ness, probably due to the
CRD not existing yet. Instead of failing, we assume the resource is
namespaced to allow the preview to succeed.

This is consistent with our error handling for this case elsewhere:
* Check: https://github.com/pulumi/pulumi-kubernetes/blob/0f834c8b0d89e0003f0dc2d527d4ca8e2cde26e9/provider/pkg/provider/provider.go#L1481-L1488
* Invoke: https://github.com/pulumi/pulumi-kubernetes/blob/0f834c8b0d89e0003f0dc2d527d4ca8e2cde26e9/provider/pkg/provider/invoke_decode_yaml.go#L49-L56

Fixes #3176
@blampe blampe removed the needs-triage Needs attention from the triage team label Aug 22, 2024
@blampe blampe mentioned this issue Aug 23, 2024
@btuffreau

btuffreau commented Oct 30, 2024

I faced a similar issue with multiple Releases, e.g. Kyverno and Karpenter. After quite a lot of testing, here is my conclusion:

kubernetes.yaml.v2.ConfigFile does not seem to work in cases where kubernetes.yaml.ConfigFile does.

A little test I ran, creating the same resource with both the v1 and v2 ConfigFile (I made sure there was no clash between the manifests):

Updating (dev):
     Type                                             Name                          Status            Info
     pulumi:pulumi:Stack                              pulumi-aetion-dev             failed            1 error
     └─ pulumi-project:k8s:core-components             core-components
 +      ├─ kubernetes:helm.sh/v3:Release              helm-kyverno                  created (67s)
 +      ├─ kubernetes:yaml:ConfigFile                 kyverno-sync-secret-crd       created
 +      │  └─ kubernetes:kyverno.io/v1:ClusterPolicy  sync-secrets                  created (3s)
 +      └─ kubernetes:yaml/v2:ConfigFile              kyverno-sync-secret-crd2       created

Diagnostics:
  pulumi:pulumi:Stack (pulumi-aetion-dev):
    error: kubernetes:yaml/v2:ConfigFile resource 'kyverno-sync-secret-crd2' has a problem: marshaling properties: awaiting input property "resources": failed to determine if the following GVK is namespaced: kyverno.io/v1, Kind=ClusterPolicy

No amount of waiting helps; as a matter of fact, Helm creates the CRDs very early, well before the Pods are rolled out and before the ConfigFile provider tries to apply a manifest (it's easy to check with kubectl while the program is running).

My suspicion lies here, but I was not able to test it. The fact is, it always works on a second run.

Anyway, I'm not sure what the difference is between the v1 and v2 providers, but I'll stick with v1 for now, as it seems to work just fine for what I'm doing.
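For reference, the v1 variant that worked in my test looks roughly like this (resource names and the file path are illustrative):

working = kubernetes.yaml.ConfigFile(
    "kyverno-sync-secret-crd",
    file="files/kyverno-sync-secrets.yaml",
    opts=pulumi.ResourceOptions(
        depends_on=[helm_kyverno],  # the Kyverno Helm release
        provider=kubernetes_provider,
    ),
)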

@EronWright
Contributor

The basic requirement is that the CRD definitely be installed before any CRs that depend on it are registered. This requirement is usually solved with the dependsOn option between a component that installs the operator and another that uses the installed types.

In preview mode, the provider maintains a cache of the CRDs that are planned, so that Pulumi can determine whether a given custom resource is namespaced or cluster-scoped. The Release/v3 resource unfortunately doesn't contribute information to said cache, which may lead to the "failed to determine if the following GVK is namespaced" error. We're tracking this limitation in #3299.
