
gp validate: container workspace completed; containers of a workspace pod are not supposed to do that #18091

Closed
GitMensch opened this issue Jun 28, 2023 · 24 comments · Fixed by #18143 or #18284
Labels: aspect: error-handling · feature: docker in workspaces · feature: gp validate · team: IDE · type: bug

Comments

@GitMensch

First: I definitely like this, no more "create an extra branch, temporarily switch the default branch to it, create/adjust .gitpod.yml, restart the workspace"!
... but:

gp validate currently always results in the workspace being force-closed with the message:

Oh, no! Something went wrong!
container workspace completed; containers of a workspace pod are not supposed to do that

after "Building the workspace image..."

(tested both with the default image and with image: gitpod/workspace-c)

Originally posted by @GitMensch in #7671 (comment)

I can reproduce this with https://github.com/opensourcecobol/Open-COBOL-ESQL by adding a new file (just copied from https://github.com/GitMensch/Open-COBOL-ESQL/blob/gitpod/.gitpod.yml) and running gp validate (either manually or triggered by the extension).
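
For anyone trying to reproduce this, here is a minimal sketch; the real .gitpod.yml in the linked fork may differ, and the build task below is only an assumption based on the autogen/configure/make steps quoted later in this thread:

# Minimal, assumed .gitpod.yml; the custom image alone is enough
# to trigger "Building the workspace image...".
cat > .gitpod.yml <<'EOF'
image: gitpod/workspace-c

tasks:
  - name: build
    init: ./autogen.sh && ./configure && make
EOF

# Validate the new configuration; this starts a nested workspace.
gp validate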

@akosyakov
Member

Hm, I cannot reproduce it on https://github.com/opensourcecobol/Open-COBOL-ESQL either.

Does it fail for you everywhere, or only on some repos? Do you have any special configuration? Dotfiles? Env vars?

@akosyakov
Member

akosyakov commented Jun 28, 2023

@GitMensch When it happens, are all the tasks in your original workspace already running?

We see that the runtime is panicking with runtime: failed to create new OS thread (have 6002 already; errno=11). It seems you have a very large number of threads. gp validate starts another workspace within your current workspace, so maybe you are hitting the limit there.

Try stopping all tasks in your original workspace and then run gp validate. They will be restarted in the nested workspace anyway.
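
Something along these lines should work, assuming a recent gp CLI (the exact gp tasks flags may differ per version):

gp tasks list            # show the terminals/tasks running in this workspace
gp tasks stop --all      # stop them all (or stop individual IDs from the list)
gp validate              # the tasks are restarted inside the nested workspace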

@akosyakov akosyakov added the meta: 🤔 reporter-feedback-needed label Jun 28, 2023
@GitMensch
Author

Hm. I just force-pushed the gitpod file to my fork, then stopped and deleted all workspaces, then created a new one from upstream, added the file via New File, copy+paste, saved it as .gitpod.yml, then answered the question about running the validation now with yes.

Result:

  • Building the workspace image...
  • 3 empty welcome windows opened within 1-2 minutes
  • message:

Oh, no! Something went wrong!
supervisor run error with unexpected exit code:

@GitMensch

This comment was marked as outdated.

@akosyakov
Member

@GitMensch You can use gp top to check what is going on with your workspace performance-wise.

@GitMensch

This comment was marked as outdated.

@GitMensch

This comment was marked as outdated.

@akosyakov
Member

@GitMensch When I try to run this repo, I get:

bash: ../autogen.sh: No such file or directory
bash: ../configure: No such file or directory
make: *** No targets specified and no makefile found. Stop.

I guess my setup is missing something, since I cannot reproduce your issues. Are these repos self-contained, or do they rely on some dotfiles/env vars?

@GitMensch

This comment was marked as outdated.

@akosyakov
Member

Do you see any other processes in ps -auxf before running gp validate? I tried killing all tasks and then triggering gp validate, and it works for me 🤔
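
To get a rough idea of process/thread pressure before starting the nested workspace (plain procps, nothing Gitpod-specific):

ps -auxf                      # full process tree of the workspace
ps -e --no-headers | wc -l    # number of processes
ps -eL --no-headers | wc -l   # number of threads; compare with the ~6000 from the panic above
ulimit -u                     # per-user process/thread limit inside the container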

@GitMensch

This comment was marked as outdated.

@akosyakov
Member

@GitMensch Could you check whether you have DOCKERD_ARGS set, try to clean it up with gp env, and then start a new workspace for the same context?
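
Roughly like this (flag names from memory, see gp env --help):

gp env | grep DOCKERD_ARGS    # check whether the variable is set for this workspace
gp env -u DOCKERD_ARGS        # unset it in your user settings
# then start a new workspace for the same repository so the change takes effect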

@akosyakov
Member

I think there is something in your workspace which prevents Docker from starting; it then retries many times, leaking files on our side, until the workspace has no free file handles left and crashes. I'm trying to understand where the leak is and how to surface this problem better to you, but you probably need to fix the root cause, and it seems to come from the DOCKERD_ARGS env var. I see the following error in the log:
cannot add user supplied docker args: unable to deserialize docker args: invalid character '-' in numeric literal
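
The error fits JSON parsing: a value starting with --cap-add is not a JSON document (the parser presumably treats the leading '-' as the start of a number and fails on the next character). You can see the difference locally, e.g. with jq, assuming it is available in the image:

echo '--cap-add=SYS_PTRACE --security-opt seccomp=unconfined' | jq .   # fails to parse
echo '{"remap-user": "1000"}' | jq .                                   # a valid JSON object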

@GitMensch
Author

GitMensch commented Jul 3, 2023

Ah..., yes, that is set (support asked for this when I tried to run with rr):

DOCKERD_ARGS is set to --cap-add=SYS_PTRACE --security-opt seccomp=unconfined in https://gitpod.io/user/variables for scope */*

@akosyakov
Member

akosyakov commented Jul 3, 2023

Try without them; maybe it is a parsing bug on our side. If the format is correct then we should fix it, but there are multiple issues:

  • a user doesn't have access to the supervisor logs; we should make them accessible via the Output view in VS Code and allow downloading them from a stopped/failed workspace
  • a Docker startup error does not propagate to the user in any way; maybe we should send a notification or something, especially since it is clear that we fail to parse the user args
    • an alternative: skip the user args if we cannot parse them and start without them; there is no point in trying over and over
    • maybe our parsing is broken and the passed arg is of the wrong format ❌
  • such a failure to start Docker also leaks some file handles or similar; we need to fix that

@akosyakov akosyakov added the feature: docker in workspaces, aspect: error-handling, and feature: gp validate labels and removed the meta: 🤔 reporter-feedback-needed and feature: gp validate labels Jul 3, 2023
@akosyakov
Member

@GitMensch actually I'm not sure DOCKERD_ARGS ever worked. It seems we only allow the following args [1]

@akosyakov
Member

@GitMensch our docs also say that the proper format of DOCKERD_ARGS is a JSON struct; please see https://www.gitpod.io/docs/configure/projects/environment-variables#user-specific-environment-variables

@GitMensch

This comment was marked as off-topic.

@akosyakov
Member

I think you should use the same values but pass them to Docker directly, i.e. docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined

DOCKERD_ARGS cannot do what you want; the only setting it supports is { "remap-user": "1000" }
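
For the rr use case that could look like this; the image name is only a placeholder, use whatever you actually debug in:

# Pass the capabilities to the container itself instead of to dockerd.
docker run --rm -it \
  --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  ubuntu:22.04 bash

# And if you ever need the documented dockerd setting, the only supported value is:
gp env DOCKERD_ARGS='{"remap-user": "1000"}'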

@GitMensch

This comment was marked as off-topic.

@akosyakov
Member

By Gitpod, do you mean generally (outer) or via gp validate (inner)? Gitpod does not use Docker to start the outer workspace. Maybe let's try a different angle: what is your original intent? Why do you need these flags? If you point me to a repo and tell me which things do not work, we can have a look at how to make them work. (But this no longer looks related to gp validate; maybe start another issue?)

@GitMensch

This comment was marked as outdated.

@akosyakov
Member

#9687 - I don't think a user can change it; someone from the workspace team would need to do it, provided it does not pose any security risk. I'm not an expert here, though. (cc @kylos101)

@akosyakov
Member

We reverted the fix, so it should be applied again.
