
Build config/swarm/executor.wasm in CI #4549

Closed
mversic opened this issue May 3, 2024 · 18 comments · Fixed by #4931

@mversic
Contributor

mversic commented May 3, 2024

Currently, every developer builds the executor on their own machine and commits it to git; this executor is then used in tests. That creates too many unnecessary git conflicts and prevents me from merging backports directly into the main branch. What was the reason for doing this in the first place?

We should build the executor in CI every time before running the tests, and not track executor.wasm in git at all.
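A build-in-CI step could look roughly like the sketch below. The crate name, target, and paths are assumptions for illustration, not the repo's actual layout, and the cargo invocation is left as a comment:

```shell
# Hypothetical CI step: stage a freshly built executor for the test run
# instead of reading a git-tracked blob. Paths and crate names are assumptions.
set -eu
OUT_DIR="${OUT_DIR:-/tmp/iroha-ci-artifacts}"
mkdir -p "$OUT_DIR"
# The real build would be something like:
#   cargo build -p default_executor --release --target wasm32-unknown-unknown
# Here we only create the output file the test step would consume.
EXECUTOR_WASM="$OUT_DIR/executor.wasm"
: > "$EXECUTOR_WASM"
echo "executor staged at $EXECUTOR_WASM"
```

Tests would then read the blob from `$OUT_DIR` rather than from the working tree, so nothing compiled ever needs to be committed.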

@Erigara
Contributor

Erigara commented May 6, 2024

We also need executor.wasm when deploying Iroha.
Currently, deployment downloads the genesis and executor via direct links from GitHub.

@Erigara
Contributor

Erigara commented May 6, 2024

Also, our docker-compose files rely on the existence of executor.wasm implicitly through genesis.json.
Are end users supposed to build/download the executor on their own?

@mversic
Contributor Author

mversic commented May 6, 2024

if we build and publish it as an artifact in CI, is it possible to reference it from the compose file?

@Erigara
Contributor

Erigara commented May 6, 2024

GitHub artifacts are tricky to deal with...

We can't simply get an artifact by its name, like executor.
We either have to specify the artifact id, which is not known in the compose file, or fetch the list of artifacts with that name and pick the latest, which is not very convenient either.

Also, I'm not sure artifacts are suitable for long-term storage, since every artifact has a retention timer associated with it.
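The "fetch the list and pick the latest" step can be sketched without the API itself: ISO-8601 `created_at` timestamps sort lexicographically in chronological order, so selecting the newest artifact id reduces to a plain sort. The ids and dates below are invented stand-ins for what a `GET /repos/OWNER/REPO/actions/artifacts` listing would return:

```shell
# Stand-in for the artifact listing: one "created_at id" pair per line.
artifacts='2024-05-01T10:00:00Z 101
2024-05-06T09:30:00Z 205
2024-05-03T12:00:00Z 178'
# ISO-8601 timestamps sort chronologically as plain text,
# so after sorting the last line is the newest artifact.
latest_id=$(printf '%s\n' "$artifacts" | sort | tail -n 1 | awk '{print $2}')
echo "latest executor artifact id: $latest_id"
```

Even with this, a compose file cannot run such a query itself, which is the inconvenience being pointed out.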

@mversic
Contributor Author

mversic commented May 6, 2024

we can:

  1. persist artifacts somewhere publicly available
  2. put executor in the docker image itself (not a big fan of this)

@mversic
Contributor Author

mversic commented May 6, 2024

is it possible to run a script on the host that builds the wasm before starting the containers on docker-compose up?

edit: seems not
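Since docker-compose itself has no pre-up hook, the usual workaround is a wrapper script (or Makefile target) on the host that builds first and then starts the stack. A minimal sketch, with both steps stubbed out as placeholder functions:

```shell
set -eu
build_wasm() {
    # Stand-in for the real executor build (cargo / wasm toolchain).
    echo "building executor.wasm"
}
compose_up() {
    # Stand-in for: docker compose up -d
    echo "starting containers"
}
build_wasm
compose_up
```

This keeps the build on the host, but it only helps developers who invoke the wrapper instead of calling docker-compose directly.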

@BAStos525
Contributor

> we can:
>
> 1. persist artifacts somewhere publicly available
> 2. put executor in the docker image itself (not a big fan of this)

1. I proposed a public endpoint such as AWS S3, but not everybody involved in creating custom geneses even has an AWS account, so a lot of new routine would follow. It would also mean adding extra secrets to CI, which is better avoided when possible.
2. I told @Erigara about this option, but for Orillion, longevity and CBDC we have a custom genesis.

@mversic
Contributor Author

mversic commented May 6, 2024

> I proposed a public endpoint such as AWS S3, but not everybody involved in creating custom geneses even has an AWS account, so a lot of new routine would follow. It would also mean adding extra secrets to CI, which is better avoided when possible.

As far as we're concerned, others can store executor.wasm in their repo and keep their workflow unchanged. I don't see why their lacking access to AWS prevents us from doing this. This is my preferred option atm.

@BAStos525
Contributor

> is it possible to run a script on the host that builds the wasm before starting the containers on docker-compose up?
>
> edit: seems not

Deployment (CD) should not be in the business of building anything.

We could push executors to a dedicated external repository, but that is a rough option too. An FTP server would fit here, but it would have to be configured from scratch, which means additional effort and time.

@BAStos525
Contributor

BAStos525 commented May 6, 2024

> I proposed a public endpoint such as AWS S3, but not everybody involved in creating custom geneses even has an AWS account, so a lot of new routine would follow. It would also mean adding extra secrets to CI, which is better avoided when possible.
>
> As far as we're concerned, others can store executor.wasm in their repo and keep their workflow unchanged. I don't see why their lacking access to AWS prevents us from doing this. This is my preferred option atm.

In that case we'd have to update all deployments to download the executor either from S3 or from git. Also, should community users of the docker-compose files download executors manually before running docker-compose up?

@mversic
Contributor Author

mversic commented May 6, 2024

What we have currently also feels wrong: docker pulls the iroha2-dev image, but that image references a locally built (and git-versioned) executor.wasm. If I check out one of the previous commits, it might break.

I'm thinking there are 3 reasonable choices here:

  1. upload those build artifacts somewhere external
  2. put artifacts in the docker image
  3. require executor.wasm to be built before running docker-compose

The 2nd approach is fine, except that it's hard to edit executor.wasm when it's baked into the image rather than kept in an external volume shared with the host OS. Can we provide some escape hatch for this? The problem is how to make changes to the executor and have them reflected in all containers in a simple way.
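One possible escape hatch for the 2nd approach: bake a default executor.wasm into the image, but let a host bind mount shadow it, so host-side edits show up in every container. A compose fragment sketching this (the service name and container path are assumptions):

```yaml
services:
  irohad0:
    image: hyperledger/iroha:dev
    volumes:
      # A host-built executor shadows the blob baked into the image;
      # edits on the host are picked up by every container on restart.
      - ./config/swarm/executor.wasm:/config/executor.wasm:ro
```

Containers without the mount would fall back to the baked-in default, so the image stays usable on its own.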

@s8sato
Contributor

s8sato commented May 8, 2024

What if we do the following:

  • stop tracking any executor blobs
  • make the executor path an environment variable
  • introduce a hook that builds the default executor along with the iroha binary
  • bake the default executor blob into the docker image
  • have custom executors in a docker volume as needed
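The "executor path as an environment variable" item could behave like this sketch (the variable name is from the list above; the default path is an assumption):

```shell
# Fall back to the blob baked into the image when no override is given
# (":-" expands to the default only if EXECUTOR_PATH is unset or empty).
default="/config/executor.wasm"
EXECUTOR_PATH="${EXECUTOR_PATH:-$default}"
echo "loading executor from $EXECUTOR_PATH"
```

A custom executor mounted via a docker volume would then just be a matter of pointing `EXECUTOR_PATH` at the mounted file.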

@mversic
Contributor Author

mversic commented May 8, 2024

I want to draw everyone's attention to what happened to me when publishing the RC21.3 release.

I've been looking into why this CI job fails. All of a sudden Iroha fails to boot from docker-compose.yml, and it's unrelated to the changes in the PR. What happened is that we merged this PR, and docker-compose.yml now depends on the hyperledger/iroha:dev image instead of building the one from the current repository.

IMO all docker images should be built locally. This should be the default; if someone wants to, they can switch to downloading the docker image instead. If we want to offer a predefined network that references a static docker image, we should do that in the documentation repository, not here. These images are for development, not for deployment.

@s8sato
Contributor

s8sato commented May 9, 2024

I agree @mversic, there would be no good reason to pull the last published dev image and test it in the release workflow. I also think a test that pulls an image would duplicate the previous test that builds the image.

@s8sato
Contributor

s8sato commented May 9, 2024

My experience in #4411 led me to think about the modularity of not only the executor but also the genesis and swarm, and about the consistency between them.
The following is a suggested flow: it starts from a build hook under the repository root, injects configs into the builders, and ends by placing the artifacts:

```mermaid
graph LR
    classDef artifact fill:#8c4351,fill-opacity:0.5
    classDef env_val fill:#166775,fill-opacity:0.5
    classDef not_planned opacity:0.5
    %% legend
    artifact(artifact):::artifact
    env_val(environment variable):::env_val
    %% build tools
    root_hook(/build.rs) --> executor_builder(iroha_wasm_builder_cli)
    root_hook --> genesis_builder(kagami genesis)
    root_hook --> schema_builder(kagami schema):::not_planned
    root_hook --> swarm_builder(iroha_swarm)
    %% config injection
    base_seed(BASE_KEYPAIR_SEED):::env_val --> genesis_pub_key(GENESIS_PUBLIC_KEY):::env_val
    base_seed .-> swarm_builder
    genesis_pub_key .-> genesis_builder
    executor_path(EXECUTOR_PATH):::env_val .-> executor_builder
    executor_path .-> genesis_builder
    %% artifacts
    executor_builder --> executor(executor.wasm):::artifact
    genesis_builder --> genesis(genesis.json):::artifact
    swarm_builder --> swarm(docker-compose.*.yml):::artifact
```

@BAStos525
Contributor

BAStos525 commented May 17, 2024

> I agree @mversic, there would be no good reason to pull the last published dev image and test it in the release workflow. I also think a test that pulls an image would duplicate the previous test that builds the image

If you mean the Test docker-compose.yml before pushing step: the image shouldn't be pulled from DockerHub there. We use the load: true option when building iroha with the appropriate tags beforehand, which lets the docker-compose tests find those tags locally instead of building/pulling the iroha image from scratch again.
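The load: true flow described here looks roughly like this in a workflow file (the action version, tag, and step names are illustrative, not taken from the repo's actual CI):

```yaml
- name: Build iroha image into the local Docker daemon
  uses: docker/build-push-action@v5
  with:
    load: true                    # keep the image locally instead of pushing it
    tags: hyperledger/iroha:dev   # the tag docker-compose.yml references
- name: Test docker-compose.yml
  run: docker compose -f docker-compose.yml up --wait
```

Because the tag already exists in the local daemon, compose can start the stack without pulling from DockerHub.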

@s8sato
Contributor

s8sato commented May 17, 2024

@BAStos525 I mean that the corresponding step of the release workflow would ignore the locally built iroha:TAG and pull the iroha:dev hardcoded in docker-compose.yml

@BAStos525
Contributor

> @BAStos525 I mean that the corresponding step of the release workflow would ignore the locally built iroha:TAG and pull the iroha:dev hardcoded in docker-compose.yml

Right. For now, the only option is to manually change the docker-compose files when merging to the stable branch. Compose files are managed by swarm.
