
Add Windows & macOS Acceptance Tests #875

Merged · 19 commits into main · Dec 20, 2024

Conversation

osterman (Member) commented Dec 19, 2024

what

why

  • We don't currently run any tests on Windows and macOS.
  • Slight differences between OSes can cause big problems and are hard to catch.
  • Most of our developer team is on macOS, but more and more Windows users are adopting Atmos.
  • To maintain stability, it's critical that we test all mainstream platforms.

references

Summary by CodeRabbit

  • New Features

    • Enhanced CI/CD workflow to support concurrent builds and tests across multiple operating systems (Linux, Windows, macOS).
    • Introduced platform-specific build and version targets for improved organization and clarity.
  • Bug Fixes

    • Increased timeout for build jobs to ensure completion.
  • Documentation

    • Updated workflow and build process documentation to reflect changes in job configurations and targets.
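
The matrix-based workflow described in the summary could be sketched roughly as follows. This is a minimal illustration only, assuming a Go build and the current checkout/setup-go/upload-artifact actions; the job names, targets, and steps are not the PR's exact configuration:

```yaml
# Illustrative sketch — not the exact workflow from this PR.
jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 15              # matches the increased build timeout
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - run: make build              # hypothetical target name
      - uses: actions/upload-artifact@v4
        with:
          name: atmos-${{ matrix.os }}   # artifact name includes the target OS
          path: build/
```

Running the same job definition across all three runners is what lets OS-specific differences surface in CI instead of on users' machines.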

@osterman osterman requested a review from a team as a code owner December 19, 2024 19:22
@mergify mergify bot added the triage Needs triage label Dec 19, 2024

mergify bot commented Dec 19, 2024

Important

Cloud Posse Engineering Team Review Required

This pull request modifies files that require Cloud Posse's review. Please be patient, and a core maintainer will review your changes.

To expedite this process, reach out to us on Slack in the #pr-reviews channel.

@mergify mergify bot added the needs-cloudposse Needs Cloud Posse assistance label Dec 19, 2024
@osterman osterman changed the title Add windows tests Add Windows & MacOS Acceptance Tests Dec 20, 2024
@osterman osterman added the no-release Do not create a new release (wait for additional code changes) label Dec 20, 2024
@osterman osterman changed the title Add Windows & MacOS Acceptance Tests Add Windows & macOS Acceptance Tests Dec 20, 2024
@osterman
Member Author

@coderabbitai full review

Contributor

coderabbitai bot commented Dec 20, 2024

📝 Walkthrough

The pull request introduces a comprehensive enhancement to the CI/CD workflow and build process, focusing on multi-platform support and build flexibility. The changes primarily target the GitHub Actions workflow configuration and the Makefile, enabling parallel builds and tests across Linux, Windows, and macOS environments. The modifications streamline the build targets, simplify version detection, and improve the overall robustness of the development pipeline.

Changes

File Change Summary

.github/workflows/test.yml
  - Added matrix strategy for build, test, mock, lint, and validate jobs
  - Increased build job timeout to 15 minutes
  - Dynamic OS selection using ${{ matrix.os }} and ${{ matrix.flavor.os }}
  - Updated artifact naming to include the target OS

Makefile
  - Simplified TEST variable assignment
  - Renamed build to build-default
  - Added platform-specific build targets: build-linux, build-windows, build-macos
  - Renamed version to version-default
  - Added platform-specific version targets
  - Updated .PHONY declarations
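
The renamed and added Makefile targets summarized above might follow a pattern like this. This is an illustrative sketch using target-specific GOOS exports; the PR's actual recipes and output paths may differ:

```makefile
# Illustrative sketch — not the PR's exact Makefile.
build-default:
	go build -o build/atmos .

build-linux: export GOOS=linux
build-linux: build-default

build-macos: export GOOS=darwin
build-macos: build-default

build-windows: export GOOS=windows
build-windows:
	go build -o build/atmos.exe .

.PHONY: build-default build-linux build-macos build-windows
```

Target-specific variable exports keep one shared recipe for Linux and macOS while Windows gets its own recipe for the .exe suffix.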

Sequence Diagram

sequenceDiagram
    participant Workflow as GitHub Actions
    participant Build as Build Job
    participant Test as Test Job
    participant Artifact as Artifact Management

    Workflow->>Build: Trigger matrix build
    Build->>Build: Build on Linux/Windows/macOS
    Build-->>Artifact: Upload artifacts
    Workflow->>Test: Trigger matrix tests
    Test->>Test: Run tests on multiple OS
    Test-->>Artifact: Download and verify artifacts

Suggested labels

minor



coderabbitai bot left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (1)
.github/workflows/test.yml (1)

Demo matrix configuration requires attention

Based on our investigation:

  • Only a subset of demo folders currently exist: atlantis, component-versions, context, custom-command, helmfile, library, localstack, stacks, vendoring, and workflows
  • Most commented-out demos in the matrix are not present in the examples directory
  • No documentation found about deprecation or removal of these demos

This suggests the matrix configuration needs to be updated to reflect the current state of demos.

• Remove entries for non-existent demos from the matrix configuration
• Consider adding the newly discovered demos (helmfile, library, localstack) to the matrix if they should be tested

🔗 Analysis chain

Line range hint 249-285: Verify status of commented-out demo tests.

While the cross-platform mock test setup is solid, several demo folders are commented out. Let's ensure this is intentional.

Let's check the status of these demos:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check status of commented demo folders

# List all demo folders
echo "Available demo folders:"
fd -t d '^demo-' examples/

# Check if commented-out demos have test files
echo -e "\nChecking for test files in commented demos:"
for demo in component-manifest custom-command json-validation opa-validation opentofu project stacks terraform terraform-overrides workflows yaml-anchors mock-architecture stack-templating multi-cloud; do
  echo -n "demo-$demo: "
  fd -t f '^test\.' "examples/demo-$demo" || echo "No test files found"
done

Length of output: 6486


Script:

#!/bin/bash
# Let's check if these demos were removed or renamed by looking at git history
git log --oneline -n 50 -- examples/ | grep -i "demo-\|rename\|remove"

# Also check if there are any related issues or discussions
echo -e "\nChecking README or documentation for demo status:"
rg -i "demo.*deprecated|demo.*removed|demo.*renamed" -g "*.md"

Length of output: 179

🧹 Nitpick comments (2)
Makefile (1)

24-50: Consider optimizing Windows version check.

The version system is well-structured, but the Windows version check could be optimized. Currently, it rebuilds the executable every time we check the version.

Consider this optimization:

-version-windows: build-windows
+version-windows:
+	@if not exist ".\build\atmos.exe" make build-windows
 	.\build\atmos.exe version
.github/workflows/test.yml (1)

Line range hint 1-397: Consider implementing test result aggregation.

The workflow structure is solid, but tracking test results across platforms could be enhanced.

Consider adding a test report aggregation step that combines results from all platforms into a single report. This could help quickly identify platform-specific issues. Tools like JUnit XML output format could be useful here.
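
One way to aggregate results, sketched under the assumption that each test job uploads a per-OS JUnit XML artifact (the job and artifact names here are hypothetical, not part of this PR):

```yaml
# Hypothetical aggregation job — artifact names and paths are illustrative.
  report:
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          pattern: test-results-*     # hypothetical per-OS artifact names
          merge-multiple: true
          path: results
      - name: Inspect combined reports
        run: ls -R results            # replace with a JUnit merge/report step
```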

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 80058a8 and 0bb901f.

📒 Files selected for processing (2)
  • .github/workflows/test.yml (10 hunks)
  • Makefile (2 hunks)
🔇 Additional comments (4)
Makefile (2)

1-3: LGTM! Clean and modern approach.

The simplified TEST variable leverages Go's built-in vendor directory exclusion. This is the way, warrior! 💪


59-59: PHONY targets properly declared!

All new targets are correctly marked as PHONY, preventing conflicts with same-named files. Excellent attention to Makefile best practices!

.github/workflows/test.yml (2)

67-67: Well-organized artifact handling across platforms!

The platform-specific artifact naming scheme ensures clear separation and proper distribution of builds. This is organized like a Spartan phalanx! 🛡️

Also applies to: 169-169, 222-222, 285-285


27-38: Robust matrix strategy for cross-platform builds!

The matrix configuration effectively handles all target platforms with proper timeout adjustments. This is battle-ready! 🛡️

Let's verify the matrix configuration:

✅ Verification successful

Matrix strategy is battle-tested and properly configured!

The matrix configuration is solid with:

  • Correct fail-fast setting for independent platform builds
  • Appropriate timeout of 15 minutes
  • Well-defined OS and target mappings for all three platforms
  • No conflicting matrix configurations in other workflow files
  • No conditional overrides that could affect the matrix execution
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify matrix strategy configuration

# Check if all necessary OS runners are available
gh api -X GET /repos/{owner}/{repo}/actions/runners | jq -r '.runners[] | select(.status=="online") | .os' | sort -u

# Verify recent workflow runs for each platform
gh run list --limit 5 --json conclusion,name,headBranch | jq '.[] | select(.name=="Tests")'

Length of output: 400


Script:

#!/bin/bash
# Let's verify the workflow configuration directly from the files

# Check if there are any other workflow files that might conflict
fd -e yml -e yaml . .github/workflows/ --exec cat {}

# Look for any conditional job execution or environment setup
rg -A 5 "if: " .github/workflows/

# Check for any environment variables or setup steps that might affect the matrix
rg -A 5 "env:" .github/workflows/

Length of output: 42530

Makefile — review comment resolved
@mergify mergify bot removed the triage Needs triage label Dec 20, 2024
@aknysh aknysh merged commit 262affa into main Dec 20, 2024
38 of 44 checks passed
@aknysh aknysh deleted the test-windows branch December 20, 2024 05:04
@mergify mergify bot removed the needs-cloudposse Needs Cloud Posse assistance label Dec 20, 2024
This was referenced Dec 20, 2024
@coderabbitai coderabbitai bot mentioned this pull request Dec 23, 2024

These changes were released in v1.131.0.
