
feat: yet another attempt to add windows builds #231

Merged: 126 commits merged into conda-forge:main on Jan 14, 2025

Conversation

baszalmstra
Member

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

Fixes #32

This PR is another attempt to add Windows builds (see #134).

For now I have disabled all other builds so that I can test the Windows part first. I made this PR a draft so we don't accidentally merge it.

@conda-forge-webservices
Contributor

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

I do have some suggestions for making it better though...

For recipe:

  • It looks like the 'libtorch' output doesn't have any tests.

@baszalmstra marked this pull request as a draft on April 5, 2024 13:00
recipe/meta.yaml (outdated review comment, resolved)
@conda-forge-webservices
Contributor

Hi! This is the friendly automated conda-forge-linting service.

I wanted to let you know that I linted all conda-recipes in your PR (recipe) and found some lint.

Here's what I've got...

For recipe:

  • Old-style Python selectors (py27, py35, etc) are only available for Python 2.7, 3.4, 3.5, and 3.6. Please use explicit comparisons with the integer py, e.g. # [py==37] or # [py>=37]. See lines [54]

For recipe:

  • It looks like the 'libtorch' output doesn't have any tests.

@conda-forge-webservices
Contributor

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

I do have some suggestions for making it better though...

For recipe:

  • It looks like the 'libtorch' output doesn't have any tests.

@baszalmstra
Member Author

Both pipelines failed because they ran out of disk space:

FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/runtime/static/te_wrapper.cpp.obj 
C:\PROGRA~1\MICROS~2\2022\ENTERP~1\VC\Tools\MSVC\1429~1.301\bin\HostX64\x64\cl.exe  /nologo /TP -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNOMINMAX -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_MIMALLOC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_UCRT_LEGACY_INFINITY -Dtorch_cpu_EXPORTS -I%SRC_DIR%\build\aten\src -I%SRC_DIR%\aten\src -I%SRC_DIR%\build -I%SRC_DIR% -I%SRC_DIR%\third_party\onnx -I%SRC_DIR%\build\third_party\onnx -I%SRC_DIR%\third_party\foxi -I%SRC_DIR%\build\third_party\foxi -I%SRC_DIR%\third_party\mimalloc\include -I%SRC_DIR%\torch\csrc\api -I%SRC_DIR%\torch\csrc\api\include -I%SRC_DIR%\caffe2\aten\src\TH -I%SRC_DIR%\build\caffe2\aten\src\TH -I%SRC_DIR%\build\caffe2\aten\src -I%SRC_DIR%\build\caffe2\..\aten\src -I%SRC_DIR%\torch\csrc -I%SRC_DIR%\third_party\miniz-2.1.0 -I%SRC_DIR%\third_party\kineto\libkineto\include -I%SRC_DIR%\third_party\kineto\libkineto\src -I%SRC_DIR%\aten\src\ATen\.. -I%SRC_DIR%\c10\.. -I%SRC_DIR%\third_party\pthreadpool\include -I%SRC_DIR%\third_party\cpuinfo\include -I%SRC_DIR%\third_party\fbgemm\include -I%SRC_DIR%\third_party\fbgemm -I%SRC_DIR%\third_party\fbgemm\third_party\asmjit\src -I%SRC_DIR%\third_party\ittapi\src\ittnotify -I%SRC_DIR%\third_party\FP16\include -I%SRC_DIR%\third_party\fmt\include -I%SRC_DIR%\build\third_party\ideep\mkl-dnn\include -I%SRC_DIR%\third_party\ideep\mkl-dnn\src\..\include -I%SRC_DIR%\third_party\flatbuffers\include -external:I%SRC_DIR%\build\third_party\gloo -external:I%SRC_DIR%\cmake\..\third_party\gloo -external:I%SRC_DIR%\third_party\protobuf\src -external:I%SRC_DIR%\third_party\XNNPACK\include -external:I%SRC_DIR%\third_party\ittapi\include -external:I%SRC_DIR%\cmake\..\third_party\eigen -external:I%SRC_DIR%\third_party\ideep\mkl-dnn\include\oneapi\dnnl -external:I%SRC_DIR%\third_party\ideep\include -external:I%SRC_DIR%\caffe2 -external:W0 /DWIN32 /D_WINDOWS /GR /EHsc /bigobj /FS -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /utf-8 /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /O2 /Ob2 /DNDEBUG /bigobj -DNDEBUG -std:c++17 -MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /bigobj -O2 -DONNX_BUILD_MAIN_LIB -openmp:experimental /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\runtime\static\te_wrapper.cpp.obj /Fdcaffe2\CMakeFiles\torch_cpu.dir\ /FS -c %SRC_DIR%\torch\csrc\jit\runtime\static\te_wrapper.cpp
%SRC_DIR%\torch\csrc\jit\runtime\static\te_wrapper.cpp : fatal error C1085: Cannot write compiler generated file: '%SRC_DIR%\build\caffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\runtime\static\te_wrapper.cpp.obj': No space left on device

What would be the most idiomatic way to solve this issue?

@weiji14
Member

weiji14 commented Apr 6, 2024

Try following https://conda-forge.org/docs/maintainer/conda_forge_yml/#azure to clear some disk space. Set this in conda-forge.yml

azure:
  free_disk_space: true

and then rerender the feedstock.

@Tobias-Fischer
Contributor

Tobias-Fischer commented Apr 6, 2024

I think there's little we can do - the Azure free disk space setting is already enabled. I'd try and see if these build locally. Perhaps there is a way to use the Quansight servers for Windows as well, the same way they are used for Linux builds? If not, I guess if there are some volunteers to build these locally then this would be an option - I did that for aarch64 for a while for qt. Conda-forge has a Windows server too, but disk space has always been quite restricted there as well, so it might be a bit of a pain.

@jakirkham
Member

Perhaps cross-compiling Windows from Linux is worth trying? Here is a different feedstock PR that does this (conda-forge/polars-feedstock#187).

If we were to use Quansight resources for Windows, being able to run the build on Linux (so cross-compiling) would be very helpful.

@baszalmstra
Member Author

Try following conda-forge.org/docs/maintainer/conda_forge_yml/#azure to clear some disk space. Set this in conda-forge.yml

azure:
  free_disk_space: true

Sadly that's already set:

free_disk_space: true

I think there's little we can do - the Azure free disk space setting is already enabled. I'd try and see if these build locally. Perhaps there is a way to use the Quansight servers for Windows as well, the same way they are used for Linux builds?

I assume you mean the runners provided through open-gpu-server by Quansight and MetroStar? This PR only builds the CPU-only version, but if we also start building for CUDA I think this is the only possible way forward (let alone for other related repositories like tensorflow). However, the open-gpu-server doesn't seem to provide any Windows images. Do you know who I should contact to get the ball rolling?

If not, I guess if there are some volunteers to build these locally then this would be an option

That would be an option, but I'd prefer to automate and open-source things as much as possible. Having something hooked up to this repository would be ideal.

Perhaps cross-compiling Windows from Linux is worth trying?

The native code of the example you linked uses Rust, which makes this much easier. I doubt that this would be easy to achieve with pytorch.

@baszalmstra
Member Author

I also expect another error when the actual linking starts. On my local machine that takes at least 16 GB of memory. The CUDA version will most likely require more.

@jakirkham
Member

Perhaps cross-compiling Windows from Linux is worth trying?

The native code of the example you linked uses Rust, which makes this much easier. I doubt that this would be easy to achieve with pytorch.

If we don't try, we won't know

@baszalmstra
Member Author

If we don't try, we won't know

Although that is technically true, it's already hard enough to build pytorch natively. Adding cross-compilation into the mix seems to me to complicate things even further. I'd much rather focus on getting native builds working first, even if we need to modify the infrastructure to do so. I think having the ability to do resource-intensive Windows builds would be a huge benefit for the conda-forge ecosystem in general.

However, if all else fails cross-compiling seems like a worthwhile avenue to explore.

@bkpoon
Member

bkpoon commented Apr 6, 2024

One thing to try is to move the build from D:\ to a directory that you have write access to on C:\. I have done this on a personal feedstock where I needed much more disk space. You can modify your conda-forge.yml file with

azure:
  settings_win:
    variables:
      CONDA_BLD_PATH: C:\\Miniconda\\envs\\

You should have roughly 70 GB free on C:\.

@baszalmstra
Member Author

Thanks! I added that to the PR. I quickly searched GitHub and it seems C:\bld\ is used more often, so I tried that.
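
For reference, a minimal sketch of what that could look like in conda-forge.yml (the C:\bld path here is an assumption based on what other feedstocks appear to use, not necessarily the exact change in this PR):

azure:
  settings_win:
    variables:
      # build in a short path on C:\ instead of the small D:\ drive
      CONDA_BLD_PATH: C:\\bld\\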

@bkpoon
Member

bkpoon commented Apr 6, 2024

Just make sure that the directory exists and is writeable. Also, you need to rerender for the variable to be set. This comment should trigger the bot.

@conda-forge-admin, please rerender

@hmaarrfk
Contributor

hmaarrfk commented Apr 6, 2024

This PR only builds the CPU-only version, but if we also start building for CUDA I think this is the only possible way forward (let alone for other related repositories like tensorflow). However, the open-gpu-server doesn't seem to provide any Windows images. Do you know who I should contact to get the ball rolling?

A bit of history: back when this feedstock was created six years ago, pytorch upstream officially suggested that people install one of two distinct packages, pytorch-cpu or pytorch-gpu. It therefore felt appropriate to create a pytorch-cpu package, since it would throw an error for those trying to install pytorch-gpu. Those instructions have since changed upstream.

I personally feel that, for Windows users, we would HURT their experience by not having a GPU package in 2024.

@baszalmstra
Member Author

I personally feel that, for Windows users, we would HURT their experience by not having a GPU package in 2024.

Couldn't agree more. I started with CPU only to be able to make incremental progress. My goal is definitely to build the CUDA version too!

@hmaarrfk
Contributor

hmaarrfk commented Apr 9, 2024

Well, a few things:

  1. I might try to build locally.
  2. After the local build works for one Python version, I might try to enable the megabuilds. When you build locally, the pytorch library compilation is reused, so the build takes "1.2x" time instead of the "4x" time caused by recompiling the library for each Python version.
  3. Try to enable CUDA using the CI.

Typically we "stop" the compilation on the CIs when we reach your stage (it seems like it is working OK enough...).

@Tobias-Fischer
Contributor

Hi @baszalmstra @hmaarrfk - do you have any updates on this? It would be amazing to see this happen :)!

@baszalmstra
Member Author

@Tobias-Fischer I'm still working on the CUDA builds, but it's a slow process because it takes ages to build them locally, so iteration times are suuuper slow.

In parallel we are also looking into getting large Windows runners into the conda-forge infrastructure.

@baszalmstra
Member Author

Small update:

[screenshot of the local build output]

I have something compiling locally. There are still lots of issues (for example, Windows builds of pytorch 2.1.2 don't compile with Python 3.12), but I'm making steady progress. Currently getting megabuilds to work. I will push when I have something reliably working.

@baszalmstra
Member Author

baszalmstra commented May 12, 2024

I got to the testing stage and noticed this:

- OMP_NUM_THREADS=4 python ./test/run_test.py || true # [not win and not (aarch64 and cuda_compiler_version != "None")]

However, this seems to always fail with the following (taken from the logs of the latest release):

Ignoring disabled issues:  ['']
Unable to import boto3. Will not be emitting metrics.... Reason: No module named 'boto3'
Missing pip dependency: pytest-rerunfailures, please run `pip install -r .ci/docker/requirements-ci.txt`

Some dependencies are missing, in particular:

  • pytest-rerunfailures
  • pytest-shard (not on conda-forge)
  • pytest-flakefinder (not on conda-forge)
  • pytest-xdist

(as can be seen here https://github.com/pytorch/pytorch/blob/6c8c5ad5eaf47a62fafbb4a2747198cbffbf1ff0/test/run_test.py#L1705)

Given that the test is allowed to fail (due to || true), should we just remove it? Or put in the effort to fix these tests?

@h-vetinari
Member

Given that the test is allowed to fail (due to || true), should we just remove it? Or put in the effort to fix these tests?

The more we fix, the better. If it's really a lot of failures, we might not fix them all right away (though depending on the severity of the failures, we might want to think twice about releasing something in that state).

In any case, let's leave the testing in, add the required dependencies, and pick up as many fixes as we can.
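
As a rough sketch, adding the dependencies that are already available might look something like this in the recipe's test section (the surrounding keys are assumptions, not the feedstock's actual meta.yaml; pytest-shard and pytest-flakefinder would first need to be packaged on conda-forge):

test:
  requires:
    - pytest
    - pytest-xdist
    - pytest-rerunfailures
    # pytest-shard and pytest-flakefinder are not on conda-forge yet,
    # so they cannot be listed here until someone packages them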

@Tobias-Fischer
Contributor

So something in my last few commits fixed the builds. Also, my gut feeling that the test_mkldnn tests led to the segfaults was right. The rest of the test suite ran fine, with 14 failed, 7421 passed, 1433 skipped, 10 deselected, 31 xfailed, 75947 warnings. I've re-added the compilers that we previously disabled; let's see how many tests fail due to this (at least a few, judging from the logs).

@Tobias-Fischer
Contributor

And this PR provides a hint about the likely source of the mkldnn segfaults: pytorch/pytorch#138834

We could either try to apply this PR and see if it solves our issues (I am still not well versed enough to understand the OpenMP implementations here on conda-forge and in that PR), or try to use llvm-openmp instead of intel-openmp already in this PR, as that seems to be the source of the conflict (but again, I might be wrong).

@isuruf - could you please take a look at pytorch/pytorch#138834 and let me know your opinion?
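
For illustration only, the llvm-openmp option might look roughly like this in the Windows run requirements (the section layout and selectors are assumptions, not the feedstock's actual recipe, and the swap may turn out to be unnecessary):

requirements:
  run:
    # hypothetical swap: pull in llvm-openmp rather than intel-openmp on win
    # - intel-openmp  # [win]
    - llvm-openmp     # [win]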

@isuruf
Member

isuruf commented Jan 9, 2025

I don't understand. What's the issue with using intel-openmp?

@Tobias-Fischer
Contributor

Tobias-Fischer commented Jan 9, 2025

I don't understand. What's the issue with using intel-openmp?

It has somehow resolved itself after the merge, apologies for the noise!

PS: I was trying to run the test suite for pip-installed pytorch on the conda-forge Windows server, and I think I killed it - double apologies :(
UPDATE: Server still running, must have been the connection that crashed.

@Tobias-Fischer
Contributor

For reference, here are the test failures for the pip-installed pytorch 2.5.1 (sorry for the screenshot - I have no idea how to copy+paste from remote desktop)
[screenshot of the test failures, taken 2025-01-09 at 4:22:08 pm]

At the moment, in this PR, we have this for comparison: 10 failed, 7422 passed, 1433 skipped, 13 deselected, 31 xfailed, 75967 warnings (note that the intersection between the two sets of test failures is empty)

The failures are:

FAILED [0.1176s] test/test_nn.py::TestNN::test_Conv1d_dilated - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
FAILED [0.1497s] test/test_nn.py::TestNN::test_Conv1d_pad_same_dilated - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
FAILED [0.2559s] test/test_nn.py::TestNN::test_Conv2d_pad_same_dilated - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
FAILED [0.5184s] test/test_nn.py::TestNN::test_Conv2d_padding - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 1,
FAILED [0.4737s] test/test_nn.py::TestNN::test_Conv2d_strided - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 1,
FAILED [0.4108s] test/test_nn.py::TestNN::test_Conv3d_dilated - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
FAILED [0.3311s] test/test_nn.py::TestNN::test_Conv3d_dilated_strided - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
FAILED [0.9756s] test/test_nn.py::TestNN::test_Conv3d_pad_same_dilated - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
FAILED [1.4088s] test/test_nn.py::TestNN::test_Conv3d_stride - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 1,
FAILED [1.5850s] test/test_nn.py::TestNN::test_Conv3d_stride_padding - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 1,

All failures look more or less the same, here is one in detail:

2025-01-09T05:31:58.0749523Z FAILED [0.1176s] test/test_nn.py::TestNN::test_Conv1d_dilated - torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
2025-01-09T05:31:58.0749607Z numerical:tensor(-0.2027)
2025-01-09T05:31:58.0749684Z analytical:tensor(-0.1468)
2025-01-09T05:31:58.0749690Z 
2025-01-09T05:31:58.0749897Z The above quantities relating the numerical and analytical jacobians are computed 
2025-01-09T05:31:58.0750133Z in fast mode. See: https://github.com/pytorch/pytorch/issues/53876 for more background 
2025-01-09T05:31:58.0750337Z about fast mode. Below, we recompute numerical and analytical jacobians in slow mode:
2025-01-09T05:31:58.0750341Z 
2025-01-09T05:31:58.0750402Z Numerical:
2025-01-09T05:31:58.0750531Z  tensor([[-0.1548,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0750631Z         [ 0.0000, -0.1548,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0750728Z         [-0.2432,  0.0000, -0.1548,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0750841Z         ...,
2025-01-09T05:31:58.0750966Z         [ 0.0000,  0.0000,  0.0000,  ...,  0.2555,  0.0000,  0.0728],
2025-01-09T05:31:58.0751064Z         [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.2555,  0.0000],
2025-01-09T05:31:58.0751164Z         [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.2555]])
2025-01-09T05:31:58.0751225Z Analytical:
2025-01-09T05:31:58.0751343Z tensor([[-0.1548,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0751434Z         [ 0.0000, -0.1548,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0751525Z         [-0.2432,  0.0000, -0.1548,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0751589Z         ...,
2025-01-09T05:31:58.0751678Z         [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0751775Z         [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
2025-01-09T05:31:58.0751868Z         [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000]])
2025-01-09T05:31:58.0751874Z 
2025-01-09T05:31:58.0752018Z The max per-element difference (slow mode) is: 6.479986854533841.
2025-01-09T05:31:58.0752024Z 
2025-01-09T05:31:58.0752030Z 
2025-01-09T05:31:58.0752177Z To execute this test, run the following from the base repo dir:
2025-01-09T05:31:58.0752287Z     python test\test_nn.py TestNN.test_Conv1d_dilated

I am currently running tests again with test_mkldnn enabled and will report back once that's done.

In the meantime, if anyone has opinions about the dozen test failures, please comment :)

Do we think it's worth trying some other configurations (non-mkl, cuda, ..) after the current run (assuming it goes ok)?

@rgommers

rgommers commented Jan 9, 2025

Do we think it's worth trying some other configurations (non-mkl, cuda, ..) after the current run (assuming it goes ok)?

I would suggest letting the dust settle for a while. Having a first PyTorch package on Windows is high-value; the rest is lower-value. CUDA would add more than non-mkl, in case one would like to try a next build config later. The upstream Windows CUDA packages were discussed as candidates for dropping multiple times, since they are a lot of work and not all that relevant to production needs, only to local development (and it's possible to use WSL for that too).

@Tobias-Fischer
Contributor

Tobias-Fischer commented Jan 9, 2025

No additional test failures with the mkldnn tests enabled, new summary: 10 failed, 7492 passed, 1446 skipped, 13 deselected, 31 xfailed, 75976 warnings

Do we think it's worth trying some other configurations (non-mkl, cuda, ..) after the current run (assuming it goes ok)?

I would suggest letting the dust settle for a while. Having a first PyTorch package on Windows is high-value; the rest is lower-value. CUDA would add more than non-mkl, in case one would like to try a next build config later. The upstream Windows CUDA packages were discussed as candidates for dropping multiple times, since they are a lot of work and not all that relevant to production needs, only to local development (and it's possible to use WSL for that too).

Ok - let me see how the CUDA mkl build is going.

@danpetry
Contributor

danpetry commented Jan 9, 2025

if anyone has opinions about the dozen test failures

For whatever it's worth, it looks fine to me. As pointed out, it's comparable to the number of failures in their pip package. The pip build is getting TypeErrors while these are all maths-accuracy errors, so less critical afaics.

@Tobias-Fischer marked this pull request as ready for review on January 10, 2025 01:55
@Tobias-Fischer
Contributor

CUDA+mkl build succeeded - hooray!

I've marked this PR as ready for review @conda-forge/pytorch-cpu - looking for any feedback before enabling the full build pipeline.

My plan would be to mark the blas_impl: generic variant as unix-only (https://github.com/baszalmstra/pytorch-cpu-feedstock/blob/44c603513eb1166b44de29f5858763ae519ac340/recipe/conda_build_config.yaml#L9), then remove the skip in https://github.com/baszalmstra/pytorch-cpu-feedstock/blob/44c603513eb1166b44de29f5858763ae519ac340/recipe/meta.yaml#L64-L66 and rerender.
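
As a rough sketch of that plan (the entries shown are assumptions, not the exact contents of the linked files):

# conda_build_config.yaml: restrict the generic BLAS variant to unix
blas_impl:
  - mkl
  - generic   # [unix]

# meta.yaml: then drop the corresponding Windows skip, i.e. remove a line like
#   skip: true  # [win]
# so that the Windows jobs are generated on the next rerender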

Comment on lines +17 to +18
# TODO Temporary pin, remove
{% set mkl = "<2025" %}
Member


This is noted as temporary; what's the plan/status here?

Contributor

@Tobias-Fischer Jan 10, 2025


To be honest, I'm not sure whether the right (mkl-compatible) version of intel-openmp would be pulled in without it. I can test after getting some more feedback; I want to avoid running CI more than needed right now.

@h-vetinari
Member

h-vetinari commented Jan 10, 2025

I'm attempting a merge of this PR and #305 in #316. All the commits here are maintained 1:1, and this PR will show up as merged if/once #316 is merged. 🤞 we get everything passing this time

The only question that remains: who writes a blog post about this epic journey? 😛

huge thanks to @baszalmstra @Tobias-Fischer for the work here, and of course to prefix.dev for sponsoring the server!!! 🙏 🥳

@Tobias-Fischer
Contributor

Tobias-Fischer commented Jan 10, 2025

Happy to write a blog post - very happy to jointly write with others involved @baszalmstra et al. :)

@baszalmstra
Member Author

Amazing! I'd be happy to contribute to a blog post!

@hmaarrfk merged commit 44c6035 into conda-forge:main on Jan 14, 2025
4 checks passed
Successfully merging this pull request may close these issues.

Windows builds