This is an example of the classic MNIST handwritten digit recognition task using FEDn with the PyTorch C++ API.
The working environment for this example makes use of VS Code remote containers. The development container is defined by the following files:

- Dockerfile defines the development container along with its dependencies.
- .devcontainer/devcontainer.json.tpl defines how VS Code will access and create the development container. The template needs to be copied to .devcontainer/devcontainer.json and edited. Please refer to this document for more information: https://code.visualstudio.com/docs/remote/devcontainerjson-reference.

You may need to log in to Scaleout's GitHub registry if the Dockerfile is based on ghcr.io/scaleoutsystems/tee-gc/fedn:latest.
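The setup above can be sketched as a couple of shell commands. The registry login line is commented out and uses placeholder credentials; the `{}` fallback exists only so the sketch runs outside the repo, where the template already ships:

```shell
# Prepare the devcontainer config from its template.
mkdir -p .devcontainer
# The template ships with the repo; this fallback only keeps the sketch runnable.
[ -f .devcontainer/devcontainer.json.tpl ] || echo '{}' > .devcontainer/devcontainer.json.tpl
cp .devcontainer/devcontainer.json.tpl .devcontainer/devcontainer.json
# If the Dockerfile pulls from ghcr.io, log in first (a PAT with read:packages scope is assumed):
# echo "$GITHUB_TOKEN" | docker login ghcr.io -u <username> --password-stdin
```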
Download the data:
bin/download_data.sh
Build the compute package and train the seed model:
bin/build.sh
This may take a few minutes. After completion, package.tgz and seed.npz should be present in your current working directory.
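A quick way to confirm both build artifacts landed where expected (file names from the step above; sizes will vary):

```shell
# List the build outputs, flagging any that are missing.
for f in package.tgz seed.npz; do
  if [ -f "$f" ]; then
    echo "$f: $(du -h "$f" | cut -f1)"
  else
    echo "$f: missing"
  fi
done
```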
Start FEDn:
Note: If you are running in a remote container, you need to set up the remote host data path first:
echo "HOST_DATA_DIR=/path/to/tee-mnist/data" > .env
sudo docker-compose up -d
This may take a few minutes. After this is done you should be able to access the reducer interface at https://localhost:8090.
Now navigate to https://localhost:8090 and upload package.tgz and seed.npz. Alternatively, you can upload the seed and package using the REST API as follows.
# Upload package
curl -k -X POST \
-F [email protected] \
-F helper="pytorch" \
https://localhost:8090/context
# Upload seed
curl -k -X POST \
-F [email protected] \
https://localhost:8090/models
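The two calls above can be wrapped in a small script. The `FEDN_REDUCER` variable and the `upload` helper are conveniences introduced here, not part of FEDn; `-k` skips TLS verification because the reducer uses a self-signed certificate:

```shell
# Upload the compute package and the seed model via the reducer's REST API.
BASE="${FEDN_REDUCER:-https://localhost:8090}"
upload() {
  curl -k -s -X POST "$@" || echo "upload failed (is the reducer running at $BASE?)"
}
upload -F [email protected] -F helper="pytorch" "$BASE/context"
upload -F [email protected] "$BASE/models"
```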
Finally, you can navigate again to https://localhost:8090 and start the experiment from the "control" tab. Alternatively, you can start the experiment using the REST API as follows.
# Start experiment
curl -k -X POST \
-F rounds=3 \
-F validate=True \
https://localhost:8090/control
To clean up you can run:
sudo docker-compose down
To exit the Docker environment, simply run:
exit
The compute package in this example supports running training and validation in an Intel SGX TEE via Gramine. The code was tested using Azure Confidential Computing. To enable this mode, run:
echo "LOADER=gramine-sgx" >> .env
and repeat all of the steps above.
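Since the second command appends rather than overwrites, the resulting .env should contain both settings (the data path is the placeholder from the earlier step):

```shell
HOST_DATA_DIR=/path/to/tee-mnist/data
LOADER=gramine-sgx
```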
To run the reducer and combiner in Intel SGX, you can use docker-compose-tee.yaml to start FEDn, as follows.
sudo docker-compose -f docker-compose-tee.yaml up -d
The next steps are the same as when running without Intel SGX, but it may take a bit longer for the clients to connect.
This codebase has also been tested on AMD SEV with Azure Confidential VMs. The steps to follow don't change in this case, as the whole VM memory is automatically encrypted by the Azure service via AMD SEV.