
MNIST example - PyTorch C++

This is an example of the classic MNIST handwritten digit recognition task using FEDn with the PyTorch C++ API. Optionally, you can run the example with Intel SGX.


Prerequisites

The working environment for this example makes use of VS Code remote containers. The development container is defined by the following files:

  1. Dockerfile defines the development container along with its dependencies.
  2. .devcontainer/devcontainer.json.tpl defines how VS Code will access and create the development container. The template needs to be copied to .devcontainer/devcontainer.json and edited. Please refer to this document for more information: https://code.visualstudio.com/docs/remote/devcontainerjson-reference.
  3. You may need to log in to Scaleout's GitHub registry if the Dockerfile is based on ghcr.io/scaleoutsystems/tee-gc/fedn:latest.
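The template copy step above can be sketched as follows. This is a minimal sketch, run from the repository root; the fabricated template contents in the snippet are a placeholder so the commands are self-contained, not the real template shipped with the repo.

```shell
# One-time devcontainer setup sketch. In the repo, the real
# .devcontainer/devcontainer.json.tpl already exists; the fallback
# below only fabricates a stand-in so this snippet runs anywhere.
mkdir -p .devcontainer
[ -f .devcontainer/devcontainer.json.tpl ] || \
    printf '{"name": "tee-mnist"}\n' > .devcontainer/devcontainer.json.tpl

# Copy the template into place, then edit the copy for your environment:
cp .devcontainer/devcontainer.json.tpl .devcontainer/devcontainer.json
```

If the Dockerfile is based on the ghcr.io image, authenticate first with `docker login ghcr.io`, using a GitHub personal access token with the read:packages scope as the password.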

Running the example (pseudo-distributed)

Download the data:

bin/download_data.sh

Build the compute package and train the seed model:

bin/build.sh

This may take a few minutes. After completion, package.tgz and seed.npz should be present in your current working directory.

Start FEDn:

Note: If you are running in a remote container, you need to set up the remote host data path: echo "HOST_DATA_DIR=/path/to/tee-mnist/data" > .env.
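For example, the .env file might end up looking like this (the data path is illustrative, and the LOADER line is only needed for the optional Intel SGX mode covered later in this guide):

```shell
# Host path to the downloaded data directory (adjust to your checkout):
echo "HOST_DATA_DIR=/path/to/tee-mnist/data" > .env

# Optional: run the compute package under Gramine in Intel SGX:
echo "LOADER=gramine-sgx" >> .env

cat .env
```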

sudo docker-compose up -d

This may take a few minutes. After this is done you should be able to access the reducer interface at https://localhost:8090.

Now navigate to https://localhost:8090 and upload package.tgz and seed.npz. Alternatively, you can upload the seed and package using the REST API as follows.

# Upload package
curl -k -X POST \
    -F [email protected] \
    -F helper="pytorch" \
    https://localhost:8090/context

# Upload seed
curl -k -X POST \
    -F [email protected] \
    https://localhost:8090/models

Finally, you can navigate again to https://localhost:8090 and start the experiment from the "control" tab. Alternatively, you can start the experiment using the REST API as follows.

# Start experiment
curl -k -X POST \
    -F rounds=3 \
    -F validate=True \
    https://localhost:8090/control

Clean up

To clean up, you can run: sudo docker-compose down. To exit the Docker environment, simply run exit.

Running in Trusted Execution Environment (TEE)

Compute package in Intel SGX

The compute package in this example supports running training and validation in an Intel SGX TEE via Gramine. The code was tested using Azure Confidential Computing. To enable this mode, run: echo "LOADER=gramine-sgx" >> .env and repeat all of the steps above.

Reducer and combiner in Intel SGX

To run the reducer and combiner in Intel SGX, you can use docker-compose-tee.yaml to start FEDn, as follows.

sudo docker-compose -f docker-compose-tee.yaml up -d

The next steps are the same as when running without Intel SGX, but it may take a bit longer for the clients to connect.

Running in AMD SEV

This codebase has also been tested in AMD SEV with Azure Confidential VMs. The steps to follow don't change in this case, as the whole VM memory is automatically encrypted by the Azure service via AMD SEV.
