Course materials for *Introduction to Generative AI for Software Development* from DeepLearning.AI. These materials rely primarily on locally deployed LLMs, using the following tools.
- Ollama (GitHub): Get up and running with Llama 3, Mistral, Gemma 2, and other large language models. Uses llama.cpp as the backend.
- Jupyter AI: A generative AI extension for JupyterLab.
To install Ollama, download the distribution for your OS and run the installer. The installer should automatically start the Ollama server.
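Once the server is running, you can pull and chat with a model from the command line. A minimal sketch (the model name `llama3` is just an example; any model from the Ollama library works):

```shell
# Confirm the Ollama server is up (it listens on port 11434 by default)
curl -s http://localhost:11434/

# Download a model, then send it a one-off prompt
ollama pull llama3
ollama run llama3 "Write a Python function that reverses a string."
```

Running `ollama run llama3` with no prompt instead starts an interactive chat session in the terminal.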
If you haven't already done so, install Miniforge. Miniforge provides minimal installers for Conda and Mamba specific to conda-forge, with the following features pre-configured:
- Packages in the base environment are obtained from the `conda-forge` channel.
- The `conda-forge` channel is set as the default (and only) channel.
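In other words, a Miniforge install leaves your channel configuration equivalent to a `.condarc` along these lines (an illustrative sketch, not the literal file the installer writes):

```yaml
# ~/.condarc (illustrative)
channels:
  - conda-forge
```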
Conda/Mamba will be the primary package managers used to install the required Python dependencies. For convenience, a script is included that downloads and installs Miniforge (which bundles both Conda and Mamba). You can run the script with the following command.
```shell
./bin/install-miniforge.sh
```
After adding any dependencies that should be installed via `conda` to the `environment.yml` file, and any dependencies that should be installed via `pip` to the `requirements.txt` file, create the Conda environment in the `./env` sub-directory of your project directory by running the following shell script.
```shell
./bin/create-conda-env.sh
```
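As an illustration, the two dependency files might look something like the following (the package names and versions here are placeholders, not the course's actual requirements):

```yaml
# environment.yml (illustrative)
channels:
  - conda-forge
dependencies:
  - python=3.11
  - jupyterlab
  - pip
  - pip:
    # delegate pip-installed packages to requirements.txt
    - -r requirements.txt
```

```
# requirements.txt (illustrative)
jupyter-ai
```

Keeping `pip`-only packages in `requirements.txt` and referencing it from `environment.yml` means a single `conda env create` call installs everything.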
Once the new environment has been created, you can activate it with the following command.
```shell
conda activate ./env
```
Note that the `./env` directory is not under version control, as it can always be re-created as necessary.
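To keep the environment out of version control, a `.gitignore` entry along the following lines does the job (assuming your repository does not already ignore it):

```
# .gitignore (illustrative): exclude the local Conda environment
env/
```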