This repo contains the source code for Pythonistas Meetups. Repos that predate this one are included as submodules.
Very little of this gets tested on Windows hosts. Where necessary, Windows Subsystem for Linux (WSL) is used with the default Ubuntu LTS install.
If Windows is your daily driver, be the change and raise a PR with broad instructions for getting tooling working under Windows (e.g., Docker, Poetry, Playwright).
Development environments and tooling are first-class citizens on macOS and *nix. For the Windows faithful, please set up WSL below.
WSL allows Windows users to run Linux locally at the system level. All of the standard tooling is available, and community guides can be followed without the usual Windows caveats (e.g., escaping file paths, missing GNU utilities, etc.).
- Install from the Setup section
- Enable
  - Start Menu > search for "Turn Windows features on or off" > open > toggle "Windows Subsystem for Linux"
- Restart
- M1 Macs only (Intel Macs and native Windows boxes need not apply)
  - Revert WSL 2 to WSL 1, as nested virtualization isn't available at the hardware level

    ```bash
    wsl --set-default-version 1
    ```

  - Docker won't run without paravirtualization enabled, but the rest of the development environment will work as expected
- Install Ubuntu

  ```bash
  # enable default distribution (Ubuntu)
  wsl --install ubuntu
  ```
- Start Linux and prep for environment setup

  ```bash
  # launch Ubuntu
  ubuntu

  # upgrade packages (as root: `sudo -s`)
  apt update && apt upgrade -y

  # create standard user
  adduser <username>
  visudo

  # search for 'Allow root to run any commands anywhere', then append an identical line with the new user
  root        ALL=(ALL) ALL
  <username>  ALL=(ALL) ALL

  # Allow members of group sudo to execute any command
  %sudo ALL=(ALL) NOPASSWD: ALL
  ```
- Additional configuration options
  - Configuration locations
    - WSL 1: `/etc/wsl.conf`
    - WSL 2: `~/.wslconfig`

    ```ini
    # set default user
    [user]
    default=<username>

    # mounts host drive at /mnt/c/
    [automount]
    enabled = true
    options = "uid=1000,gid=1000"

    # WSL2-specific options
    [wsl2]
    memory = 8GB    # Limits VM memory in WSL 2
    processors = 6  # Makes the WSL 2 VM use six virtual processors
    ```

  - After making changes to the configuration file, WSL needs to be shut down for 8 seconds

    ```bash
    wsl --shutdown
    ```

  - OPTIONAL: Change the home directory to the host Windows home

    ```bash
    # copy dotfiles to host home directory
    cp $HOME/.* /mnt/c/Users/<username>

    # edit /etc/passwd
    <username>:x:1000:1000:,,,:/mnt/c/Users/<username>:/bin/bash
    ```
- Install from the Setup section
- WSL/Ubuntu Linux dependencies

  ```bash
  sudo apt update && sudo apt install \
      make build-essential libssl-dev zlib1g-dev \
      libbz2-dev libreadline-dev libsqlite3-dev wget \
      curl llvm libncursesw5-dev xz-utils tk-dev \
      libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
  ```
- Fedora dependencies

  ```bash
  sudo dnf install -y bzip2-devel libsqlite3x-devel
  ```
- All operating systems

  ```bash
  # add python plugin
  asdf plugin-add python

  # install stable python
  asdf install python latest

  # uninstall version
  asdf uninstall python 3.9.6

  # refresh symlinks for installed python runtimes
  asdf reshim python

  # set stable to system python
  asdf global python latest

  # optional: local python (e.g., python 3.9.10)
  cd $work_dir
  asdf list-all python 3.9
  asdf install python 3.9.10
  asdf local python 3.9.10

  # verify python shim in use
  asdf current

  # check installed python
  asdf list python
  ```
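As a sanity check that the asdf shim (and not a system Python) is what actually runs, a small stdlib snippet can report the active interpreter. This is a minimal sketch, not part of the asdf tooling itself:

```python
# report the Python runtime the current shim/PATH resolves to
import platform
import sys

print(platform.python_version())  # version string, e.g. "3.9.10"
print(sys.executable)             # full path to the interpreter binary
```

If the printed path points at `~/.asdf/shims` (or an asdf install dir) and the version matches `asdf current`, the shim is wired up correctly.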
If a basic virtual environment (`venv`) and a `requirements.txt` are all that's needed, the built-in tools can be used.
```bash
# create a virtual environment via python
## built-in
python3 -m venv .venv
## faster
python3 -m pip install virtualenv  # _or_ pipx install virtualenv
virtualenv .venv

# activate virtual environment
source .venv/bin/activate

# install dependencies
python3 -m pip install requests inquirer

# generate requirements.txt
python3 -m pip freeze > requirements.txt

# exit virtual environment
deactivate
```
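To confirm an activated environment is actually in use, a short stdlib check works: inside a venv, `sys.prefix` points at the environment while `sys.base_prefix` points at the base installation. A minimal sketch:

```python
# detect whether the interpreter is running inside a virtual environment
import sys

# differs from base_prefix only when a venv/virtualenv is active
in_venv = sys.prefix != sys.base_prefix
print(f"virtual environment active: {in_venv}")
```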
- NOTE: it's possible to point Poetry at the built-in `.venv` virtual environment (e.g., when troubleshooting `SolverProblemError` dependency hell)

  ```bash
  poetry env use .venv/bin/python
  ```
- Install from the Setup section
- Normal usage

  ```bash
  # Install (modifies $PATH; append `--no-modify-path` if you know what you're doing)
  curl -sSL https://install.python-poetry.org | $(which python3) -

  # Change config
  poetry config virtualenvs.in-project true       # .venv in `pwd`
  poetry config experimental.new-installer false  # fixes JSONDecodeError on Python 3.10

  # Activate virtual environment (venv)
  poetry shell

  # Deactivate venv
  exit  # ctrl-d

  # Install multiple libraries
  poetry add google-auth google-api-python-client

  # Initialize existing project
  poetry init

  # Run script and exit environment
  poetry run python your_script.py

  # Install from requirements.txt
  poetry add `cat requirements.txt`

  # Update dependencies
  poetry update

  # Remove library
  poetry remove google-auth

  # Generate requirements.txt
  poetry export -f requirements.txt --output requirements.txt --without-hashes

  # Uninstall Poetry (e.g., troubleshooting)
  curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_UNINSTALL=1 $(which python3) -
  ```
- Install from the Setup section
- Usage

  ```bash
  # clean build (remove `--no-cache` for speed)
  docker-compose build --no-cache --parallel

  # start container
  docker-compose up --remove-orphans -d

  # exec into container
  docker attach hello

  # run command inside container
  python hello.py

  # stop container
  docker-compose stop

  # destroy container and network
  docker-compose down
  ```
- Install from the Setup section
- Usage

  ```bash
  # install
  pip install --upgrade pip
  pip install playwright
  playwright install

  # download new browsers (chromium, firefox, webkit)
  npx playwright install

  # generate code via macro
  playwright codegen wikipedia.org
  ```
- Follow the official Django Docker Compose article

  ```bash
  cd django
  ```

- Generate the server boilerplate code

  ```bash
  docker-compose run web django-admin startproject composeexample .
  ```
- Fix upstream import bug and whitelist all hosts/localhost

  ```bash
  $ vim composeexample/settings.py

  import os
  ...
  ALLOWED_HOSTS = ["*"]
  ```
- Profit

  ```bash
  docker-compose up
  ```
- Optional: Comment out Django exclusions for future commits
  - Assumed if extracting Django boilerplate from a template and creating a new repo

  ```bash
  # .gitignore
  # ETC
  # django/composeexample/
  # django/data/
  # django/manage.py
  ```
- The easiest way to set up a local single-node cluster is via one of the following:
- Rancher Desktop
- minikube
  - Incidentally, `minikube` ships with the Kubernetes Dashboard

    ```bash
    minikube dashboard
    ```

  - The boilerplate Terraform plan below hasn't been tested against `minikube`
- multipass with microk8s
- Add aliases to `~/.bashrc` or `~/.zshrc`

  ```bash
  # k8s
  alias k="kubectl"
  alias kc="kubectl config use-context"
  alias kns='kubectl config set-context --current --namespace'
  alias kgns="kubectl config view --minify --output 'jsonpath={..namespace}' | xargs printf '%s\n'"
  KUBECONFIG="$HOME/.kube/config:$HOME/.kube/kubeconfig:$HOME/.kube/k3s.yaml"
  ```
- CLI/TUI (terminal user interface) management of k8s
- POC

  ```bash
  git clone https://github.com/jwsy/simplest-k8s.git
  k config get-contexts  # should have `rancher-desktop` selected
  kc rancher-desktop     # switch to rancher context if not
  k apply -f simplest-k8s
  k delete -f simplest-k8s
  ```
- Navigate to https://jade-shooter.rancher.localhost/ in Chrome
- Allow self-signed cert
- Profit 💸
- NOTES:
  - This section depends on Kubernetes and a `~/.kubeconfig` from above
  - `NodePort` was used instead of `LoadBalancer` for `service.type`
    - MetalLB is a stretch goal for future deployments
- Install `terraform` via `asdf`

  ```bash
  # terraform
  asdf plugin-add terraform
  asdf install terraform latest
  ```
- Add aliases to `~/.bashrc` or `~/.zshrc`

  ```bash
  # ~/.bashrc
  alias tf='terraform'
  alias tfi='terraform init -backend-config=./state.conf'
  alias tfa='terraform apply'
  alias tfp='terraform plan'
  alias tfpn='terraform plan -refresh=false'
  ```
- Navigate to `./terraform/` and initialize the `terraform` working directory

  ```bash
  cd terraform/
  tfi
  ```
- Create an execution plan

  ```bash
  tfp
  ```

- Apply/execute the actions from the Terraform plan

  ```bash
  tfa
  ```
- Navigate to `http://localhost:<port>`
  - The port can be found via `kubectl`

    ```bash
    k get svc  # 80:31942/TCP
    ```

- Tear down the deployment

  ```bash
  tf destroy
  ```
- Add the submodule to the downstream repo

  ```bash
  git submodule add https://github.com/pythoninthegrass/automate_boring_stuff.git
  git commit -m "automate_boring_stuff submodule"
  git push
  ```
- Create a personal access token called `PRIVATE_TOKEN_GITHUB` with `repo` permissions on the downstream repo
  - `repo:status`
  - `repo_deployment`
  - `public_repo`
- Add that key to the original repo
- Settings > Security > Secrets > Actions
- New repository secret
- Setup a new Action workflow
- Actions > New Workflow
- Choose a workflow > set up a workflow yourself
```yaml
# main.yml
# SOURCE: https://stackoverflow.com/a/68213855
name: Send submodule updates to parent repo

on:
  push:
    branches:
      - main
      - master

jobs:
  update:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          repository: username/repo_name
          token: ${{ secrets.PRIVATE_TOKEN_GITHUB }}

      - name: Pull & update submodules recursively
        run: |
          git submodule update --init --recursive
          git submodule update --recursive --remote

      - name: Commit
        run: |
          git config user.email "[email protected]"
          git config user.name "GitHub Actions - update submodules"
          git add --all
          git commit -m "Update submodules" || echo "No changes to commit"
          git push
```
- `No version set for command python`
  - Make sure `python` or `python3` isn't aliased in `~/.bashrc` or `~/.zshrc`
    - bash - Is it possible to check where an alias was defined? - Unix & Linux Stack Exchange
- Watch logs in real-time:

  ```bash
  docker-compose logs -tf --tail="50" hello
  ```

- Check exit code

  ```bash
  $ docker-compose ps
      Name                    Command               State    Ports
  ------------------------------------------------------------------------------
  docker_python   python manage.py runserver ...   Exit 0
  ```
- `asdf`, `poetry`, and `python` all need to be sourced in your shell's `$PATH` in a specific order
  - `asdf` stores its Python shims in `~/.asdf/shims`
  - `poetry` lives in `~/.local/bin`

  ```bash
  export ASDF_DIR="$HOME/.asdf"
  export PATH="$ASDF_DIR/shims:$HOME/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
  ```
- Furthermore, any aliases or alias files need to be sourced as well

  ```bash
  . "$ASDF_DIR/asdf.sh"
  . "$ASDF_DIR/completions/asdf.bash"
  . /usr/local/etc/profile.d/poetry.bash-completion
  ```
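Since `$PATH` is searched left to right and the first match wins, a quick stdlib sketch can show the search order and which `python3` the shell would resolve (assumes nothing beyond the standard library):

```python
# print the $PATH search order; ~/.asdf/shims should precede system bin dirs
import os
import shutil

for i, d in enumerate(os.environ.get("PATH", "").split(os.pathsep)):
    print(i, d)

# the first `python3` found along $PATH, i.e. the one the shell runs
print(shutil.which("python3"))
```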
- Verbose logging and redirection to a file

  ```bash
  export TF_LOG="trace"                        # unset via "off"
  export TF_LOG_PATH="$HOME/Downloads/tf.log"  # `~` doesn't expand
  ```

  - Log levels
    - TRACE
    - DEBUG
    - INFO
    - WARN
    - ERROR
- `Error: cannot re-use a name that is still in use`

  > I think I resolved the issue. This is what I did: 1) mv the terraform.tfstate to another name, 2) mv the terraform.tfstate.backup to terraform.tfstate, 3) run 'terraform refresh' to confirm the state is synchronized, and 4) run 'terraform apply' to delete/create the resource. – ozmhsh, Dec 9, 2021

  - nginx - Stuck in the partial helm release on Terraform to Kubernetes - Stack Overflow
- pipx

  ```bash
  # Install
  python3 -m pip install --user pipx
  python3 -m pipx ensurepath

  # Usage
  ...
  ```
- Django
  - Merge with docker_python and put the latter on an ice floe
- Flask
  - Bonus points for Svelte front-end ❤️
- FastAPI
- k8s
  - `~/.kubeconfig`
- ansible
- wsl
- VSCode
  - Remote WSL install and usage
    - Or at least further reading nods
- Debugging
  - Dependencies
  - script itself via icecream
Basic writing and formatting syntax - GitHub Docs
venv — Creation of virtual environments — Python 3.7.2 documentation
pip freeze - pip documentation v22.0.3
Introduction | Documentation | Poetry - Python dependency management and packaging made easy
Commands | Documentation | Poetry - Python dependency management and packaging made easy
Overview of Docker Compose | Docker Documentation
Compose file version 3 reference | Docker Documentation
Speed up administration of Kubernetes clusters with k9s | Opensource.com
Getting started | Playwright Python | codegen macro
Set up a WSL development environment | Microsoft Docs
Advanced settings configuration in WSL | Microsoft Docs
Understanding The Python Path Environment Variable in Python