# Dev / Migration Container
This guide explains how to build and use the migration runtime image for in-cluster development. The runtime provides access to:
- Specify 7 source code (with ORM support).
- Internal cluster services (MariaDB, Redis).
- External Oracle databases (subject to firewall and whitelist rules).
## 1. Build the Image (Podman)
Build for linux/amd64 to match cluster compatibility requirements.
Prerequisites:

- `podman` installed.
- Submodules initialized (`git submodule update --init --recursive`).
Build Command:

```shell
podman build --platform linux/amd64 -t ghcr.io/unimus-natur/migration:latest .
```
Note: The build command above already tags the image for GHCR, so you can push it directly. If you built under a different local name, retag first:

```shell
podman tag migration:latest ghcr.io/unimus-natur/migration:latest
podman push ghcr.io/unimus-natur/migration:latest
```
## 2. Deploy to Cluster
The migration toolbox container is no longer deployed by the specify7 Helm chart by default. For a fast in-cluster loop, use the Prefect devWorker (process type), which runs from the migration image.
- **Load Image (Local Dev)**: If using `kind` or `minikube` and not pushing to a registry, load the image:

  ```shell
  # For kind
  kind load docker-image migration:latest

  # Podman users might need to save/load an archive if direct load isn't supported
  podman save migration:latest -o migration.tar
  kind load image-archive migration.tar
  ```
- **Configure Prefect dev worker image**: Ensure `prefect.devWorker.image.repository` and `prefect.devWorker.image.tag` point to your migration image.
- **Upgrade/Install**:
  ```shell
  helm upgrade --install staging ./charts/specify7 --values ./charts/specify7/values.yaml
  ```
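A values override for the dev worker image might look like the following sketch. The key paths are the ones named above; the repository and tag values are examples:

```yaml
prefect:
  devWorker:
    image:
      repository: ghcr.io/unimus-natur/migration
      tag: latest
```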
## 3. Accessing the Runtime
The old long-lived migration toolbox pod is no longer part of the default chart. Use the Prefect devWorker pod as the runtime:
```shell
export POD_NAME=$(kubectl get pods -l component=prefect-dev-worker -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it "$POD_NAME" -- bash
```
## 4. Database Proxies (Port Forwarding)
To access Oracle or cluster MariaDB from your local machine (for tools like DBeaver/DbGate), use the included helper script.
**Inside the Runtime Pod:**

Start the proxies. This binds `socat` to the pod's ports.

```shell
# Forward Oracle (Prod: 1553, Test: 1553)
./scripts/proxy_db.sh oracle

# Forward Cluster MariaDB (3306)
./scripts/proxy_db.sh mariadb
```
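The helper's behavior can be sketched as a name-to-socat mapping. This is a hypothetical illustration, not the real `scripts/proxy_db.sh`; the upstream host names (`oracle-db.internal`, `mariadb`) are assumptions, and the sketch echoes the command instead of executing it:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a proxy helper: map a service name to the
# socat invocation that would forward the pod port to the database.
# Echoes the command for illustration rather than running it.
proxy_cmd() {
  case "$1" in
    oracle)  echo "socat TCP-LISTEN:1553,fork,reuseaddr TCP:oracle-db.internal:1553" ;;
    mariadb) echo "socat TCP-LISTEN:3306,fork,reuseaddr TCP:mariadb:3306" ;;
    *)       echo "usage: proxy_db.sh {oracle|mariadb}" >&2; return 1 ;;
  esac
}

proxy_cmd mariadb
```

The real script presumably runs `socat` in the background so both proxies can listen at once.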
**From Local Machine:**

Forward the pod's ports to your localhost:

```shell
kubectl port-forward "$POD_NAME" 1553:1553 3306:3306
```
Connect:

- Oracle: `localhost:1553`
- MariaDB: `localhost:3306`
## 5. Running Migration Scripts
When using Prefect with `git_clone`, flow runs execute from a temporary cloned directory, not `/app`. For ad-hoc shell work, the image still includes the required repository tooling and dependencies.
To run scripts using the Specify ORM:

```shell
# Example
python scripts/test_setup.py
```
## 6. Remote Build (Kaniko on K8s)

You can trigger a remote Kaniko build on the cluster directly from your terminal using the helper script.
### 1. Prerequisites (One-time Setup)
**Create Secret**: The cluster needs your GitHub Container Registry credentials.

```shell
# Replace YOUR_TOKEN with a GitHub Classic PAT (read:packages, write:packages)
kubectl create secret docker-registry ghcr-secret \
  --docker-server=ghcr.io \
  --docker-username=unimus-natur \
  --docker-password=YOUR_TOKEN
```
> **Note for Organizations**:
> GitHub Container Registry always requires authenticating as a **User**.
> If you are pushing to an Organization (`ghcr.io/unimus-natur/...`), you must use a Personal Access Token (PAT) from a user account that has write access to the organization's packages.
> For shared/automated setups, it is best practice to use a **Machine User** (a dedicated bot account) added to your organization.
### 2. Usage
**Build Current Branch**: Builds the current remote state of your branch and pushes to `ghcr.io/unimus-natur/migration:latest`.

```shell
./scripts/build-k8s.sh
```

**Build Specific Branch**:

```shell
./scripts/build-k8s.sh feature/new-setup
```

**Build Custom Branch & Tag**:

```shell
./scripts/build-k8s.sh feature/new-setup ghcr.io/unimus-natur/migration:test-1
```
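Argument handling along these lines would produce the defaulting behavior described above. This is a hypothetical sketch, not the actual contents of `scripts/build-k8s.sh`:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of build-k8s.sh argument handling:
# $1 = branch (defaults to the currently checked-out branch),
# $2 = image reference (defaults to the :latest GHCR tag).
resolve_args() {
  local branch="${1:-$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo main)}"
  local image="${2:-ghcr.io/unimus-natur/migration:latest}"
  echo "$branch $image"
}

resolve_args feature/new-setup
# → feature/new-setup ghcr.io/unimus-natur/migration:latest
```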
**Note on Resources**: The build pod is configured with resource limits. The cluster enforces a policy where the limit cannot exceed 2x the request. If you adjust resources in `scripts/build-k8s.sh`, ensure you maintain this ratio (e.g., request 2Gi -> limit 4Gi).
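For reference, a resources stanza that respects the 2x policy might look like the following sketch (values illustrative, matching the example ratio above):

```yaml
resources:
  requests:
    memory: 2Gi
  limits:
    memory: 4Gi  # limit = 2x request, per cluster policy
```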