Launching an HA Admin Portal Tenant
Step-by-step guide to launch a new HA Admin Portal tenant from scratch.
Getting Started
Step 1: Complete the Setup Guide
Follow the setup guide to install the Zig toolchain, authenticate with Google Cloud, and verify group membership. If you’ve already done this, skip to Step 2.
Step 2: Build LaunchBot
```shell
./zig/zig build launchbot
```

The compiled binary will be available as `./launchbot` in the project root.
Prerequisites
Before starting a launch, confirm the following:
Convention: Run each phase for staging first, verify, then repeat for production. Never deploy staging and production simultaneously.
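The staging-first convention can be wrapped in a small helper so production is never touched until staging is verified. A minimal sketch — `run_per_env` is illustrative, not part of LaunchBot; only the `-e staging` / `-e production` flag convention comes from this guide:

```shell
# Illustrative helper (not a LaunchBot feature): run a launch command
# against staging, pause for manual verification, then repeat for
# production. "$@" is the command; -e <env> is appended per environment.
run_per_env() {
  for env in staging production; do
    "$@" -e "$env" || return 1
    if [ "$env" = "staging" ]; then
      # Operator verifies staging before the production run starts.
      read -r -p "Staging verified? Press Enter to continue... "
    fi
  done
}

# Example: run_per_env ./launchbot generate tenant -t <tenant-key>
```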
Phase 1: Generate Tenant & Build Infrastructure
All phases are sequential dependencies. Each phase depends on the output of the previous one. They must be run in order — skipping ahead will cause failures.
Step 3: Generate Tenant
Run LaunchBot to generate Terraform configuration and client records:
```shell
./launchbot generate tenant -t <tenant-key> -e staging
./launchbot generate tenant -t <tenant-key> -e production
```

Or run in interactive mode (recommended for first-time operators):

```shell
./launchbot generate tenant
```

What this creates:
- Terraform `.tfvars` app setting entries for the tenant
- `databases.tf` module blocks for Azure SQL elastic pool databases
- Client record in the `clients` table (auto-incremented client ID)
- Cloudflare tunnel configuration
- Optionally, a PR on branch `feat/tenant-<tenant>-<env>` (LaunchBot will prompt you to confirm)
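If you decline the automatic PR, the branch name can be reconstructed from the pattern above. A throwaway helper — `tenant_branch` is illustrative, not a LaunchBot command:

```shell
# Illustrative: compute the branch name LaunchBot uses for tenant
# config PRs, per the feat/tenant-<tenant>-<env> pattern in this guide.
tenant_branch() {
  echo "feat/tenant-$1-$2"
}

# Example: git push origin "HEAD:$(tenant_branch <tenant> staging)"
```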
Verify:

- If you confirmed PR creation, check the PR in GitHub. If you declined, create it manually with `gh pr create`.
- Connect via VS Code with the SQL Server (mssql) extension and run:

  ```sql
  SELECT * FROM clients ORDER BY id DESC
  ```

- Confirm the new tenant appears with the correct client ID
Step 4: Build Infrastructure
Merge the tenant configuration PR (from Step 3) into the `plan` branch, then deploy via a release.

- Merge the PR into `plan`
- Deploy a release
What gets provisioned:
- Azure SQL database in elastic pool
- Cloudflare tunnel
- App configuration entries
Verify: Open Cloud Build History and confirm the apply completed successfully for each target environment. The build logs should show the expected resources created.
Do not proceed to Phase 2 until Cloud Build has finished applying the infrastructure. Phase 2 reads Terraform output values that only exist after apply completes.
Phase 2: Generate Deployment Configuration
Depends on Phase 1. The deploy config command reads Terraform output values (URL, database name, tunnel ID, client ID) that are created when Cloud Build applies the infrastructure in Step 4. Do not run this before Phase 1 is complete.
Step 5: Generate Deploy Config
```shell
./launchbot generate deploy_config -t <tenant-key> -e staging
./launchbot generate deploy_config -t <tenant-key> -e production
```

What this creates:

- `.envs/.<tenant>-<env>/` — application environment variables
- `.caddy/.<tenant>-<env>/Caddyfile` — Caddy reverse proxy config
- `.cloudflared/.<tenant>-<env>/config.yml` — Cloudflare tunnel config
- `.envs/.identityserver-<env>/` — IdentityServer client config
- Optionally, a PR on branch `feat/deploy_config-<tenant>-<env>` targeting the release branch in healthAlignPMS (LaunchBot will prompt you to confirm)
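For orientation, a hypothetical sketch of the shape of the generated Caddy config — the hostname and upstream port here are placeholders, not the values LaunchBot actually emits:

```
# Hypothetical Caddyfile sketch; the real generated file's hostname,
# port, and directives may differ.
admin-<tenant>-staging.example.com {
    reverse_proxy localhost:5000
}
```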
Auto-extraction mode pulls the URL, database name, tunnel ID, and client ID from Terraform output, so only `--tenant` and `--environment` are needed.
Known issues:

- PR auto-creation can fail silently. Check GitHub; if no PR exists, push the branch manually and create the PR with `gh pr create --base release`.
Step 6: Merge Deploy Config PR
Review and merge the deploy config PR in healthAlignPMS before proceeding to Phase 3.
ADO pipeline generation (Phase 3) reads environment variables from the deploy config. If the PR is not merged, pipeline creation will reference missing configs.
Phase 3: Generate Azure DevOps Pipelines
Depends on Phase 2. ADO pipelines reference environment variables from the deploy config generated in Phase 2. The deploy config PR must be merged before running this phase.
Step 7: Generate Azure DevOps Release Pipelines
```shell
./launchbot generate azure_devops -t <tenant-key> -e staging
./launchbot generate azure_devops -t <tenant-key> -e production
```

What this creates:
- Database release pipeline
- Web App release pipeline
- Interface Task release pipeline
- A UID client secret added to pipeline variables (also needed in GCP Vault)
Under the hood: runs `./zig/zig build output -- ha-infra` to extract deployment values from Terraform state.
Verify: Open Azure DevOps > Releases and confirm three new release definitions exist for the tenant in each environment.
Phase 4: Database Deployment and Seeding
Depends on Phase 1 release. The ADO database pipeline deploys using a Docker image built by the release in Phase 1 (Step 4). Confirm the Docker image build has completed in ADO before creating a release. If the image is not ready, the pipeline will fail.
Step 8: Deploy Database Schema via Azure DevOps
- Go to Azure DevOps > Releases > Database pipeline for the tenant
- Create a new release using the latest build
- Deploy to staging first, then production
Database names:

- Staging: `HA_<Tenant>_UAT`
- Production: `HA_<Tenant>`
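The naming convention can be expressed as a small helper for scripting the later seed commands — illustrative only; the convention itself comes from the list above:

```shell
# Derive the tenant database name from tenant and environment,
# following the HA_<Tenant>_UAT / HA_<Tenant> convention in this guide.
db_name() {
  local tenant="$1" env="$2"
  if [ "$env" = "staging" ]; then
    echo "HA_${tenant}_UAT"
  else
    echo "HA_${tenant}"
  fi
}

# Example: ./launchbot seed databases -e staging -d "$(db_name <Tenant> staging)"
```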
Verify: Connect via VS Code with the SQL Server (mssql) extension. The database should have tables but no data yet.
Step 9: Seed Database
```shell
./launchbot seed databases -e staging -d HA_<Tenant>_UAT
./launchbot seed databases -e production -d HA_<Tenant>
```

Verify: Query the seeded database — tables should now contain baseline data.
Step 10: Seed App Records (Identity Server)
```shell
./launchbot seed app_records -t <tenant-key> -e staging
./launchbot seed app_records -t <tenant-key> -e production
```

What this does: Creates a record in the `tenants.apps` table in the core database, which Identity Server uses to recognize the tenant.
Verify:

```sql
SELECT * FROM Tenants.Apps ORDER BY id DESC
```

The new tenant should appear.
Phase 5: Application Deployment
Strategy: Deploy staging completely and verify before starting production.
Step 11: Add Client Secret to GCP Vault
The client secret generated by `generate azure_devops` (Phase 3) must be added to GCP Secret Manager for the Identity Server vault.
Run the vault migrate script to copy pipeline variables (including the client secret) into GCP Secret Manager:
```shell
# Vault migrate script — copies ADO pipeline vars to GCP secrets
./scripts/vault-migrate.sh <tenant-key> <environment>
```

Verify: Open GCP Console > Secret Manager and confirm the client secret exists for the tenant in the target environment.
Step 12: Deploy Identity Server
Deploy via the ADO Identity Server release pipeline.
- Create a new release with the latest build number
- Deploy to staging, verify, then deploy to production
Verify: After deployment, the tenant should appear in the Identity Server clients list.
Step 13: Deploy Sync Tenants
Important: Disable Sync Tenants before deploying to avoid conflicts. Disabling has a 15-minute timeout waiting for .NET processes to exit.
- Disable the Sync Tenants service on the target VM
- Deploy via the ADO Sync Tenants release pipeline
- Re-enable the service
Known UAT issue: .NET processes hang and don’t exit cleanly during the disable step. Workaround — SSH into the VM and kill them:

```powershell
Get-Process -Name dotnet | Stop-Process
```

Verify: Check the Sync Sign-in Status dashboard. The new tenant’s client ID should appear within 5 minutes (one sync cycle).
Step 14: Deploy Web App
Deploy via the ADO Web App release pipeline.
- GCP vault secrets must be in place before deploying (Step 11)
- The fetch-secrets script runs at container startup to pull secrets from GCP
- Caddy shows “bad gateway” briefly while the container starts — this is normal
- First cold launch is slow (loading database connections)
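Because Caddy briefly returns a bad gateway while the container starts, the verification step can be polled rather than refreshed by hand. A hedged sketch — the URL, retry count, and interval are placeholders:

```shell
# Poll a URL until it returns HTTP 200, riding out the brief
# "bad gateway" window and the slow cold launch. Illustrative only.
wait_for_app() {
  local url="$1" tries="${2:-30}"
  local i code
  for i in $(seq "$tries"); do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    [ "$code" = "200" ] && return 0
    sleep 10
  done
  return 1
}

# Example: wait_for_app "https://<tenant-admin-url>/" 60
```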
Verify: Navigate to the tenant’s admin URL. The login page should appear. After Sync Tenants completes a cycle, users should be visible.
Step 15: Deploy Interface Tasks
Deploy via the ADO Interface Tasks release pipeline.
Verify: Confirm the container is running on the target VM.
Phase 6: Post-Launch Verification
Post-Launch Tasks (Optional)
Logo Setup
- Upload the partner’s logo to Azure Storage (SFTP): `HealthAlign/prod/identity-server/logos/<tenant>/`
- Update the `tenants.apps` table with the logo filename
SSO Configuration
If the partner requires SSO, configure Keycloak with their client credentials. This requires coordination with the partner and cannot be fully tested without their environment.