# High-side configuration — ACA-based disconnected Pulp control plane

## 1. Overview
The high-side stack is an Azure Container Apps (ACA)-hosted Pulp 3 control plane with internal-only ingress and no upstream internet connectivity. Content enters exclusively through operator-uploaded transfer bundles that are verified on the low side before crossing the boundary. The stack deliberately has no automatic upstream sync: every package in every repository was approved and transported by a human-controlled process. The runtime image, infrastructure definitions, and bootstrap automation all live in this repository; nothing is pulled from public registries at deploy time.
Relationship to low side
The high-side ACA environment mirrors the low-side deployment shape (same Bicep modules,
same container image, same Pulp version) so that bundles produced on the low side can be
consumed without schema mismatches. Keep pulpImageTag in sync across both sides.
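Tag drift between the two sides is easy to catch in CI. A minimal sketch, assuming the standard `param pulpImageTag = '<tag>'` bicepparam syntax; the function names and file paths are illustrative:

```bash
#!/usr/bin/env bash
# Compare pulpImageTag between two bicepparam files; fail on mismatch.
extract_image_tag() {
  sed -n "s/^param pulpImageTag = '\(.*\)'$/\1/p" "$1"
}

check_tags_match() {  # usage: check_tags_match <low-side.bicepparam> <high-side.bicepparam>
  local low high
  low=$(extract_image_tag "$1")
  high=$(extract_image_tag "$2")
  if [ "$low" != "$high" ]; then
    echo "MISMATCH: low=$low high=$high" >&2
    return 1
  fi
  echo "pulpImageTag in sync: $low"
}
```

Wiring `check_tags_match` into the low-side pipeline prevents building a bundle against a tag the high side cannot run.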
2. Prerequisites¶
| Requirement | Detail |
|---|---|
| Azure subscription | Commercial (public) or Government (usgovernment) |
| Azure CLI ≥ 2.50 | az version — update with az upgrade |
| Bicep CLI | Bundled with az; verify with az bicep version |
| AAD object ID for KV access | az ad signed-in-user show --query id -o tsv |
| A signed transfer bundle | Produced by transfer_bundle.py build on a connected low-side instance |
| Approved cross-domain transfer mechanism | Diode, Data Box, or removable media — outside accelerator scope |
Air-gap constraint
ACR, Key Vault, Storage, Redis, Service Bus, and PostgreSQL are all accessed through private endpoints. Ensure the operator workstation has VPN or Bastion connectivity to the high-side VNet before deploying or operating the stack.
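Before deploying, it is worth confirming that private-endpoint names actually resolve to private addresses from the operator workstation. A rough pre-flight sketch; the `dig` lookup and the FQDN you pass it are illustrative, and `is_private_ip` covers only the RFC 1918 ranges:

```bash
# Return success when the argument is an RFC 1918 (private) IPv4 address.
is_private_ip() {
  case "$1" in
    10.*|192.168.*) return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
    *) return 1 ;;
  esac
}

# Resolve an FQDN and verify the answer is private, i.e. VPN/Bastion DNS works.
check_private_resolution() {  # usage: check_private_resolution <fqdn>
  local ip
  ip=$(dig +short "$1" | tail -n1)
  if [ -n "$ip" ] && is_private_ip "$ip"; then
    echo "OK: $1 -> $ip (private)"
  else
    echo "FAIL: $1 -> ${ip:-no answer} (check VPN and DNS forwarding)" >&2
    return 1
  fi
}
```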
## 3. Deployment

### 3.1 One-command (recommended)

```bash
./scripts/quickstart-high-side.sh \
  --cloud public \
  --resource-group rg-pulp-high-prod \
  --bicepparam infra/high-side/main.public.local.bicepparam
```
For Azure Government, swap --cloud usgovernment and use
infra/high-side/main.usgovernment.local.bicepparam. The script copies the matching
.example.bicepparam to a .local.bicepparam file if one does not already exist, then opens it
for editing before proceeding.
Optional flags:
| Flag | Purpose |
|---|---|
| `--bundle <path>` | Upload a local bundle and trigger the import job after deploy |
| `--bundle-blob <url>` | Trigger the import job with a pre-uploaded blob URL |
| `--rotate-passwords` | Force rotation of Pulp KV secrets |
| `--yes` / `-y` | Non-interactive (CI-friendly) |
| `--skip-validation` | Skip post-deploy validation |
### 3.2 Manual / advanced
Use this path when change-control processes require explicit sign-off at each phase.
**Step 1 — Resource group**

```bash
az group create \
  --name rg-pulp-high-prod \
  --location centralus \
  --tags Environment=prod ManagedBy=bicep Project=linux-update-cds Classification=CUI Compliance=IL4
```
**Step 2 — Infrastructure**

```bash
az deployment group create \
  --resource-group rg-pulp-high-prod \
  --template-file infra/high-side/main.bicep \
  --parameters infra/high-side/main.public.example.bicepparam
```
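Change-control boards usually want to see the delta before sign-off. `az deployment group what-if` previews the changes without applying them; the wrapper below is a sketch, taking the resource group and parameter file as arguments so each phase can be previewed:

```bash
# Preview what the Step 2 deployment would change, without applying it.
preview_deployment() {  # usage: preview_deployment <resource-group> <bicepparam-file>
  az deployment group what-if \
    --resource-group "$1" \
    --template-file infra/high-side/main.bicep \
    --parameters "$2"
}
```

Example: `preview_deployment rg-pulp-high-prod infra/high-side/main.public.example.bicepparam`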
Key parameters in `infra/high-side/main.bicep` (no VM parameters):

| Parameter | Default | Notes |
|---|---|---|
| `cloudEnvironment` | `public` | `usgovernment` for Azure Government |
| `namePrefix` | `pulpm2` | 3–12 char prefix for resource names |
| `environment` | `high` | Appended to resource names and tags |
| `keyVaultAccessObjectIds` | `[]` | Add your operator AAD object ID here |
| `acrSku` | `Premium` | Required for private endpoints |
| `serviceBusSku` | `Premium` | Required for private endpoints |
| `pulpImageTag` | `v0.2.0-preview` | Must match low-side build |
**Step 3 — Bootstrap secrets and prepare container apps**

```bash
python3 automation/bootstrap/prepare_high_side_container_apps.py \
  --resource-group rg-pulp-high-prod
```

Pass `--rotate-passwords` on subsequent runs to regenerate `pulp-admin-password` and `pulp-db-password` in Key Vault and update the Postgres Flex Server in one operation.
**Step 4 — Run db-init job**

```bash
INIT_JOB=$(az deployment group show \
  --resource-group rg-pulp-high-prod \
  --name main \
  --query properties.outputs.initJobName.value -o tsv)

az containerapp job start \
  --name "$INIT_JOB" \
  --resource-group rg-pulp-high-prod
```
**Step 5 — Validate**

```bash
API_APP=$(az deployment group show \
  --resource-group rg-pulp-high-prod \
  --name main \
  --query properties.outputs.apiAppName.value -o tsv)

az containerapp show \
  --name "$API_APP" \
  --resource-group rg-pulp-high-prod \
  --query "{state:properties.runningStatus,fqdn:properties.configuration.ingress.fqdn}" \
  -o json
```
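For scripted gating, the Step 5 JSON can be reduced to a pass/fail exit code. A minimal sketch that assumes the pretty-printed `{state, fqdn}` projection above; it is a targeted grep, not a general JSON parser:

```bash
# Read the Step 5 JSON on stdin; succeed only when the app reports Running.
assert_running() {
  local json state
  json=$(cat)
  state=$(printf '%s\n' "$json" | sed -n 's/.*"state": *"\([^"]*\)".*/\1/p')
  if [ "$state" = "Running" ]; then
    echo "OK: API app is Running"
  else
    echo "FAIL: state='$state'" >&2
    return 1
  fi
}
```

Pipe the `az containerapp show` output into `assert_running` and let a non-zero exit fail the pipeline.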
## 4. Bundle ingest
Content reaches the high side exclusively through operator-uploaded transfer bundles. There is no automatic blob-trigger; the import ACA job must be started explicitly after each upload.
### 4.1 Acquire and verify a bundle
Obtain the bundle from your low-side instance. The format is documented in
automation/bootstrap/transfer_bundle.py. Before transfer, verify the manifest signature and
SHA256SUMS on the low side:
```bash
python3 automation/bootstrap/transfer_bundle.py verify \
  --bundle-dir ./bundles/<snapshot-id> \
  --trusted-public-key /path/to/transfer-manifest-signing.pub
```
Preserve the receive receipt (_receipts/<snapshot-id>.receive-receipt.json) as release
evidence. Treat the manifest, checksum file, signature, and receipt as non-optional audit
artifacts.
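Independently of `transfer_bundle.py`, the checksum file can be replayed with stock coreutils as a second opinion. A sketch assuming a standard sha256sum-format `SHA256SUMS` file at the bundle root:

```bash
# Re-run every checksum listed in the bundle's SHA256SUMS file.
verify_bundle_checksums() {  # usage: verify_bundle_checksums <bundle-dir>
  if ( cd "$1" && sha256sum --check --quiet SHA256SUMS ); then
    echo "checksums OK: $1"
  else
    echo "checksum FAILURE in $1" >&2
    return 1
  fi
}
```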
Classification handling
Ensure the bundle's classification label (public, controlled, confidential, secret,
top-secret) matches the destination environment's authorization boundary before initiating
any cross-domain transfer. Do not rely on this accelerator to enforce classification
controls — that is the responsibility of your organization's cross-domain solution.
### 4.2 Transfer across the air gap
Follow your organization's approved cross-domain transfer procedure. See
docs/runbooks/transfer-media.md for the standard bundle layout and
signing-key custody guidance. This step is intentionally outside accelerator scope.
### 4.3 Upload to the bundles container

```bash
STORAGE=$(az deployment group show \
  --resource-group rg-pulp-high-prod \
  --name main \
  --query properties.outputs.storageAccountName.value -o tsv)

BUNDLE=./bundles/ubuntu-jammy-2026-04-26.tar

az storage blob upload \
  --account-name "$STORAGE" \
  --container-name bundles \
  --file "$BUNDLE" \
  --auth-mode login \
  --overwrite
```
Government cloud blob hostname
For Azure Government, blob URLs use core.usgovcloudapi.net instead of
core.windows.net. The BUNDLE_BLOB_URL you construct in the next step must use the
correct suffix for the target cloud.
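Rather than hard-coding the suffix, it can be read from the CLI's active cloud profile: `az cloud show --query suffixes.storageEndpoint -o tsv` prints `core.windows.net` on public and `core.usgovcloudapi.net` on Azure Government. A small helper sketch:

```bash
# Build a blob URL using the active cloud's storage suffix (or an explicit one).
blob_url() {  # usage: blob_url <account> <container> <blob> [suffix]
  local suffix=${4:-$(az cloud show --query suffixes.storageEndpoint -o tsv)}
  printf 'https://%s.blob.%s/%s/%s\n' "$1" "$suffix" "$2" "$3"
}
```

Example: `BUNDLE_BLOB_URL=$(blob_url "$STORAGE" bundles "$(basename "$BUNDLE")")`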
### 4.4 Trigger the import job

```bash
IMPORT_JOB=$(az deployment group show \
  --resource-group rg-pulp-high-prod \
  --name main \
  --query properties.outputs.importJobName.value -o tsv)

# Public cloud
BUNDLE_BLOB_URL="https://${STORAGE}.blob.core.windows.net/bundles/$(basename "$BUNDLE")"

# Government cloud (uncomment if applicable)
# BUNDLE_BLOB_URL="https://${STORAGE}.blob.core.usgovcloudapi.net/bundles/$(basename "$BUNDLE")"

az containerapp job start \
  --name "$IMPORT_JOB" \
  --resource-group rg-pulp-high-prod \
  --env-vars BUNDLE_BLOB_URL="$BUNDLE_BLOB_URL"
```
The import job (run-pulp-import.sh → import_bundle.py) downloads the bundle from the
private blob endpoint, unpacks it into /var/lib/pulp/imports (on the Azure Files share),
calls the Pulp import API, and exits with a non-zero status on any failure.
### 4.5 Monitor import progress

```bash
az containerapp job execution list \
  --name "$IMPORT_JOB" \
  --resource-group rg-pulp-high-prod \
  --query "[].{name:name,status:properties.status,start:properties.startTime,end:properties.endTime}" \
  -o table
```
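For unattended runs, the listing above can be polled until the newest execution reaches a terminal state. A sketch; it assumes index `[0]` is the most recent execution in the list output, which is worth verifying against your CLI version:

```bash
# True only for terminal ACA job execution statuses.
import_finished() {  # usage: import_finished <status>
  case "$1" in
    Succeeded|Failed) return 0 ;;
    *) return 1 ;;
  esac
}

# Poll the newest execution every 30s until it finishes.
watch_import() {
  local status
  while true; do
    status=$(az containerapp job execution list \
      --name "$IMPORT_JOB" --resource-group rg-pulp-high-prod \
      --query '[0].properties.status' -o tsv)
    echo "import status: $status"
    import_finished "$status" && break
    sleep 30
  done
}
```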
### 4.6 Inspect logs

```bash
EXEC_NAME=<execution-name-from-above>

az containerapp job logs show \
  --name "$IMPORT_JOB" \
  --resource-group rg-pulp-high-prod \
  --execution-name "$EXEC_NAME"
```
### 4.7 Verify content is available

```bash
API_APP=$(az deployment group show \
  --resource-group rg-pulp-high-prod \
  --name main \
  --query properties.outputs.apiAppName.value -o tsv)

# Execute inside the ACA environment — avoids needing a public endpoint
az containerapp exec \
  --name "$API_APP" \
  --resource-group rg-pulp-high-prod \
  --command "curl -s -u admin:\$PULP_ADMIN_PASSWORD \${PULP_API_BASE_URL}/pulp/api/v3/repositories/ | python3 -m json.tool"
```
## 5. Operations

### 5.1 Adding a distro
See docs/runbooks/adding-a-distro.md. On the high side, distro
registration is a no-op for sync (there is no upstream); however, the distro configuration in
config/repos/*.yaml must remain consistent with the low side. Bundles are produced from the
low-side distro list: if the high side omits a distro, its content arrives in the bundle but
has no Pulp publication pointing at it.
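A quick way to catch that drift before it produces orphaned content is to diff the distro configuration between the two checkouts. A sketch, assuming both sides' working copies are available on the same machine (the low-side bundle build host, for example):

```bash
# Fail when config/repos differs between the two working copies.
distro_configs_match() {  # usage: distro_configs_match <low-checkout> <high-checkout>
  if diff -r "$1/config/repos" "$2/config/repos" >/dev/null; then
    echo "distro configs in sync"
  else
    echo "distro configs differ: reconcile before building bundles" >&2
    return 1
  fi
}
```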
### 5.2 Rotating secrets
See docs/runbooks/secrets-rotation.md. On the high side, pay
particular attention to the coupled rotation of pulp-db-password: the Key Vault secret
and the PostgreSQL Flexible Server credential must be updated atomically. The prepare script
handles this with --rotate-passwords:
```bash
python3 automation/bootstrap/prepare_high_side_container_apps.py \
  --resource-group rg-pulp-high-prod \
  --rotate-passwords
```
Do not manually update only one side of the pair — Pulp workers will fail to connect until both are in sync.
Low-side-only secrets
Do not copy Red Hat entitlement secrets (rhsm-username, rhsm-password) to the high
side. They are not needed and should not cross the boundary.
### 5.3 Scaling
ACA apps autoscale on HTTP queue depth and concurrency. The defaults in containerapps.bicep
are suitable for small-to-medium deployments. To override:
```bash
az containerapp update \
  --name <api-app-name> \
  --resource-group rg-pulp-high-prod \
  --min-replicas 1 \
  --max-replicas 5
```
Worker autoscale is driven by Service Bus queue depth (import-jobs, publish-jobs).
### 5.4 Backups

- **PostgreSQL Flexible Server:** Automated backups with point-in-time restore (PITR). Retention is set in `infra/_shared/database.bicep`; check the value against your RTO/RPO requirements.
- **Azure Files share (`/var/lib/pulp`):** Soft-delete and snapshot policy applied in `infra/_shared/storage.bicep`. Snapshots are the primary recovery mechanism for the Pulp media root.
Re-import as recovery
Pulp publications are immutable. If the database is lost but the Azure Files share and bundle storage are intact, re-running the import jobs rebuilds the Pulp state. Keep at least one baseline bundle plus the latest approved delta in the bundles container.
### 5.5 Monitoring
Log Analytics workspace and Application Insights (if enabled) are configured in the
_shared/monitoring.bicep module and wired to all ACA apps and jobs at deploy time. See
docs/telemetry.md for dashboard and alert setup.
## 6. Network architecture
All ACA ingress is internal load balancer only. The apiFqdn and contentFqdn deployment
outputs resolve only within the high-side VNet. There is no public DNS record for any high-side
endpoint.
Operator access patterns:
- VPN to the VNet (preferred for day-to-day operations)
- Azure Bastion (browser-based SSH to a jump host inside the VNet)
- Jump host VM inside the VNet (see your organization's PAW policy)
Apt client configuration — paste the following on managed nodes inside the VNet, replacing
<content-fqdn> with the contentFqdn deployment output:
```
# /etc/apt/sources.list.d/pulp-high-side.list
deb [trusted=yes] https://<content-fqdn>/pulp/content/<distro-base-path>/ jammy main
```
Internal DNS only
The <content-fqdn> resolves only via the private DNS zone linked to the high-side VNet.
Managed nodes must use an internal DNS resolver that forwards to Azure DNS
(168.63.129.16) or to an internal forwarder that does so.
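Generating the sources line from deployment outputs avoids hand-editing mistakes. A sketch; `[trusted=yes]` mirrors the template above and assumes the repo metadata is not apt-signed (use a `signed-by=` option instead if you distribute a signing key):

```bash
# Emit one sources.list line for a Pulp distribution base path.
pulp_apt_line() {  # usage: pulp_apt_line <content-fqdn> <base-path> <suite> <components>
  printf 'deb [trusted=yes] https://%s/pulp/content/%s/ %s %s\n' "$1" "$2" "$3" "$4"
}
```

Example (base path illustrative): `pulp_apt_line "$CONTENT_FQDN" ubuntu/jammy jammy main > /etc/apt/sources.list.d/pulp-high-side.list`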
Private endpoints — all PaaS services are accessed exclusively via private endpoints backed
by private DNS zones. The zones and their names for each target cloud are derived automatically
from cloudEnvironment in infra/high-side/main.bicep:
| Service | Commercial DNS zone | Government DNS zone |
|---|---|---|
| PostgreSQL | `privatelink.postgres.database.azure.com` | `privatelink.postgres.database.usgovcloudapi.net` |
| Blob Storage | `privatelink.blob.core.windows.net` | `privatelink.blob.core.usgovcloudapi.net` |
| Key Vault | `privatelink.vaultcore.azure.net` | `privatelink.vaultcore.usgovcloudapi.net` |
| Redis | `privatelink.redis.cache.windows.net` | `privatelink.redis.cache.usgovcloudapi.net` |
| Service Bus | `privatelink.servicebus.windows.net` | `privatelink.servicebus.usgovcloudapi.net` |
| ACR | `privatelink.azurecr.io` | `privatelink.azurecr.us` |
## 7. Disaster recovery
| Component | Recovery mechanism |
|---|---|
| PostgreSQL | Point-in-time restore via PG Flex automated backups |
| Pulp media root | Azure Files share snapshot restore |
| Pulp publications | Re-run import jobs from bundles in the bundles container |
| ACR images | Re-transfer runtime image from low side if geo-replication is disabled |
ACR geo-replication
Geo-replication is disabled by default. In most air-gapped high-side deployments, cross-region replication is policy-disallowed. Validate with your security team before enabling.
Keep the high-side repo endpoints stable. Do not force consumers to chase versioned content URLs. Prefer re-running automation over one-off manual fixes.
## 8. Troubleshooting
See docs/runbooks/troubleshooting.md for the full runbook. High-side
specific symptoms:
| Symptom | Likely cause | Resolution |
|---|---|---|
| Import job stuck / exits 1: "bundle blob not found" | Wrong blob URL or wrong storage account | Check `BUNDLE_BLOB_URL` syntax; confirm the blob exists with `az storage blob exists` |
| Import job exits 1: "verification failed" | Bundle tampered or signing key mismatch | Re-verify bundle on low side; confirm trusted public key matches the signing key used at build time |
| ACA app stuck in `Activating` | ACR pull failure, KV access policy missing, or infra subnet missing `Microsoft.App/environments` delegation | Check managed identity role on ACR; verify KV access; confirm subnet delegation |
| Private endpoint DNS resolution failures | Private DNS zones not linked to VNet | Verify all six private DNS zones are linked to the high-side VNet in the Azure Portal or via `az network private-dns link vnet list` |
| `pulp-db-password` mismatch after rotation | Only one side of the coupled secret was rotated | Re-run `prepare_high_side_container_apps.py --rotate-passwords` to sync both |
## 9. Key Vault secret inventory
The following secrets are created by prepare_high_side_container_apps.py and consumed at
runtime by the ACA workloads. Operator read access requires the Key Vault Secrets User RBAC
role, which is granted to keyVaultAccessObjectIds at deploy time.
| Secret | Consumer | Notes |
|---|---|---|
| `pulp-admin-password` | Pulp API admin auth | Set during bootstrap; used by `import_bundle.py` |
| `pulp-db-password` | Pulp → PostgreSQL | Coupled to PG Flex server password |
| `pulp-secret-key` | Django application | Fernet-safe random string |
| `pulp-db-symmetric-key` | Pulp field encryption | Rotate with caution — re-encryption required |
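With the Secrets User role in place, individual values can be pulled for ad-hoc API calls. A sketch; the vault name is an assumption, so read the real name from the deployment outputs in practice:

```bash
# Fetch one secret value from Key Vault (requires Key Vault Secrets User).
get_pulp_secret() {  # usage: get_pulp_secret <vault-name> <secret-name>
  az keyvault secret show --vault-name "$1" --name "$2" --query value -o tsv
}
```

Example: `PULP_ADMIN_PASSWORD=$(get_pulp_secret <vault-name> pulp-admin-password)`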
## 10. Source-of-truth files
| File | What it controls |
|---|---|
| `infra/high-side/main.bicep` | All high-side Azure infrastructure |
| `infra/high-side/containerapps.bicep` | ACA app, worker, and job definitions |
| `scripts/quickstart-high-side.sh` | One-command deploy entrypoint |
| `automation/bootstrap/prepare_high_side_container_apps.py` | Secret bootstrap and coupled rotation |
| `automation/bootstrap/import_bundle.py` | In-container import orchestration |
| `runtime/container-apps/entrypoints/run-pulp-import.sh` | ACA import job entrypoint |
| `config/environments/high-side.yaml` | Environment identity and validation contract |
| `docs/runbooks/transfer-media.md` | Cross-boundary bundle transfer procedure |
## 11. References

- `ROADMAP.md` — icebox items including auto-trigger on blob upload
- `docs/architecture/overview.md` — full system architecture
- `docs/telemetry.md` — monitoring and alerting setup
- `docs/runbooks/secrets-rotation.md` — full secret rotation procedures
- `docs/runbooks/transfer-media.md` — cross-boundary media transfer
- `docs/runbooks/adding-a-distro.md` — distro configuration