Middleware layer for deployment DAGs and scripts in the qubinode ecosystem
qubinode-pipelines is the Tier 2 middleware layer in the three-tier qubinode architecture. It serves as the source of truth for deployment DAGs (Directed Acyclic Graphs) and deployment scripts that integrate with qubinode_navigator.
This repository clarifies ownership and integration patterns for external projects contributing automation to the qubinode ecosystem.
```
┌────────────────────────────────────────────────────────────────┐
│ TIER 1: DOMAIN PROJECTS                                        │
│ (ocp4-disconnected-helper, freeipa-workshop-deployer)          │
│                                                                │
│ Own: Domain-specific playbooks, automation logic               │
│ Contribute: DAGs and scripts to qubinode-pipelines via PR      │
└────────────────────────────────────────────────────────────────┘
                                 │
                                 │ PR-based contribution
                                 ▼
┌────────────────────────────────────────────────────────────────┐
│ TIER 2: QUBINODE-PIPELINES                                     │
│ (this repo - middleware layer)                                 │
│                                                                │
│ Own:                                                           │
│   - Deployment scripts (scripts/*/deploy.sh)                   │
│   - Deployment DAGs (dags/ocp/*.py, dags/infrastructure/*.py)  │
│   - DAG registry (dags/registry.yaml)                          │
│                                                                │
│ Mounted at: /opt/qubinode-pipelines                            │
└────────────────────────────────────────────────────────────────┘
                                 │
                                 │ Volume mount
                                 ▼
┌────────────────────────────────────────────────────────────────┐
│ TIER 3: QUBINODE_NAVIGATOR                                     │
│ (platform / runtime)                                           │
│                                                                │
│ Own:                                                           │
│   - Airflow infrastructure (docker-compose, containers)        │
│   - Platform DAGs (rag_*.py, dag_factory.py, dag_loader.py)    │
│   - ADRs, standards, validation tools                          │
│   - AI Assistant, MCP server                                   │
└────────────────────────────────────────────────────────────────┘
```
- Tier 1 (Domain Projects): Focus on domain-specific automation (playbooks, configs)
- Tier 2 (qubinode-pipelines): Source of truth for deployment DAGs and scripts
- Tier 3 (qubinode_navigator): Airflow runtime, platform services, standards
```
qubinode-pipelines/
├── dags/                          # Deployment DAGs organized by category
│   ├── registry.yaml              # DAG registry and metadata
│   ├── TEMPLATE.py                # Template for new DAGs
│   ├── ocp/                       # OpenShift deployment DAGs
│   │   └── README.md
│   ├── infrastructure/            # Core infrastructure DAGs
│   │   ├── README.md
│   │   ├── freeipa_deployment.py
│   │   ├── vyos_router_deployment.py
│   │   ├── step_ca_deployment.py
│   │   ├── mirror_registry_deployment.py
│   │   └── ...
│   ├── networking/                # Network configuration DAGs
│   │   └── README.md
│   ├── storage/                   # Storage cluster DAGs
│   │   └── README.md
│   └── security/                  # Security and compliance DAGs
│       └── README.md
├── scripts/                       # Deployment scripts called by DAGs
│   ├── vyos-router/
│   │   └── deploy.sh
│   ├── freeipa/
│   │   └── deploy-freeipa.sh
│   ├── step-ca-server/
│   │   └── deploy.sh
│   └── helper_scripts/
│       ├── default.env            # Common environment variables
│       └── helper_functions.sh
├── CONTRIBUTING.md                # Contribution guidelines
└── README.md                      # This file
```
1. Set up qubinode_navigator:

   ```bash
   git clone https://github.com/Qubinode/qubinode_navigator.git
   cd qubinode_navigator
   ```

2. Mount qubinode-pipelines by adding a volume mount to `docker-compose.yml`:

   ```yaml
   volumes:
     - /path/to/qubinode-pipelines:/opt/qubinode-pipelines:ro
   ```

3. Start Airflow:

   ```bash
   docker compose up -d
   ```

4. Access the Airflow UI at http://localhost:8080
   - Username: `admin`
   - Password: (from qubinode_navigator setup)

5. Trigger a DAG:
   - Navigate to the DAG you want to run
   - Click "Trigger DAG w/ config"
   - Set parameters as needed
   - Click "Trigger"
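DAG runs can also be triggered without the UI through Airflow's stable REST API (`/api/v1`). A minimal sketch using only the Python standard library; the DAG id, credentials, and conf keys below are placeholders, not values shipped with this repo:

```python
import base64
import json
import urllib.request

def trigger_dag(dag_id: str, conf: dict, base_url: str = "http://localhost:8080",
                user: str = "admin", password: str = "changeme") -> urllib.request.Request:
    """Build a POST to Airflow's stable REST API: /api/v1/dags/{dag_id}/dagRuns.

    Credentials are illustrative; use the ones from your qubinode_navigator setup.
    """
    url = f"{base_url}/api/v1/dags/{dag_id}/dagRuns"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps({"conf": conf}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

req = trigger_dag("freeipa_deployment", {"VM_NAME": "ipa-01"})
# urllib.request.urlopen(req) would submit the run against a live Airflow instance
```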
See CONTRIBUTING.md for detailed guidelines on:
- Developing new DAGs
- Validating your contributions
- Submitting pull requests
- DAG and script standards
| DAG | Description | Status |
|---|---|---|
| `freeipa_deployment` | FreeIPA DNS and identity management | ✅ Tested |
| `freeipa_dns_management` | Manage FreeIPA DNS records | ✅ Tested |
| `vyos_router_deployment` | VyOS router for network segmentation | ✅ Tested |
| `generic_vm_deployment` | Deploy RHEL, Fedora, Ubuntu, CentOS VMs | ✅ Tested |
| `step_ca_deployment` | Step-CA certificate authority | ✅ Tested |
| `step_ca_operations` | Certificate operations (request, renew, revoke) | ✅ Tested |
| `mirror_registry_deployment` | Quay mirror registry for disconnected OCP | ✅ Tested |
| `harbor_deployment` | Harbor enterprise container registry | ✅ Tested |
| `jfrog_deployment` | JFrog Artifactory | ✅ Tested |
| `jumpserver_deployment` | Apache Guacamole jumpserver | 🚧 Planned |
OpenShift deployment DAGs will be contributed by external projects like ocp4-disconnected-helper.
Expected DAGs:
- `ocp_initial_deployment` - Initial cluster deployment
- `ocp_agent_deployment` - Agent-based installer workflow
- `ocp_disconnected_workflow` - Disconnected install workflow
- `ocp_incremental_update` - Cluster updates and upgrades
- `ocp_pre_deployment_validation` - Pre-flight checks
- `ocp_registry_sync` - Mirror registry synchronization
DAGs are organized into categories based on their purpose:
- ocp: OpenShift cluster deployment and management
- infrastructure: Core services (DNS, VMs, certificates, registries)
- networking: Network configuration and management
- storage: Storage clusters (Ceph, NFS, etc.)
- security: Security scanning, compliance, hardening
All DAGs are documented in dags/registry.yaml, which tracks:
- DAG name and location
- Description and purpose
- Contributing project
- Status (tested, planned, deprecated)
- Prerequisites
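The registry itself is YAML, but the shape of an entry can be sketched in code. The following is a hypothetical in-memory model of the fields listed above; the authoritative schema is whatever `dags/registry.yaml` actually defines:

```python
from dataclasses import dataclass, field

@dataclass
class DagEntry:
    """Illustrative model of one registry entry (field names assumed)."""
    name: str
    location: str
    description: str
    contributed_by: str
    status: str                           # "tested", "planned", or "deprecated"
    prerequisites: list = field(default_factory=list)

# Two sample entries mirroring the table in this README.
registry = [
    DagEntry("freeipa_deployment", "dags/infrastructure/freeipa_deployment.py",
             "FreeIPA DNS and identity management", "qubinode-pipelines", "tested"),
    DagEntry("jumpserver_deployment", "dags/infrastructure/jumpserver_deployment.py",
             "Apache Guacamole jumpserver", "qubinode-pipelines", "planned"),
]

# Tooling can filter on status, e.g. to list only tested DAGs.
tested = [d.name for d in registry if d.status == "tested"]
```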
Each component has a deployment script in `scripts/*/deploy.sh` that:

- Supports the `ACTION` variable: `create`, `delete`, `status`
- Uses standard exit codes (0 = success)
- Outputs ASCII markers: `[OK]`, `[ERROR]`, `[WARN]`, `[INFO]`
- Sources the common environment: `scripts/helper_scripts/default.env`
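A caller such as a DAG task can program against this contract. Below is a sketch with a hypothetical runner that sets `ACTION`, checks the exit code, and collects marker lines; the helper name and invocation style are illustrative, not part of the repo:

```python
import re
import subprocess

# Matches the ASCII markers the script contract prescribes.
MARKER = re.compile(r"^\[(OK|ERROR|WARN|INFO)\]\s*(.*)$")

def run_deploy(script: str, action: str = "create", env=None):
    """Run a deploy.sh under the qubinode script contract.

    Returns (success, markers) where success follows the exit-code
    convention (0 = success) and markers is a list of (level, message).
    """
    proc = subprocess.run(
        [script],
        capture_output=True,
        text=True,
        env={"ACTION": action, **(env or {})},
    )
    markers = [
        (m.group(1), m.group(2))
        for line in proc.stdout.splitlines()
        if (m := MARKER.match(line))
    ]
    return proc.returncode == 0, markers
```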
External projects develop domain-specific automation and contribute DAGs:
```
┌──────────────────────────────┐
│ ocp4-disconnected-helper     │
│ - Develops playbooks         │
│ - Tests locally              │
│ - Creates DAG                │
│ - Validates with tools       │
└──────────────┬───────────────┘
               │
               │ PR
               ▼
┌──────────────────────────────┐
│ qubinode-pipelines           │
│ - Reviews PR                 │
│ - Merges DAG                 │
│ - Updates registry           │
└──────────────┬───────────────┘
               │
               │ Volume mount
               ▼
┌──────────────────────────────┐
│ qubinode_navigator           │
│ - Loads DAGs                 │
│ - Executes workflows         │
│ - Provides UI                │
└──────────────────────────────┘
```
DAGs call deployment scripts via SSH to the host:

```python
deploy_component = BashOperator(
    task_id='deploy_component',
    bash_command="""
    ssh -o StrictHostKeyChecking=no -o LogLevel=ERROR root@localhost \
        "export ACTION=create && \
         export VM_NAME=my-vm && \
         cd /opt/qubinode-pipelines/scripts/my-component && \
         ./deploy.sh"
    """,
    dag=dag,
)
```

This pattern (ADR-0046, ADR-0047):

- Avoids container limitations
- Uses the host's tools (kcli, virsh, ansible)
- Ensures proper permissions
- Simplifies maintenance
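The pattern generalizes to any component directory. As a sketch, a hypothetical helper (not part of this repo) could compose the same ssh command for arbitrary components and environment variables:

```python
def host_deploy_command(component: str, action: str = "create", **env: str) -> str:
    """Compose the ssh-to-host command used in deployment DAGs.

    Illustrative only: component names and env vars are caller-supplied,
    and the host/user follow the root@localhost convention shown above.
    """
    exports = " && ".join(
        [f"export ACTION={action}"] + [f"export {k}={v}" for k, v in env.items()]
    )
    remote = (
        f"{exports} && "
        f"cd /opt/qubinode-pipelines/scripts/{component} && ./deploy.sh"
    )
    return (
        "ssh -o StrictHostKeyChecking=no -o LogLevel=ERROR root@localhost "
        f'"{remote}"'
    )

# The composed string can be passed as a BashOperator's bash_command.
cmd = host_deploy_command("my-component", VM_NAME="my-vm")
```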
For systems currently using /opt/kcli-pipelines, create a symlink:

```bash
# On the host
ln -s /opt/qubinode-pipelines /opt/kcli-pipelines
```

This ensures existing DAGs and scripts continue to work during migration.
- ADRs in qubinode_navigator
- External Projects
- Platform
Legacy documentation for individual VM deployments:
- Create KCLI profiles for multiple environments
- Deploy VM Workflow
- Deploy the freeipa-server-container on a VM
- Deploy the mirror-registry on a VM
- Deploy the microshift-demos on a VM
- Deploy the device-edge-workshops on a VM
- Deploy the openshift-jumpbox on a VM
- Deploy the Red Hat Ansible Automation Platform on a VM
- Deploy Ubuntu on a VM
- Deploy Fedora on a VM
- Deploy RHEL 9 on a VM
- Deploy the OpenShift 4 Disconnected Helper
- Issues: Report bugs or request features via GitHub Issues
- Discussions: Ask questions in GitHub Discussions
- Contributing: See CONTRIBUTING.md for contribution guidelines
- AI Agents: See AGENTS.md for AI coding agent instructions (Claude, Cursor, Copilot, etc.)
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.