Reference documentation for AI code assistants (Claude, Codex, Copilot, etc.) working with the Parallel Works ACTIVATE platform.
Point your AI assistant to this repository for context when developing ACTIVATE workflows, using the SDK, or working with HPC schedulers.
ACTIVATE is Parallel Works' HPC (High-Performance Computing) and cloud computing platform. It enables users to:
- Run computational workflows on remote clusters (SLURM, PBS, Kubernetes)
- Deploy interactive sessions (Jupyter, VS Code, desktops, custom services)
- Manage cloud and on-prem compute resources
- Transfer and process data across storage systems
| Guide | Description |
|---|---|
| WORKFLOW_BUILDER.md | Complete workflow YAML reference, input types, actions, expressions, SDK, CLI |
| SCHEDULER_REFERENCE.md | SLURM and PBS flags, resource requests, GPU configuration |
| ENVIRONMENT_VARIABLES.md | Platform variables, accessing inputs, writing outputs |
Read this first to avoid common errors that produce unhelpful "Unknown Error" messages.
```yaml
# Marketplace workflows - use the marketplace/ prefix
- uses: marketplace/script_submitter   # CORRECT
- uses: workflow/script_submitter      # WRONG - causes "Unknown Error"

# Your own workflows - use the workflow/ prefix
- uses: workflow/my-custom-workflow    # CORRECT

# Built-in actions - no prefix
- uses: checkout                       # CORRECT
- uses: update-session                 # CORRECT
- uses: scheduler-agent                # CORRECT
- uses: wait-for-agent                 # CORRECT
```

Expressions require spaces around operators:

```yaml
hidden: ${{ inputs.mode != 'advanced' }}   # CORRECT
hidden: ${{ inputs.mode!='advanced' }}     # WRONG
```

```yaml
# If an input is in a group, include the group name
${{ inputs.scheduler_config.partition }}   # CORRECT
${{ inputs.partition }}                    # WRONG (if partition is in a group)
```

Always clarify these before writing a workflow:
- Target infrastructure: What cluster/resource will this run on?
- Scheduler type: SLURM, PBS, or direct SSH?
- Container requirements: Does this need Singularity/Apptainer?
- Interactive or batch?: Does the user need a session (Jupyter, desktop) or a batch job?
- Existing patterns: Is there a similar workflow in the reference repos to build from?
A minimal workflow that runs a script on a selected cluster:

```yaml
permissions: ["*"]

on:
  execute:
    inputs:
      cluster:
        type: compute-clusters
        label: "Cluster"

jobs:
  run:
    ssh:
      remoteHost: "${{ inputs.cluster }}"
    steps:
      - name: Run Script
        run: ./my_script.sh
```

Provisioning a compute node with scheduler-agent, then running on it over SSH:

```yaml
jobs:
  provision:
    steps:
      - name: Get Compute Node
        id: agent
        uses: scheduler-agent
        with:
          scheduler-type: slurm
          scheduler-flags: |
            --partition=gpu
            --gres=gpu:2
            --time=04:00:00
  run:
    needs: [provision]
    ssh:
      remoteHost: "${{ needs.provision.steps.agent.outputs.ip }}"
    steps:
      - name: Train Model
        run: python train.py
```

Starting a service and exposing it as a session with update-session:

```yaml
sessions:
  my-service:
    type: tunnel
    redirect: true

jobs:
  start:
    steps:
      - name: Start Service
        run: ./start_service.sh
  update:
    needs: [start]
    steps:
      - uses: update-session
        with:
          name: my-service
          remotePort: 8080
```

| Variable | Description |
|---|---|
| PW_USER | Current username |
| PW_JOB_ID | Workflow run ID |
| PW_WORKFLOW_NAME | Name of the workflow |
| PW_PLATFORM_HOST | Platform URL |
| JOB_DIR | Working directory for the job |
| OUTPUTS | File to append outputs to (`echo "KEY=value" >> $OUTPUTS`) |
See ENVIRONMENT_VARIABLES.md for the complete reference.
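A minimal sketch of writing and then reading a step output, assuming user-defined outputs follow the same `needs.<job>.steps.<id>.outputs.<key>` path shown in the scheduler-agent example above (the `export` step id and `result` key are illustrative; confirm the exact syntax in ENVIRONMENT_VARIABLES.md and WORKFLOW_BUILDER.md):

```yaml
jobs:
  compute:
    steps:
      - name: Export Result
        id: export
        # Publish a step output by appending KEY=value pairs to the file named by $OUTPUTS
        run: |
          RESULT=$(date +%s)
          echo "result=${RESULT}" >> $OUTPUTS
  report:
    needs: [compute]
    steps:
      - name: Print Result
        # Assumed: user-defined outputs are read with the same path pattern as action outputs
        run: echo "Computed ${{ needs.compute.steps.export.outputs.result }}"
```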
When generating workflows, reference these proven patterns:
| Repository | Use Case |
|---|---|
| activate-medical-finetuning | GPU training, container management, model handling |
| activate-sessions | Interactive sessions, two-script architecture |
| activate-rag-vllm | LLM deployment, vLLM, RAG |
| interactive_session | VNC, desktop environments |
Suggest that users test workflows incrementally:
- Start with a minimal workflow that just echoes inputs (see the sketch after this list)
- Add one job/feature at a time
- Use the Build tab's Ctrl+Space for field suggestions
- Check the Runs tab for detailed logs when debugging
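A minimal sketch of such a starting workflow, assembled from the basic pattern and environment variables above (the `echo` job name is illustrative):

```yaml
permissions: ["*"]

on:
  execute:
    inputs:
      cluster:
        type: compute-clusters
        label: "Cluster"

jobs:
  echo:
    ssh:
      remoteHost: "${{ inputs.cluster }}"
    steps:
      - name: Echo Inputs
        # Print the resolved input and a few platform variables before adding real work
        run: |
          echo "cluster input: ${{ inputs.cluster }}"
          echo "user: $PW_USER, job id: $PW_JOB_ID, job dir: $JOB_DIR"
```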
- Official Documentation
- API Reference
- CLI Documentation
- SDK Repository - Python, TypeScript, Go