A CLI tool to search across multiple Concourse build logs efficiently.
- Terminal-responsive tables - Beautiful, adaptive table output that adjusts to your terminal width
  - Dynamic column sizing (40-120 characters for content)
  - Intelligent line wrapping instead of truncation
  - Rounded Unicode borders for modern terminals
  - Perfect ANSI color support without breaking alignment
- Search ALL pipelines - Use the `-all-pipelines` flag to search across every pipeline in your target (requires explicit opt-in)
- Intelligent caching - Caches both build logs AND pipeline/job/build metadata for blazing fast repeated searches
  - Build logs cached for 24 hours (configurable)
  - Metadata cached for 5 minutes (configurable) - makes `-all-pipelines` extremely fast on subsequent runs
- Parallel log fetching - Fast concurrent search across multiple builds (configurable parallelism)
- Search all jobs - Search across all jobs in a pipeline or specific jobs
- Multiple pipelines - Search across multiple pipelines simultaneously (comma-separated)
- Context lines - Show N lines before/after matches for better context
- Colorized output - Highlighted matches and colorful formatting (can be disabled)
- Build status filter - Filter builds by status (succeeded, failed, errored, etc.)
- Regex patterns - Full regex support for complex search patterns
- JSON output - Structured JSON format for programmatic processing
- Auto-detection - Automatic Concourse URL detection from fly targets
- Task tracking - Shows which Concourse task generated each log line
To build and use fly-search you need:
- Go 1.16+ (for building)
- The `fly` CLI installed and authenticated with your Concourse instance
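You can confirm a target is authenticated with the stock fly CLI before searching (the target name below is just an example):

```bash
# Check that the fly target exists and its token is still valid
fly -t my-target status
```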
Download the latest release for your platform from GitHub Releases:
```bash
# Linux (amd64)
wget https://github.com/ramonskie/fly-search/releases/latest/download/fly-search-linux-amd64
chmod +x fly-search-linux-amd64
sudo mv fly-search-linux-amd64 /usr/local/bin/fly-search

# macOS (Intel)
wget https://github.com/ramonskie/fly-search/releases/latest/download/fly-search-darwin-amd64
chmod +x fly-search-darwin-amd64
sudo mv fly-search-darwin-amd64 /usr/local/bin/fly-search

# macOS (Apple Silicon)
wget https://github.com/ramonskie/fly-search/releases/latest/download/fly-search-darwin-arm64
chmod +x fly-search-darwin-arm64
sudo mv fly-search-darwin-arm64 /usr/local/bin/fly-search
```

Building from source requires Go 1.16+:
```bash
go build -o fly-search
```

Or install directly:
```bash
go install github.com/ramonskie/fly-search@latest
```

See which Concourse targets you have configured:
```bash
./fly-search -list-targets
```

Search a specific pipeline:
```bash
# Long form
./fly-search --target my-target --pipeline my-pipeline --search "test-env"

# Short form
./fly-search -t my-target -p my-pipeline -s "test-env"
```

Search ALL pipelines in a target (requires explicit flag):
```bash
# Long form
./fly-search --target my-target --all-pipelines --search "error"

# Short form (recommended for quick searches)
./fly-search -t my-target -a -s "error"
```

Note: The Concourse URL is automatically detected from your fly target, so build links are generated automatically!
Search within a specific job:
```bash
# Long form
./fly-search --target my-target --pipeline my-pipeline --job my-job --search "test-env"

# Short form
./fly-search -t my-target -p my-pipeline -j my-job -s "test-env"
```

By default, only the most recent build is searched. Search through the last 20 builds:
```bash
# Long form
./fly-search --target my-target --pipeline my-pipeline --search "test-env" --count 20

# Short form
./fly-search -t my-target -p my-pipeline -s "test-env" -c 20
```

Get results in JSON format with grouping:
```bash
# Long form
./fly-search --target my-target --pipeline my-pipeline --search "test-env" --output json

# Short form
./fly-search -t my-target -p my-pipeline -s "test-env" -o json
```

By default, the Concourse URL is auto-detected from your fly target. You can override it:
```bash
./fly-search -target my-target -pipeline my-pipeline -search "test-env" \
  -url "https://concourse.example.com"
```

Search using regex patterns:
```bash
./fly-search -target my-target -pipeline my-pipeline -search "error|failed|timeout"
```

Show 3 lines before and after each match for better context:
```bash
./fly-search -target my-target -pipeline my-pipeline -search "ERROR" -context 3
```

Search only failed builds:
```bash
./fly-search -target my-target -pipeline my-pipeline -search "panic" -status failed
```

Available statuses: `succeeded`, `failed`, `errored`, `aborted`, `pending`, `started`
Search specific pipelines (comma-separated):
```bash
./fly-search -target my-target -pipeline "pipeline1,pipeline2,pipeline3" -search "error"
```

Or use `-all-pipelines` to search ALL pipelines:
```bash
./fly-search -target my-target -all-pipelines -search "error" -count 5
```

Note: Use `-pipeline` OR `-all-pipelines` (not both). The `-all-pipelines` flag requires explicit opt-in to prevent accidental heavy operations.
For piping to files or when colors aren't desired:
```bash
./fly-search -target my-target -pipeline my-pipeline -search "ERROR" -no-color
```

Control how many builds are fetched in parallel (default: 5):
```bash
./fly-search -target my-target -pipeline my-pipeline -search "error" -parallel 10
```

By default, build logs are cached in `~/.fly-search/cache/` for 24 hours to speed up repeated searches.
Clear all cached logs:
```bash
./fly-search -clear-cache
```

Disable caching (always fetch fresh logs):
```bash
./fly-search -target my-target -pipeline my-pipeline -search "error" -no-cache
```

Set custom cache expiration (examples: `1h`, `30m`, `2h30m`):
```bash
./fly-search -target my-target -pipeline my-pipeline -search "error" -cache-max-age 2h
```

| Long Flag | Short | Description | Required | Default |
|---|---|---|---|---|
| `--target` | `-t` | Concourse target name | Yes | - |
| `--search` | `-s` | Search term or regex pattern | Yes | - |
| `--pipeline` | `-p` | Pipeline name (comma-separated for multiple) | No* | - |
| `--all-pipelines` | `-a` | Search ALL pipelines (explicit opt-in) | No* | `false` |
| `--job` | `-j` | Specific job name | No | (all jobs) |
| `--count` | `-c` | Number of recent builds to search per job | No | 1 |
| `--output` | `-o` | Output format: `grep` or `json` | No | `grep` |
| `--context` | `-C` | Number of context lines before/after match | No | 0 |
| `--url` | `-u` | Concourse URL for build links | No | auto-detected |
| `--list-targets` | `-l` | List all available fly targets and exit | No | `false` |

*Either `--pipeline` or `--all-pipelines` is required (cannot use both)
| Flag | Description | Default |
|---|---|---|
| `--status` | Filter by build status (succeeded/failed/errored/aborted/pending/started) | (all) |
| `--no-color` | Disable colorized output | `false` |
| `--parallel` | Number of parallel log fetches | 5 |
| `--cache-max-age` | Maximum age of cached logs (e.g., `1h`, `30m`, `24h`) | `24h` |
| `--metadata-cache-max-age` | Maximum age of cached metadata (e.g., `5m`, `10m`) | `5m` |
| `--no-cache` | Disable cache (always fetch fresh) | `false` |
| `--clear-cache` | Clear all cached logs and exit | `false` |
```bash
# Using long flags (more readable for scripts)
./fly-search --target prod --pipeline api --search "ERROR" --count 10

# Using short flags (faster to type)
./fly-search -t prod -p api -s "ERROR" -c 10

# Mixing long and short flags
./fly-search -t prod --all-pipelines -s "panic" -c 5

# Quick all-pipelines search
./fly-search -t prod -a -s "error"
```

Beautiful, terminal-responsive table with build information and colorized output:
```
╭───────────────────────────────┬───────┬──────┬──────────┬─────────────────────────────────────────────────────╮
│ PIPELINE / JOB                │ BUILD │ LINE │ TASK     │ MATCHED CONTENT                                     │
├───────────────────────────────┼───────┼──────┼──────────┼─────────────────────────────────────────────────────┤
│ bosh-deployment/test-bosh-gcp │ 56    │ 64   │ bbl-up   │ + name = "bbl-env-winnipeg-2025-12-30t10-36z-       │
│                               │       │      │          │ jumpbox-ip"                                         │
│ bosh-deployment/test-bosh-gcp │ 56    │ 85   │ bbl-up   │ + name = "bbl-env-winnipeg-2025-12-30t10-36z-bosh-  │
│                               │       │      │          │ director"                                           │
│ → Context                     │       │      │          │ + network = "bbl-env-winnipeg-2025-12-30t10-36z-    │
│                               │       │      │          │ network"                                            │
╰───────────────────────────────┴───────┴──────┴──────────┴─────────────────────────────────────────────────────╯

Build URLs:
  [Build 56] https://bosh.ci.cloudfoundry.org/teams/main/pipelines/bosh-deployment/jobs/test-bosh-gcp/builds/56

Found 2 match(es)
```
Features:
- Terminal-responsive design: Automatically adapts to your terminal width
  - Content column adjusts from 40-120 characters based on available space
  - Intelligent line wrapping for long content (no more 33-character truncation!)
  - Works great on narrow (80 cols) and wide (200+ cols) terminals
- Beautiful Unicode borders: Modern rounded box-drawing characters (╭─┬─╮)
- Perfect color handling: Matched content highlighted in red without breaking table alignment
- Context lines: Shown inline when the `-context` flag is used
- Full clickable URLs: Listed separately with no truncation
- Unique build URLs: Shown once (no duplicates)
Structured output grouped by build, with context lines when requested:
```json
[
{
"pipeline": "bosh-bootloader",
"job": "gcp-acceptance-tests-bump-deployments",
"build_id": "124",
"build_url": "https://bosh.ci.cloudfoundry.org/teams/main/pipelines/bosh-bootloader/jobs/gcp-acceptance-tests-bump-deployments/builds/124",
"matches": [
{
"line": 331,
"content": " Using state-dir: /tmp/2099801353",
"context": [
"",
" Captured StdOut/StdErr Output >>",
" Using state-dir: /tmp/2099801353",
" List Clusters for Zone me-central2-b: googleapi: Error 403...",
" step: generating terraform template"
]
}
]
}
]
```
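The JSON output pipes cleanly into standard tools for programmatic processing. A minimal sketch using `jq` (not part of fly-search), assuming the structure shown above; the target and pipeline names are examples:

```bash
# Print the build URL and matching line number for every match
./fly-search -t prod -p deploy -s "ENV=" -o json \
  | jq -r '.[] | .build_url as $url | .matches[] | "\($url) line \(.line)"'
```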
Search the last 15 builds of a pipeline:
```bash
# Long form
./fly-search --target prod --pipeline deploy --search "ERROR" --count 15

# Short form (faster to type)
./fly-search -t prod -p deploy -s "ERROR" -c 15
```

Search across every pipeline in your target:
```bash
# Long form
./fly-search --target prod --all-pipelines --search "ERROR" --count 5

# Short form (recommended)
./fly-search -t prod -a -s "ERROR" -c 5
```

Note: With `--all-pipelines`, the first run builds metadata cache (~60s), but subsequent runs within 5 minutes are nearly instant (~0.2s)!
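The metadata TTL itself can be tuned with the flag listed in the table above; the value here is illustrative:

```bash
# Keep pipeline/job/build metadata for 10 minutes instead of the default 5
./fly-search -t prod -a -s "ERROR" --metadata-cache-max-age 10m
```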
Get JSON output for scripting:
```bash
# Long form
./fly-search --target prod --pipeline deploy --search "ENV=" --output json

# Short form
./fly-search -t prod -p deploy -s "ENV=" -o json
```

Search a specific job for test failures:
```bash
# Short form (most convenient)
./fly-search -t dev -p tests -j unit-tests -s "FAIL:"
```

Find panics in failed builds with 5 lines of context:
```bash
# Long form
./fly-search --target prod --pipeline deploy --search "panic" --status failed --context 5

# Short form
./fly-search -t prod -p deploy -s "panic" --status failed -C 5
```

Search multiple pipelines at once:
```bash
# Short form
./fly-search -t prod -p "api,worker,frontend" -s "timeout" -c 20
```

Speed up a large search with more parallel fetches:
```bash
# Short form
./fly-search -t prod -p deploy -s "error" -c 50 --parallel 10
```

First run fetches logs, subsequent runs use the cache:
```bash
# First run (slow - fetches logs)
./fly-search -t prod -p deploy -s "pattern1" -c 20

# Second run (fast - uses cache)
./fly-search -t prod -p deploy -s "pattern2" -c 20

# Different search patterns on same builds benefit from cache
./fly-search -t prod -p deploy -s "pattern3" -c 20
```

Performance tips (a combined example follows this list):
- Metadata caching: When using `-all-pipelines`, the first run is slow (discovering all pipelines/jobs/builds), but subsequent runs within 5 minutes are nearly instant
- Build log caching: Logs are cached for 24 hours by default, making repeated searches very fast
- Parallel fetching: Increase the `-parallel` value for faster searches across many builds (default: 5)
- Specific pipelines: Use `-pipeline` for targeted searches instead of `-all-pipelines` when possible
- Specific jobs: Use the `-job` flag when you know which job to search (faster than searching all jobs)
- Build status filter: Use `-status` to narrow down builds before fetching logs
- Lower build count: Use `-count` to limit the number of builds searched per job (especially important with `-all-pipelines`)
- Cache management: Run with `-clear-cache` periodically to free disk space
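Several of these tips combine naturally into a single command. A sketch with illustrative target, pipeline, and job names:

```bash
# Targeted search: one pipeline and job, failed builds only,
# a small build count, and more parallel fetches
./fly-search -t prod -p deploy -j deploy-app -s "panic" \
  --status failed -c 5 --parallel 10
```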
The tool uses two types of caching, stored in `~/.fly-search/cache/`:

Build log cache:
- Purpose: Stores actual build logs to avoid re-fetching
- Cache key: Based on target name and build ID (SHA256 hash)
- Default TTL: 24 hours (configurable with `-cache-max-age`)
- Performance gain: 2-5x faster than fresh fetches

Metadata cache:
- Purpose: Stores pipeline/job/build metadata to speed up `-all-pipelines` discovery
- Cache key: Based on target name
- Default TTL: 5 minutes (configurable with `-metadata-cache-max-age`)
- Performance gain: 100-500x faster for `-all-pipelines` (0.2s vs 60-300s)
- Why 5 minutes?: A short TTL ensures you see new builds/jobs quickly while still providing a massive speedup
Example speedup with metadata cache:
```bash
# First run: ~60 seconds (discovering 22 pipelines, 219 builds)
./fly-search -target main -all-pipelines -count 1 -search "error"

# Second run within 5 minutes: ~0.2 seconds! (using cached metadata)
./fly-search -target main -all-pipelines -count 1 -search "panic"
```

Cache management:
- Expired entries are automatically removed
- Use `-clear-cache` to clear both log and metadata caches
- Use `-no-cache` to bypass the cache for fresh data
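Since the cache is just files under `~/.fly-search/cache/`, you can also inspect it directly from the shell (the exact file layout is an implementation detail):

```bash
# How much disk space the cache is using
du -sh ~/.fly-search/cache/

# How many entries are currently cached
find ~/.fly-search/cache/ -type f | wc -l
```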
How it works (a manual fly equivalent follows the list):
- Discovers builds using the `fly builds` command
- Checks the cache for previously fetched logs (unless `-no-cache` is specified)
- Fetches logs in parallel using `fly watch` with configurable concurrency (if not cached)
- Extracts task names from the Concourse build plan API
- Saves logs to cache for future searches
- Searches logs with regex patterns
- Tracks which task generated each log line
- Presents results in clean table or JSON format with optional context lines
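Because the discovery and log-fetching steps use the stock fly CLI, a single-build search can be reproduced by hand. A rough manual equivalent with illustrative pipeline, job, and build names (not the tool's exact invocation):

```bash
# Discover recent builds for a job
fly -t my-target builds -j my-pipeline/my-job --count 1

# Fetch the log of one build (build number is illustrative)
fly -t my-target watch -j my-pipeline/my-job -b 56 > build-56.log

# Search it with a regex
grep -nE "error|failed|timeout" build-56.log
```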
Planned features:
- Stream logs in real-time for running builds
- Export results to various formats (CSV, HTML)
- Interactive mode for refining searches
- Watch mode for continuous monitoring
Feel free to open issues or submit pull requests!
License: MIT