CLI Reference

The unified AutoPipeline entry point is:

```shell
autopipeline
```

If you have not installed the wrapper command, you can call the module directly:

```shell
python -m src.cli.autopipeline
```

Subcommands

  • annotation
  • eval
  • train-pairs
> **Note:** The current implementation hardcodes many defaults under `/data/open_edit/...`. If your checkout lives elsewhere, pass `--pipeline-config-path`, `--user-config`, `--save-path`, and related paths explicitly.

annotation

Purpose: run a human-centric or object-centric pipeline and score candidate edited images with structured metrics.

```shell
autopipeline annotation \
  --edit-task <task> \
  --pipeline-config-path <pipeline-yaml> \
  [--max-workers 4] \
  [--save-path /data/open_edit/data/c_annotated_group_data] \
  [--user-config /data/open_edit/configs/pipelines/user_config.yaml] \
  [--candidate-pool-dir /data/open_edit/configs/datasets/candidate_pools]
```

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| `--edit-task` | Yes | Edit task name. The CLI normalizes it into lowercase underscore form. |
| `--pipeline-config-path` | Yes | Absolute path to the pipeline YAML. |
| `--max-workers` | No | Worker parallelism. |
| `--save-path` | No | Output directory for results and cache. |
| `--user-config` | No | User config YAML. |
| `--candidate-pool-dir` | No | Directory containing candidate pool JSON files. |
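The table above notes that `--edit-task` is normalized into lowercase underscore form. The exact normalization logic is not documented here, so the following is only a minimal sketch of what such a step might look like; `normalize_edit_task` is a hypothetical helper, not the project's actual implementation:

```python
def normalize_edit_task(task: str) -> str:
    """Hypothetical sketch: lowercase the task name and replace
    spaces and hyphens with underscores."""
    return task.strip().lower().replace("-", "_").replace(" ", "_")

print(normalize_edit_task("Background Change"))  # background_change
```

Under this assumption, `--edit-task "Background Change"` and `--edit-task background_change` would select the same task.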

eval

Purpose: run VLM-as-a-judge evaluation and produce pairwise winners on a benchmark.

```shell
autopipeline eval \
  --bmk <benchmark-key> \
  --pipeline-config-path <pipeline-yaml> \
  [--max-workers 4] \
  [--save-path /data/open_edit/data/reward_eval_results] \
  [--user-config /data/open_edit/configs/pipelines/user_config.yaml] \
  [--bmk-config /data/open_edit/configs/datasets/bmk.json] \
  [--openedit-metadata-file metadata.jsonl]
```

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| `--bmk` | Yes | Benchmark key defined in `bmk.json`. |
| `--pipeline-config-path` | Yes | Judge pipeline YAML. |
| `--max-workers` | No | Worker parallelism. |
| `--save-path` | No | Result directory. |
| `--user-config` | No | User config YAML. |
| `--bmk-config` | No | Benchmark config file. |
| `--openedit-metadata-file` | No | Metadata filename used only for openedit evaluation. |
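`--bmk` must match a key defined in `bmk.json`. The real schema of that file is not shown here, so the sketch below only illustrates the lookup pattern: the inline config, its field names, and the `resolve_benchmark` helper are all assumptions for illustration:

```python
import json

# Hypothetical bmk.json contents -- the real schema is defined by the project.
bmk_config_text = '{"my_benchmark": {"metadata_file": "metadata.jsonl"}}'

def resolve_benchmark(bmk_key: str, config_text: str) -> dict:
    """Look up a benchmark key in the config and fail fast if missing."""
    config = json.loads(config_text)
    if bmk_key not in config:
        raise KeyError(f"--bmk value {bmk_key!r} not found in benchmark config")
    return config[bmk_key]

print(resolve_benchmark("my_benchmark", bmk_config_text))
```

Failing fast on an unknown key mirrors what you would expect from the CLI: a typo in `--bmk` should surface immediately rather than produce an empty evaluation run.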

train-pairs

Purpose: convert grouped results into preference-training data.

```shell
autopipeline train-pairs \
  --tasks <task1,task2,...> \
  [--prompts-num 1500] \
  [--prefix ""] \
  [--input-dir /data/open_edit/data/c_annotated_group_data] \
  [--output-dir /data/open_edit/data/d_train_data] \
  [--mode auto] \
  [--filt-out-strategy three_tiers] \
  [--thresholds-config-file /data/open_edit/configs/pipelines/data_construction_configs.json]
```

Parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| `--tasks` | Yes | Comma-separated task names. |
| `--prompts-num` | No | Maximum number of prompt groups per task. |
| `--prefix` | No | Optional output subdirectory prefix. |
| `--input-dir` | No | Input directory for grouped results. |
| `--output-dir` | No | Output directory for train pairs. |
| `--mode` | No | One of `auto`, `group`, or `judge`. |
| `--filt-out-strategy` | No | One of `head_tail` or `three_tiers`. |
| `--thresholds-config-file` | No | Threshold config for `group` mode. |
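`--tasks` takes a comma-separated list. How the CLI tokenizes it is not specified here; a reasonable sketch, assuming it splits on commas and discards surrounding whitespace and empty entries (`parse_tasks` is a hypothetical helper):

```python
def parse_tasks(tasks_arg: str) -> list[str]:
    """Hypothetical sketch: split a comma-separated --tasks value,
    stripping whitespace and dropping empty entries."""
    return [t.strip() for t in tasks_arg.split(",") if t.strip()]

print(parse_tasks("background_change, face_swap"))  # ['background_change', 'face_swap']
```

Under this assumption, `--tasks "background_change, face_swap"` and `--tasks background_change,face_swap` would be equivalent.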

Two common invocation styles

User-facing CLI

```shell
autopipeline annotation ...
autopipeline eval ...
autopipeline train-pairs ...
```

Module entry point

```shell
python -m src.cli.autopipeline annotation ...
python -m src.cli.autopipeline eval ...
python -m src.cli.autopipeline train-pairs ...
```