usage: ready-steady-go [-h] [--wnb WNB] [--wnb_run WNB_RUN]
                       [--wnb_project WNB_PROJECT] [--wnb_entity WNB_ENTITY]
                       [--run_number RUN_NUMBER] [--model MODEL] [--bs BS]
                       [--size SIZE] [--fp16] [--n_batches N_BATCHES]
                       [--n_seconds N_SECONDS]
options:
  -h, --help            show this help message and exit
  --wnb WNB             W&B mode. Accepted values: online, offline, disabled.
                        (default: disabled)
  --wnb_run WNB_RUN     W&B run name (auto-generated if None)
  --wnb_project WNB_PROJECT
  --wnb_entity WNB_ENTITY
  --run_number RUN_NUMBER
                        A unique number to keep track of repeat runs
                        (default: 1)
  --model MODEL         TIMM model name (default: resnet50)
  --bs BS               Batch size (default: 32)
  --size SIZE           (fake) image size (default: 224)
  --fp16                (default: False)
  --n_batches N_BATCHES
                        Run for N batches. Mutually exclusive with `n_seconds`
                        (default: 0)
  --n_seconds N_SECONDS
                        Run for N seconds. Mutually exclusive with `n_batches`
                        (default: 0)
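For example, a single 30-second FP16 benchmark of resnet50 at batch size 64 (illustrative values, using only the flags documented above) could look like:

ready-steady-go --model=resnet50 --bs=64 --fp16 --n_seconds=30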
CLI
Install with pip
pip install ready-steady-go
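Once installed, a quick sanity check is to print the help shown above:

ready-steady-go --help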
How to use
Batch mode
To run the benchmark over a range of models and batch sizes, have a look at the run_all_benchmarks.sh script:
#!/bin/bash
WANDB_MODE="online"
WANDB_PROJECT="ready-steady-go"
MODELS="resnet50 vgg19 swin_s3_base_224"
BATCHES="8 16 32 64 128 256 512 1024 2048 4096"
N_SECONDS=30
#set -x
nvidia-smi -q > gpu-info.txt
cat /proc/cpuinfo > cpu-info.txt
wandb login
echo "Warming up the GPU for 3 minutes..."
ready-steady-go --model=resnet50 --n_seconds=180
echo "Running benchmarks..."
# You can do multiple runs, but in my experience the results barely change between runs.
for RUN in 1 #2 3
do
    for m in $MODELS; do
        # A lone space works as "no flag": the unquoted $fp16 word-splits to nothing on the first pass.
        for fp16 in " " "--fp16"; do
            for bs in $BATCHES; do
                ready-steady-go --model=$m $fp16 --bs=$bs --n_seconds=$N_SECONDS \
                    --wnb=$WANDB_MODE --wnb_project=$WANDB_PROJECT --run_number=$RUN
                if [ $? -ne 0 ]; then
                    # We probably hit a batch size the GPU can't handle.
                    # No need to try larger batch sizes.
                    break
                fi
            done
        done
    done
done
# Sync everything just in case. On rare occasions wandb fails to update the run summary otherwise.
wandb sync --sync-all --include-synced
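Assuming the script is saved as run_all_benchmarks.sh, one way to launch the sweep and keep a local log (benchmark.log is just an example file name) is:

chmod +x run_all_benchmarks.sh
./run_all_benchmarks.sh 2>&1 | tee benchmark.log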