add llama-3.3-70b to 3 M4 Pro cluster #279

Re-run triggered: December 14, 2024 17:14
Status: Failure
Total duration: 12m 7s

Workflow: benchmarks.yml
Triggered on: push
Matrix: three-m4-pro-cluster
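For context, the run's job names suggest benchmarks.yml first runs a generate-matrix job on ubuntu-latest (the source of the four migration warnings below) and then fans out run-distributed-job across the generated matrix. The following is a hypothetical sketch under those assumptions; the trigger, matrix name, job names, model list, and runner label come from this run log, while the step contents, output names, and matrix shape are illustrative guesses, not the actual file:

```yaml
# Hypothetical sketch of benchmarks.yml, reconstructed from job names in the run log.
name: benchmarks

on: push

jobs:
  generate-matrix:
    runs-on: ubuntu-latest            # triggers the ubuntu-24.04 migration warning
    outputs:
      matrix: ${{ steps.set.outputs.matrix }}
    steps:
      - id: set
        # Assumed: emits the runner matrix as JSON for the downstream job.
        run: echo 'matrix=["M4PRO_GPU16_24GB"]' >> "$GITHUB_OUTPUT"

  run-distributed-job:
    needs: generate-matrix
    strategy:
      fail-fast: true                 # consistent with the errors below: once one
                                      # shard fails, the sibling shards are canceled
      matrix:
        host: ${{ fromJSON(needs.generate-matrix.outputs.matrix) }}
    runs-on: ${{ matrix.host }}
    steps:
      - run: ./run_benchmark.sh       # assumed entry point, not from the log
```

The fail-fast behavior matters for reading the annotations: only the first error is a real failure (exit code 1); the two cancellation errors are collateral from the same matrix.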

Annotations: 5 errors and 4 warnings
three-m4-pro-cluster (llama-3.3-70b) / run-distributed-job (M4PRO_GPU16_24GB)
  Error: Process completed with exit code 1.
three-m4-pro-cluster (llama-3.3-70b) / run-distributed-job (M4PRO_GPU16_24GB)
  Error: The job was canceled because "M4PRO_GPU16_24GB" failed.
three-m4-pro-cluster (llama-3.3-70b) / run-distributed-job (M4PRO_GPU16_24GB)
  Error: The job was canceled because "M4PRO_GPU16_24GB" failed.
three-m4-pro-cluster (llama-3.1-8b) / generate-matrix
  Warning: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
three-m4-pro-cluster (llama-3.2-3b) / generate-matrix
  Warning: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
three-m4-pro-cluster (llama-3.3-70b) / generate-matrix
  Warning: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
three-m4-pro-cluster (llama-3.2-1b) / generate-matrix
  Warning: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636