
run llama-3.1-8b on 3 m4 pro cluster #275

Triggered via push on December 12, 2024 at 15:13
Status: Failure
Total duration: 12m 25s

Workflow file: benchmarks.yml
Trigger: push
Matrix: three-m4-pro-cluster

Annotations

5 errors and 3 warnings
three-m4-pro-cluster (llama-3.1-8b) / run-distributed-job (M4PRO_GPU16_24GB): Process completed with exit code 1.
three-m4-pro-cluster (llama-3.1-8b) / run-distributed-job (M4PRO_GPU16_24GB): The job was canceled because "M4PRO_GPU16_24GB" failed.
three-m4-pro-cluster (llama-3.1-8b) / run-distributed-job (M4PRO_GPU16_24GB): The job was canceled because "M4PRO_GPU16_24GB" failed.
three-m4-pro-cluster (llama-3.2-1b) / generate-matrix: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
three-m4-pro-cluster (llama-3.2-3b) / generate-matrix: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
three-m4-pro-cluster (llama-3.1-8b) / generate-matrix: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
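The "canceled because ... failed" annotations are the default fail-fast behavior of a GitHub Actions matrix: when one matrix leg exits non-zero, the remaining in-progress legs are canceled. Below is a minimal, hypothetical sketch of how a benchmarks.yml with this shape could be structured; only the job names (generate-matrix, run-distributed-job), the runner label M4PRO_GPU16_24GB, and the push trigger are taken from this run — everything else (step names, the benchmark script) is assumed for illustration.

```yaml
# Hypothetical sketch of benchmarks.yml; not the actual workflow from this run.
name: benchmarks
on: push

jobs:
  generate-matrix:
    # ubuntu-latest here is what produces the ubuntu-24.04 migration warnings above
    runs-on: ubuntu-latest
    outputs:
      runners: ${{ steps.set.outputs.runners }}
    steps:
      - id: set
        # Emit the self-hosted runner labels for the cluster as a JSON list
        run: echo 'runners=["M4PRO_GPU16_24GB"]' >> "$GITHUB_OUTPUT"

  run-distributed-job:
    needs: generate-matrix
    strategy:
      # fail-fast defaults to true: one leg failing with exit code 1
      # cancels the other legs, matching the annotations above
      fail-fast: true
      matrix:
        runner: ${{ fromJSON(needs.generate-matrix.outputs.runners) }}
    runs-on: ${{ matrix.runner }}
    steps:
      - run: ./run-benchmark.sh   # hypothetical benchmark entry point
```

Setting `fail-fast: false` in the strategy would instead let every leg run to completion, which is often preferable for benchmarks where partial results from the surviving machines are still useful.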