Llama2 7b model C++ example #3666
base: develop
Conversation
src/api/api.cpp (Outdated)

```diff
@@ -1522,6 +1522,21 @@ extern "C" migraphx_status migraphx_module_add_instruction(migraphx_instruction_
     return api_error_result;
 }

 extern "C" migraphx_status migraphx_module_get_last_instruction(migraphx_instruction_t* out,
```
We can remove these changes (commits) which are related to exposing this API part. Because we ended up not needing these changes.
Removed them
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@          Coverage Diff           @@
##          develop    #3666  +/-  ##
========================================
  Coverage   92.23%   92.23%
========================================
  Files         514      514
  Lines       21746    21746
========================================
  Hits        20057    20057
  Misses       1689     1689
========================================
```

☔ View full report in Codecov by Sentry.
…_last_instruction" This reverts commit d970b30.
This build is not recommended to merge 🔴
🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
```bash
make -j

# Running the example
export MIOPEN_FIND_ENFORCE=3
```
Is this needed? I don't believe we make any MIOpen calls for this model.
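For context, a note on what the variable does (my understanding of MIOpen's documented behavior, not something stated in this PR): `MIOPEN_FIND_ENFORCE=3` forces MIOpen's exhaustive kernel search, so it only has an effect if the model actually lowers to MIOpen kernels at all.

```shell
# MIOPEN_FIND_ENFORCE=3 ("SEARCH") tells MIOpen to run its exhaustive
# kernel search even when a tuned result already exists in the find-db.
# If MIGraphX never calls into MIOpen for this model, the variable is inert.
export MIOPEN_FIND_ENFORCE=3
```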
The README looks fine from a grammar/spelling angle.
```python
#####################################################################################
# The MIT License (MIT)
#
# Copyright (c) 2015-2024 Advanced Micro Devices, Inc. All rights reserved.
```

Suggested change:

```diff
-# Copyright (c) 2015-2024 Advanced Micro Devices, Inc. All rights reserved.
+# Copyright (c) 2015-2025 Advanced Micro Devices, Inc. All rights reserved.
```
```bash
make -j

# Running the example
export MIOPEN_FIND_ENFORCE=3
```

Suggested change:

```diff
-export MIOPEN_FIND_ENFORCE=3
```
The same 2024 → 2025 copyright-year suggestion was repeated on each of the remaining new files (both `#`-style and `/* */`-style license headers).
### Getting the pre-quantized model from HuggingFace
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli login YOUR_HF_TOKEN
```

Suggested change:

```diff
-huggingface-cli login YOUR_HF_TOKEN
+huggingface-cli login --token <YOUR_HF_TOKEN>
```
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli login YOUR_HF_TOKEN
hugginggface-cli download https://huggingface.co/amd/Llama-2-7b-chat-hf-awq-int4-asym-gs128-onnx
```

Suggested change:

```diff
-hugginggface-cli download https://huggingface.co/amd/Llama-2-7b-chat-hf-awq-int4-asym-gs128-onnx
+huggingface-cli download amd/Llama-2-7b-chat-hf-awq-int4-asym-gs128-onnx
```
```bash
# Convert dataset to numpy format
./prepocess_dataset.py
```

I think a step is missing; I performed the steps listed, using the pre-quantized model.

Suggested change:

```diff
-./prepocess_dataset.py
+gunzip /mgx_llama2/open_orca/open_orca_gpt4_tokenized_llama.calibration_1000.pkl.gz
+python3 preprocess_dataset.py
```
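A portable alternative to the `gunzip` step is a small Python helper (a sketch only; the file path and the assumption that the `.pkl.gz` holds a single pickled object come from the suggested command, not from the example's actual code):

```python
import gzip
import pickle


def decompress_pickle(src_gz: str, dst_pkl: str):
    """Read a gzip-compressed pickle and rewrite it uncompressed,
    so a script like preprocess_dataset.py can load it directly."""
    with gzip.open(src_gz, "rb") as f:
        data = pickle.load(f)
    with open(dst_pkl, "wb") as f:
        pickle.dump(data, f)
    return data
```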
```bash
# Building the example
cd mgx_llama2
mkdir build && cd build
```

There was already a build dir, so the command failed.
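One way to make this step re-runnable (a sketch): `mkdir -p` succeeds when the directory already exists, so a second invocation no longer aborts the README's command chain.

```shell
# -p makes directory creation idempotent: the second call below is a
# no-op instead of failing with "File exists".
mkdir -p build
mkdir -p build
cd build
```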
```bash
# Running the example
export MIOPEN_FIND_ENFORCE=3
./mgxllama2
```

```
root@12320f9edbf9:/mgx_llama2/build# ./mgxllama2
terminate called after throwing an instance of 'std::runtime_error'
  what():  hip error (101): invalid device ordinal
Aborted (core dumped)
```
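`hip error (101): invalid device ordinal` usually means the process cannot see any GPU. Inside a container that typically comes down to the ROCm device nodes not being mapped in; a sketch of the commonly used `docker run` flags (the image name is a placeholder, and `run_docker.sh` may already pass some of these):

```shell
# Without /dev/kfd (compute) and /dev/dri (render) mapped into the
# container, HIP enumerates zero devices and reports ordinal errors.
ROCM_FLAGS="--device=/dev/kfd --device=/dev/dri --group-add video"
echo "docker run -it $ROCM_FLAGS <your-migraphx-image>"
```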
```bash
./mgxllama2

# Test the accuracy of the output
python3 eval_accuracy.py
```

```
root@12320f9edbf9:/mgx_llama2# python3 eval_accuracy.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/config.json
```
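The 401 comes from `meta-llama/Llama-2-7b-chat-hf`, which is a gated repo, so the script needs HuggingFace credentials even though the ONNX model itself came from the `amd/` repo. One non-interactive way to authenticate (a sketch; the token value is a placeholder, and the account behind it must have been granted access to the gated repo):

```shell
# huggingface_hub picks up HF_TOKEN from the environment, so no
# interactive `huggingface-cli login` is needed inside the container.
export HF_TOKEN="hf_placeholder_token"   # placeholder, not a real token
# then re-run: python3 eval_accuracy.py
```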
#### Building and running the example
```bash
# Convert dataset to numpy format
```

When I launched the docker via run_docker.sh I landed in the /mgx_llama2/build directory, so the preprocess_dataset.py command's path is wrong.
Implemented an example for the Llama2 7b model (https://huggingface.co/amd/Llama-2-7b-chat-hf-awq-int4-asym-gs128-onnx/tree/main) with MIGraphX.
Details about the example and instructions for running it are available in the README (https://github.com/ROCm/AMDMIGraphX/tree/htec/mgx-llama2-7b-example/examples/transformers/mgx_llama2).