Add support for JetPack 6.1 build (#3211)
lanluo-nvidia authored Oct 17, 2024
1 parent 0c060d1 commit c6919f4
Showing 8 changed files with 216 additions and 3 deletions.
119 changes: 119 additions & 0 deletions docsrc/getting_started/jetpack.rst
@@ -0,0 +1,119 @@
.. _Torch_TensorRT_in_JetPack_6.1:

Overview
##################

JetPack 6.1
---------------------

NVIDIA JetPack 6.1 is the latest production release of JetPack 6.
This release incorporates:

* CUDA 12.6
* TensorRT 10.3
* cuDNN 9.3
* DLFW 24.09

You can find more details about JetPack 6.1 here:

* https://docs.nvidia.com/jetson/jetpack/release-notes/index.html
* https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html


Prerequisites
~~~~~~~~~~~~~~


Ensure your Jetson developer kit has been flashed with the latest JetPack 6.1. You can find more details on how to flash the Jetson board via SDK Manager:

* https://developer.nvidia.com/sdk-manager


Check the current JetPack version using:

.. code-block:: sh

    apt show nvidia-jetpack
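If you want to check the version programmatically, the ``Version:`` field can be parsed out of the ``apt show`` output. A minimal sketch, using a hypothetical sample output (the real string comes from the command above):

```shell
# hypothetical `apt show nvidia-jetpack` output, for illustration only
apt_out="Package: nvidia-jetpack
Version: 6.1+b123
Priority: standard"

# keep only the value of the Version: field
jp_version=$(printf '%s\n' "$apt_out" | awk '/^Version:/ {print $2}')
echo "$jp_version"   # prints the hypothetical 6.1+b123
```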
Ensure you have installed the JetPack dev components. This step is required if you need to build on the Jetson board.

You can install only the dev components that you require: for example, ``tensorrt-dev`` is the meta-package for all TensorRT development. Alternatively, install everything:

.. code-block:: sh

    # install all the nvidia-jetpack dev components
    sudo apt-get update
    sudo apt-get install nvidia-jetpack

Ensure you have CUDA 12.6 installed (it should be installed automatically with nvidia-jetpack):

.. code-block:: sh

    # check the cuda version
    nvcc --version
    # if it is not installed or the version is not 12.6, install it via:
    sudo apt-get update
    sudo apt-get install cuda-toolkit-12-6
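A sketch of automating this check (the ``nvcc`` output line below is a hypothetical sample; on the device it would come from ``nvcc --version``):

```shell
# hypothetical `nvcc --version` output line, for illustration
nvcc_line="Cuda compilation tools, release 12.6, V12.6.68"

# extract the release number and compare it against the required 12.6
cuda_ver=$(echo "$nvcc_line" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
if [ "$cuda_ver" = "12.6" ]; then
    echo "CUDA 12.6 found"
else
    echo "CUDA $cuda_ver found; install cuda-toolkit-12-6"
fi
```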
Ensure ``libcusparseLt.so`` exists at ``/usr/local/cuda/lib64/``:

.. code-block:: sh

    # if it does not exist, download and copy it into the directory
    wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
    tar xf libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
    sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/include/* /usr/local/cuda/include/
    sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/lib/* /usr/local/cuda/lib64/

Build torch_tensorrt
~~~~~~~~~~~~~~~~~~~~


Install Bazel:

.. code-block:: sh

    wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.20.0/bazelisk-linux-arm64
    sudo mv bazelisk-linux-arm64 /usr/bin/bazel
    sudo chmod +x /usr/bin/bazel

Install pip and the required Python packages:
* https://pip.pypa.io/en/stable/installation/

.. code-block:: sh

    # install pip
    wget https://bootstrap.pypa.io/get-pip.py
    python get-pip.py

.. code-block:: sh

    # install pytorch from the nvidia jetson distribution: https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch
    python -m pip install torch https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl

.. code-block:: sh

    # install the required python packages
    python -m pip install -r toolchains/jp_workspaces/requirements.txt
    # if you want to run the test cases, also install the test dependencies
    python -m pip install -r toolchains/jp_workspaces/test_requirements.txt

Build and install the torch_tensorrt wheel file.


The torch_tensorrt version depends on the torch version, and the torch version supported by JetPack 6.1 comes from DLFW 24.08/24.09 (torch 2.5.0).

Please make sure to build the torch_tensorrt wheel file from the source release/2.5 branch.
(TODO: lanl to update the branch name once release/ngc branch is available)

.. code-block:: sh

    cuda_version=$(nvcc --version | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
    export TORCH_INSTALL_PATH=$(python -c "import torch, os; print(os.path.dirname(torch.__file__))")
    export SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH::-6}
    export CUDA_HOME=/usr/local/cuda-${cuda_version}/
    # replace the MODULE.bazel with the jetpack one
    cat toolchains/jp_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel
    # build and install the torch_tensorrt wheel file
    python setup.py --use-cxx11-abi install --user
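To clarify the two shell-parsing steps above, here is the same logic applied to fixed sample strings (the path and the version line are hypothetical; on the device the real values come from ``nvcc`` and the installed torch package). Note that ``${VAR::-6}`` is a bash-only substring expansion; the portable suffix-removal form ``${VAR%/torch}`` shown below is equivalent here:

```shell
# the cuda_version pipeline applied to a hypothetical nvcc output line
line="Cuda compilation tools, release 12.6, V12.6.68"
cuda_version=$(echo "$line" | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
echo "$cuda_version"        # 12.6

# trimming the trailing "/torch" yields the site-packages directory
# (hypothetical path; ${TORCH_INSTALL_PATH%/torch} is POSIX-portable,
# while the build snippet's ${TORCH_INSTALL_PATH::-6} requires bash)
TORCH_INSTALL_PATH="/usr/local/lib/python3.10/dist-packages/torch"
SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH%/torch}
echo "$SITE_PACKAGE_PATH"   # /usr/local/lib/python3.10/dist-packages
```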
1 change: 1 addition & 0 deletions docsrc/index.rst
@@ -26,6 +26,7 @@ Getting Started
:hidden:

getting_started/installation
getting_started/jetpack
getting_started/quick_start

User Guide
11 changes: 8 additions & 3 deletions setup.py
@@ -156,12 +156,14 @@ def load_dep_info():
JETPACK_VERSION = "4.6"
elif version == "5.0":
JETPACK_VERSION = "5.0"
elif version == "6.1":
JETPACK_VERSION = "6.1"

if not JETPACK_VERSION:
warnings.warn(
"Assuming jetpack version to be 5.0, if not use the --jetpack-version option"
"Assuming jetpack version to be 6.1, if not use the --jetpack-version option"
)
JETPACK_VERSION = "5.0"
JETPACK_VERSION = "6.1"

if not CXX11_ABI:
warnings.warn(
@@ -213,12 +215,15 @@ def build_libtorchtrt_pre_cxx11_abi(
elif JETPACK_VERSION == "5.0":
cmd.append("--platforms=//toolchains:jetpack_5.0")
print("Jetpack version: 5.0")
elif JETPACK_VERSION == "6.1":
cmd.append("--platforms=//toolchains:jetpack_6.1")
print("Jetpack version: 6.1")

if CI_BUILD:
cmd.append("--platforms=//toolchains:ci_rhel_x86_64_linux")
print("CI based build")

print("building libtorchtrt")
print(f"building libtorchtrt {cmd=}")
status_code = subprocess.run(cmd).returncode

if status_code != 0:
9 changes: 9 additions & 0 deletions toolchains/BUILD
@@ -35,6 +35,15 @@ platform(
],
)

platform(
name = "jetpack_6.1",
constraint_values = [
"@platforms//os:linux",
"@platforms//cpu:aarch64",
"@//toolchains/jetpack:6.1",
],
)

platform(
name = "ci_rhel_x86_64_linux",
constraint_values = [
5 changes: 5 additions & 0 deletions toolchains/jetpack/BUILD
@@ -11,3 +11,8 @@ constraint_value(
name = "4.6",
constraint_setting = ":jetpack",
)

constraint_value(
name = "6.1",
constraint_setting = ":jetpack",
)
61 changes: 61 additions & 0 deletions toolchains/jp_workspaces/MODULE.bazel.tmpl
@@ -0,0 +1,61 @@
module(
name = "torch_tensorrt",
repo_name = "org_pytorch_tensorrt",
version = "${BUILD_VERSION}"
)

bazel_dep(name = "googletest", version = "1.14.0")
bazel_dep(name = "platforms", version = "0.0.10")
bazel_dep(name = "rules_cc", version = "0.0.9")
bazel_dep(name = "rules_python", version = "0.34.0")

python = use_extension("@rules_python//python/extensions:python.bzl", "python")
python.toolchain(
ignore_root_user_error = True,
python_version = "3.11",
)

bazel_dep(name = "rules_pkg", version = "1.0.1")
git_override(
module_name = "rules_pkg",
commit = "17c57f4",
remote = "https://github.com/narendasan/rules_pkg",
)

local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "local_repository")

# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
name = "torch_tensorrt",
path = "${SITE_PACKAGE_PATH}/torch_tensorrt",
)


new_local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "new_local_repository")

# CUDA should be installed on the system locally
new_local_repository(
name = "cuda",
build_file = "@//third_party/cuda:BUILD",
path = "${CUDA_HOME}",
)

new_local_repository(
name = "libtorch",
path = "${TORCH_INSTALL_PATH}",
build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
name = "libtorch_pre_cxx11_abi",
path = "${TORCH_INSTALL_PATH}",
build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
name = "tensorrt",
path = "/usr/",
build_file = "@//third_party/tensorrt/local:BUILD"
)
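The ``${...}`` placeholders in this ``MODULE.bazel.tmpl`` file are filled in by the ``envsubst`` call in the build step documented above. A minimal sketch of that substitution on one sample line, using ``sed`` as a stand-in (assuming ``envsubst`` from gettext may not be installed; the template line is taken from this file):

```shell
# value that envsubst would read from the environment
export BUILD_VERSION="2.5.0"

# one placeholder line from the template, substituted with sed
tmpl='version = "${BUILD_VERSION}"'
line=$(printf '%s\n' "$tmpl" | sed "s/\${BUILD_VERSION}/$BUILD_VERSION/")
echo "$line"   # version = "2.5.0"
```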


4 changes: 4 additions & 0 deletions toolchains/jp_workspaces/requirements.txt
@@ -0,0 +1,4 @@
setuptools==70.2.0
numpy<2.0.0
packaging
pyyaml
9 changes: 9 additions & 0 deletions toolchains/jp_workspaces/test_requirements.txt
@@ -0,0 +1,9 @@
expecttest==0.1.6
networkx==2.8.8
numpy<2.0.0
parameterized>=0.2.0
pytest>=8.2.1
pytest-xdist>=3.6.1
pyyaml
transformers
# TODO: currently timm torchvision nvidia-modelopt does not have distributions for jetson
