Build(deps): Bump torch from 2.1.0 to 2.1.1 (#124)
⚠️  **Dependabot is rebasing this PR** ⚠️ 

Rebasing might not happen immediately, so don't worry if this takes some
time.

Note: if you make any changes to this PR yourself, they will take
precedence over the rebase.

---


Bumps [torch](https://github.com/pytorch/pytorch) from 2.1.0 to 2.1.1.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/pytorch/pytorch/commit/4c55dc50355d5e923642c59ad2a23d6ad54711e7"><code>4c55dc5</code></a>
remove _shard_tensor() call (<a
href="https://redirect.github.com/pytorch/pytorch/issues/111687">#111687</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/f58669bc5fc7d89650794881ab7cf0029b5f5bb3"><code>f58669b</code></a>
<code>c10::DriverAPI</code> Try opening libcuda.so.1 (<a
href="https://redirect.github.com/pytorch/pytorch/issues/113096">#113096</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/33106b706e9e60da1ee5c12649b0c7c30c3e9c5b"><code>33106b7</code></a>
[DCP] Add test for planner option for load_sharded_optimizer_state_dict
(<a
href="https://redirect.github.com/pytorch/pytorch/issues/11">#11</a>...</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/4b4c012a6033dbebd432706f861df7430b87d95b"><code>4b4c012</code></a>
Enable planner to be used for loading sharded optimizer state dict (<a
href="https://redirect.github.com/pytorch/pytorch/issues/112520">#112520</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/47ac50248a047aa45f0f5d358b1069361d69f7a0"><code>47ac502</code></a>
[DCP][test] Make dim_0 size of params scale with world_size in
torch/distribu...</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/dc96ecb8acba739ea3fca9882cbb4be5662352bc"><code>dc96ecb</code></a>
Fix mem eff bias bug (<a
href="https://redirect.github.com/pytorch/pytorch/issues/112673">#112673</a>)
(<a
href="https://redirect.github.com/pytorch/pytorch/issues/112796">#112796</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/18a2ed1db198c5fd017231331fceed6c3ae3227f"><code>18a2ed1</code></a>
Mirror of Xformers Fix (<a
href="https://redirect.github.com/pytorch/pytorch/issues/112267">#112267</a>)
(<a
href="https://redirect.github.com/pytorch/pytorch/issues/112795">#112795</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/b2e1277247311934dcd962afca6143d9868ab365"><code>b2e1277</code></a>
Fix the meta func for mem_eff_backward (<a
href="https://redirect.github.com/pytorch/pytorch/issues/110893">#110893</a>)
(<a
href="https://redirect.github.com/pytorch/pytorch/issues/112792">#112792</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/b249946c40eeeb3b2fc0c48e2086087351da2d8a"><code>b249946</code></a>
[Release-only] Pin Docker images to 2.1 for release (<a
href="https://redirect.github.com/pytorch/pytorch/issues/112665">#112665</a>)</li>
<li><a
href="https://github.com/pytorch/pytorch/commit/ee79fc8a35e3a075ccb8370e09f12eb4b32bd48e"><code>ee79fc8</code></a>
Revert &quot;Fix bug: not creating empty tensor with correct sizes and
device. (<a
href="https://redirect.github.com/pytorch/pytorch/issues/1">#1</a>...</li>
<li>Additional commits viewable in <a
href="https://github.com/pytorch/pytorch/compare/v2.1.0...v2.1.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=torch&package-manager=pip&previous-version=2.1.0&new-version=2.1.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>
Diapolo10 authored Nov 28, 2023
2 parents 73888ca + 07aaeff commit 113fc2a
Showing 2 changed files with 203 additions and 24 deletions.
225 changes: 202 additions & 23 deletions poetry.lock


2 changes: 1 addition & 1 deletion pyproject.toml
@@ -75,7 +75,7 @@ arcade = '^2.6.17'
 grpcio = '^1.59.2'
 numpy = '^1.24.0'
 shapely = '^2.0.2'
-torch = '^2.1.0'
+torch = '^2.1.1'
 tensorflow = '~2.11.0' # Last known version with Windows support
 tensorflow-io-gcs-filesystem = '~0.34.0' # Ditto

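After pulling this bump, a quick sanity check (not part of this PR; a minimal sketch assuming the `packaging` library is available) is to confirm that the locally installed torch satisfies the new constraint. Poetry's caret specifier `^2.1.1` is equivalent to the PEP 440 range `>=2.1.1,<3.0.0`:

```python
# Minimal sketch (not part of this PR): verify the installed torch version
# against the new pyproject.toml constraint. Assumes 'packaging' is installed.
from importlib.metadata import version

from packaging.specifiers import SpecifierSet
from packaging.version import Version

installed = Version(version("torch"))        # e.g. Version("2.1.1")
constraint = SpecifierSet(">=2.1.1,<3.0.0")  # expansion of Poetry's '^2.1.1'

if installed in constraint:
    print(f"torch {installed} satisfies ^2.1.1")
else:
    print(f"torch {installed} does NOT satisfy ^2.1.1; try 'poetry update torch'")
```

Running `poetry update torch` (or `poetry install` against the updated `poetry.lock`) is the usual way to bring a local environment in line with the new pin.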

