chore(deps): update python #1298
Merged: bourgeoisor merged 13 commits into GoogleCloudPlatform:main from renovate-bot:renovate/python on Jun 10, 2024
Conversation
renovate-bot force-pushed the renovate/python branch 2 times, most recently from a8b857a to 6705ae1 (May 27, 2024 07:03)
renovate-bot force-pushed the renovate/python branch 11 times, most recently from 3196314 to 061be94 (May 31, 2024 13:22)
renovate-bot force-pushed the renovate/python branch 11 times, most recently from e311ff3 to e335058 (June 7, 2024 19:52)
renovate-bot force-pushed the renovate/python branch 3 times, most recently from 4f51b0a to 7092428 (June 10, 2024 15:18)
renovate-bot force-pushed the renovate/python branch from 7092428 to a57db51 (June 10, 2024 15:19)
Edited/Blocked Notification: Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR. You can manually request a rebase by ticking the rebase/retry checkbox above.
bourgeoisor approved these changes on Jun 10, 2024
This PR contains the following updates:
- elasticsearch: ==8.13.1 -> ==8.13.2 (newer available: 8.14.0)
- emoji: ==2.11.1 -> ==2.12.1
- google-cloud-aiplatform: ==1.51.0 -> ==1.53.0 (newer available: 1.54.1, +1)
- ==0.1.20 -> ==0.2.1 (newer available: 0.2.3, +1)
- ==0.0.38 -> ==0.2.1 (newer available: 0.2.4, +2)
- pymupdf: ==1.24.3 -> ==1.24.5
- requests: ==2.32.0 -> ==2.32.3
- scikit-learn: ==1.4.2 -> ==1.5.0
- scipy: ==1.13.0 -> ==1.13.1
- streamlit: ==1.34.0 -> ==1.35.0
- transformers: ==4.40.2 -> ==4.41.2
Release Notes
elastic/elasticsearch-py (elasticsearch)

v8.13.2
- ml.update_trained_model_deployment API

carpedm20/emoji (emoji)

v2.12.1
- typing-extensions now requires at least version 4.7.0 (#297)

v2.12.0
- functools.lru_cache for looking up emoji by name with get_emoji_by_name()
- get_emoji_unicode_dict(), get_aliases_unicode_dict(), _EMOJI_UNICODE and _ALIASES_UNICODE moved to testutils
- coveralls removed
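The lru_cache change above lends itself to a short generic illustration. This sketch uses a hypothetical emoji table and lookup body; it shows the functools.lru_cache memoization pattern, not the emoji package's actual internals:

```python
# Generic sketch of memoizing a by-name lookup with functools.lru_cache,
# the pattern the emoji 2.12.0 notes describe. _EMOJI and the scan below
# are illustrative stand-ins for the library's real data and lookup.
from functools import lru_cache

_EMOJI = {"thumbs up": "\U0001F44D", "red heart": "\u2764\uFE0F"}

@lru_cache(maxsize=None)
def get_emoji_by_name(name):
    # A linear scan stands in for the uncached lookup; after the first
    # call for a given name, lru_cache answers from its own dictionary.
    for key, value in _EMOJI.items():
        if key == name:
            return value
    return None

get_emoji_by_name("thumbs up")
get_emoji_by_name("thumbs up")  # second call is a cache hit
print(get_emoji_by_name.cache_info().hits)  # → 1
```

cache_info() makes the effect observable: repeated lookups register as hits instead of re-running the scan.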
googleapis/python-aiplatform (google-cloud-aiplatform)

v1.53.0
Features
- cloneable protocol for Reasoning Engine. (8960a80)
- GenerationConfig.response_schema field (#3772) (5436d88)
- response_schema parameter to the GenerationConfig class (b5e2c02)
- seed parameter to the TextGenerationModel's predict methods (cb2f4aa)
Bug Fixes
- adapter_size parameter handling to match enum values. (1cc22c3)

v1.52.0
Features
- BatchPredictionJob.submit method (4d091c6)
Bug Fixes
Documentation
pymupdf/pymupdf (pymupdf)

v1.24.5: PyMuPDF-1.24.5 released
PyMuPDF-1.24.5 has been released. Wheels for Windows, Linux and MacOS, and the sdist, are available on pypi.org and can be installed in the usual way (for example, python -m pip install --upgrade pymupdf). [Linux-aarch64 wheels will be built and uploaded later.]
Changes in version 1.24.5 (2024-05-30)
Fixed issues:
Other:
- pymupdf.apply_pages() and pymupdf.get_text()

v1.24.4: PyMuPDF-1.24.4 released
PyMuPDF-1.24.4 has been released. Wheels for Windows, Linux and MacOS, and the sdist, are available on pypi.org and can be installed in the usual way (for example, python -m pip install --upgrade pymupdf). [Linux-aarch64 wheels will be built and uploaded later.]
Changes in version 1.24.4 (2024-05-16)
Fixed #3418
Other:
- new install
- utils.do_links() crash
- TextPage creation code
- page_merge()
psf/requests (requests)

v2.32.3
Bugfixes
- Fixed handling of custom SSLContexts in subclasses of HTTPAdapter. (#6716)
- Fixed running Requests on Python builds without the ssl module. (#6724)

v2.32.2
Deprecations
To provide a more stable migration for custom HTTPAdapters impacted by the CVE changes in 2.32.0, we've renamed _get_connection to a new public API, get_connection_with_tls_context. Existing custom HTTPAdapters will need to migrate their code to use this new API. get_connection is considered deprecated in all versions of Requests >= 2.32.0.
A minimal (2-line) example has been provided in the linked PR to ease migration, but we strongly urge users to evaluate if their custom adapter is subject to the same issue described in CVE-2024-35195. (#6710)
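A migration along those lines can be sketched as follows; the LoggingAdapter subclass and its print call are hypothetical (assuming requests >= 2.32.2), not the example from the linked PR:

```python
# Sketch: a custom HTTPAdapter that previously overrode the private
# _get_connection now overrides the public get_connection_with_tls_context.
import requests
from requests.adapters import HTTPAdapter

class LoggingAdapter(HTTPAdapter):
    def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):
        # Custom per-request logic goes here, then defer to the base class.
        print(f"resolving connection for {request.url}")
        return super().get_connection_with_tls_context(
            request, verify, proxies=proxies, cert=cert
        )

session = requests.Session()
session.mount("https://", LoggingAdapter())  # adapter now handles all https URLs
```

The override keeps the same keyword arguments as the base method, so existing per-connection logic can usually move over unchanged.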
v2.32.1
Bugfixes
scikit-learn/scikit-learn (scikit-learn)

v1.5.0: Scikit-learn 1.5.0
We're happy to announce the 1.5.0 release. You can read the release highlights under https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_1_5_0.html and the long version of the change log under https://scikit-learn.org/stable/whats_new/v1.5.html
This version supports Python versions 3.9 to 3.12.
You can upgrade with pip as usual (pip install -U scikit-learn); the conda-forge builds can be installed with conda install -c conda-forge scikit-learn.
scipy/scipy (scipy)

v1.13.1: SciPy 1.13.1
SciPy 1.13.1 Release Notes
SciPy 1.13.1 is a bug-fix release with no new features compared to 1.13.0. The version of OpenBLAS shipped with the PyPI binaries has been increased to 0.3.27.
Authors
A total of 19 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete.
streamlit/streamlit (streamlit)

v1.35.0
What's Changed
New Features 🎉
- st.plotly_chart by @willhuang1997 in https://github.com/streamlit/streamlit/pull/8191
- st.logo by @mayagbarnes in https://github.com/streamlit/streamlit/pull/8554
- st.altair_chart & st.vega_lite_chart by @willhuang1997 in https://github.com/streamlit/streamlit/pull/8302
Bug Fixes 🐛
Other Changes
- st.table by @LukasMasuch in https://github.com/streamlit/streamlit/pull/8621
- .update and .from_dict by @Asaurus1 in https://github.com/streamlit/streamlit/pull/8614
New Contributors
Full Changelog: streamlit/streamlit@1.34.0...1.35.0
huggingface/transformers (transformers)

v4.41.2
Release v4.41.2
Mostly fixing some issues related to trust_remote_code=True and from_pretrained. The local_file_only option was having a hard time when a .safetensors file did not exist; this is not expected, and instead of trying to convert we should just fall back to loading the .bin files.

v4.41.1: Fix PaliGemma finetuning, and some small bugs
Release v4.41.1
Fix PaliGemma finetuning: the causal mask and label creation was causing label leaks when training. Kudos to @probicheaux for finding and reporting!
Other fixes:
- Reverted huggingface/transformers@4ab7a28
v4.41.0: Phi3, JetMoE, PaliGemma, VideoLlava, Falcon2, FalconVLM & GGUF support
New models
Phi3
The Phi-3 model was proposed in Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone by Microsoft.
TL;DR: Phi-3 introduces new RoPE scaling methods, which seem to scale fairly well. Phi-3-mini is available in two context-length variants, 4K and 128K tokens; it is the first model in its class to support a context window of up to 128K tokens, with little impact on quality.
JetMoE
JetMoe-8B is an 8B Mixture-of-Experts (MoE) language model developed by Yikang Shen and MyShell. The JetMoe project aims to provide LLaMA2-level performance from an efficient language model on a limited budget. To achieve this goal, JetMoe uses a sparsely activated architecture inspired by ModuleFormer. Each JetMoe block consists of two MoE layers: Mixture of Attention Heads and Mixture of MLP Experts. Given the input tokens, it activates a subset of its experts to process them. This sparse activation scheme enables JetMoe to achieve much better training throughput than dense models of similar size. The training throughput of JetMoe-8B is around 100B tokens per day on a cluster of 96 H100 GPUs with a straightforward 3-way pipeline parallelism strategy.
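The sparse-activation idea above can be sketched with a toy top-k router. All shapes, names, and the random weights below are illustrative, not JetMoe's implementation:

```python
# Toy sketch of sparse MoE routing: a router scores every expert per
# token, but only the top-k experts actually run for that token.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts, top_k = 4, 8, 6, 2

x = rng.normal(size=(n_tokens, d_model))          # token representations
router_w = rng.normal(size=(d_model, n_experts))  # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

logits = x @ router_w                              # (n_tokens, n_experts)
top = np.argsort(logits, axis=-1)[:, -top_k:]      # top-k expert ids per token

out = np.zeros_like(x)
for t in range(n_tokens):
    # Softmax over the selected experts' logits only.
    sel = logits[t, top[t]]
    gates = np.exp(sel - sel.max())
    gates /= gates.sum()
    for gate, e in zip(gates, top[t]):
        out[t] += gate * (x[t] @ experts[e])       # only k of n experts run
```

Only top_k of the n_experts matrices touch each token, which is the source of the training-throughput win the paragraph describes.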
PaliGemma
PaliGemma is a lightweight open vision-language model (VLM) inspired by PaLI-3, and based on open components like the SigLIP vision model and the Gemma language model. PaliGemma takes both images and text as inputs and can answer questions about images with detail and context, meaning that PaliGemma can perform deeper analysis of images and provide useful insights, such as captioning for images and short videos, object detection, and reading text embedded within images.
More than 120 checkpoints have been released; see the collection here!
VideoLlava
Video-LLaVA exhibits remarkable interactive capabilities between images and videos, despite the absence of image-video pairs in the dataset.
💡 Simple baseline, learning united visual representation by alignment before projection
With the binding of unified visual representations to the language feature space, we enable an LLM to perform visual reasoning capabilities on both images and videos simultaneously.
🔥 High performance, complementary learning with video and image
Extensive experiments demonstrate the complementarity of modalities, showcasing significant superiority when compared to models specifically designed for either images or videos.
Falcon 2 and FalconVLM:
Two new models from TII-UAE! They published a blog post with more details. Falcon2 introduces parallel MLP, and Falcon VLM uses the Llava framework.

GGUF from_pretrained support
You can now load most of the GGUF quants directly with transformers' from_pretrained to convert them to classic PyTorch models. The API is simple.
We plan closer integrations with the llama.cpp / GGML ecosystem in the future; see https://github.com/huggingface/transformers/issues/27712 for more details.
Quantization

New quant methods
In this release we support new quantization methods, HQQ & EETQ, contributed by the community. Read more about how to quantize any transformers model using HQQ & EETQ in the dedicated documentation section.

dequantize API for bitsandbytes models
In case you want to dequantize models that have been loaded with bitsandbytes, this is now possible through the dequantize API (for example, to merge adapter weights).
- dequantize API for bitsandbytes quantized models by @younesbelkada in https://github.com/huggingface/transformers/pull/30806
API-wise, you can achieve that with the model's dequantize() method.
Generation updates
- min_p sampling by @gante in https://github.com/huggingface/transformers/pull/30639
- Make Gemma work with torch.compile by @ydshieh in https://github.com/huggingface/transformers/pull/30775

SDPA support
- [BERT] Add support for sdpa by @hackyon in https://github.com/huggingface/transformers/pull/28802

Improved Object Detection
Addition of fine-tuning script for object detection models
Interpolation of embeddings for vision models
Add interpolation of embeddings. This enables predictions from pretrained models on input images of sizes different from those the model was originally trained on. Simply pass interpolate_pos_embedding=True when calling the model.
Added for: BLIP, BLIP 2, InstructBLIP, SigLIP, ViViT
🚨 might be breaking
- Rename evaluation_strategy to eval_strategy 🚨🚨🚨 by @muellerzr in https://github.com/huggingface/transformers/pull/30190

Cleanups
Not breaking but important for Llama tokenizers
- [LlamaTokenizerFast] Refactor default llama by @ArthurZucker in https://github.com/huggingface/transformers/pull/28881

Fixes
- prev_ci_results by @ydshieh in https://github.com/huggingface/transformers/pull/30313
- pad token id in pipeline forward arguments by @zucchini-nlp in https://github.com/huggingface/transformers/pull/30285
- jnp import in utils/generic.py by @ydshieh in https://github.com/huggingface/transformers/pull/30322
- AssertionError in clip conversion script by @ydshieh in https://github.com/huggingface/transformers/pull/30321
- pad_token_id again by @zucchini-nlp in https://github.com/huggingface/transformers/pull/30338
- Llama family: fix use_cache=False generation by @ArthurZucker in https://github.com/huggingface/transformers/pull/30380
- -rs to show skip reasons by @ArthurZucker in https://github.com/huggingface/transformers/pull/30318
- require_torch_sdpa for test that needs sdpa support by @faaany in https://github.com/huggingface/transformers/pull/30408
- [LlamaTokenizerFast] Refactor default llama by @ArthurZucker in https://github.com/huggingface/transformers/pull/28881
- [Llava] + CIs: fix red CIs and llava integration tests by @ArthurZucker in https://github.com/huggingface/transformers/pull/30440
- paths filter to avoid the chance of being triggered by @ydshieh in https://github.com/huggingface/transformers/pull/30453
- utils/check_if_new_model_added.py by @ydshieh in https://github.com/huggingface/transformers/pull/30456
- [research_project] Most of the security issues come from this requirement.txt by @ArthurZucker in https://github.com/huggingface/transformers/pull/29977
- WandbCallback with third parties by @tomaarsen in https://github.com/huggingface/tr

Configuration
📅 Schedule: Branch creation - "before 4am on Monday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR has been generated by Mend Renovate. View repository job log here.