
chore(deps): update python #1298

Merged 13 commits into GoogleCloudPlatform:main from renovate/python on Jun 10, 2024

Conversation

renovate-bot (Contributor)

@renovate-bot renovate-bot commented May 27, 2024

Mend Renovate

This PR contains the following updates:

| Package | Change | Update | Pending |
| --- | --- | --- | --- |
| elasticsearch | `==8.13.1` -> `==8.13.2` | patch | 8.14.0 |
| emoji | `==2.11.1` -> `==2.12.1` | minor | |
| google-cloud-aiplatform | `==1.51.0` -> `==1.53.0` | minor | 1.54.1 (+1) |
| langchain | `==0.1.20` -> `==0.2.1` | minor | 0.2.3 (+1) |
| langchain-community | `==0.0.38` -> `==0.2.1` | minor | 0.2.4 (+2) |
| pymupdf (changelog) | `==1.24.3` -> `==1.24.5` | patch | |
| requests (source, changelog) | `==2.32.0` -> `==2.32.3` | patch | |
| scikit-learn (source, changelog) | `==1.4.2` -> `==1.5.0` | minor | |
| scipy (source) | `==1.13.0` -> `==1.13.1` | patch | |
| streamlit (source, changelog) | `==1.34.0` -> `==1.35.0` | minor | |
| transformers | `==4.40.2` -> `==4.41.2` | minor | |

All lock files refreshed (lockFileMaintenance).

Release Notes

elastic/elasticsearch-py (elasticsearch)

v8.13.2: 8.13.2

Compare Source

  • Added the ml.update_trained_model_deployment API
  • Marked Requests 2.32.2 as incompatible with the Elasticsearch client
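
A hedged sketch of calling the new ML API from the Python client; the cluster address and model id are illustrative, and the number_of_allocations parameter is an assumption based on the Elasticsearch ML docs:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical cluster address

# Update a started trained-model deployment in place, e.g. to rescale it.
client.ml.update_trained_model_deployment(
    model_id="my-model",          # hypothetical deployed model id
    number_of_allocations=2,      # assumed parameter name
)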
carpedm20/emoji (emoji)

v2.12.1

Compare Source

  • typing-extensions now requires at least version 4.7.0 (#297)

v2.12.0

Compare Source

  • Move type annotations inline
  • Use functools.lru_cache for looking up emoji by name with get_emoji_by_name()
  • Move internal functions get_emoji_unicode_dict(), get_aliases_unicode_dict(), _EMOJI_UNICODE and _ALIASES_UNICODE to testutils
  • Add type hints to tests
  • Remove obsolete dev dependency coveralls
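
The caching change is internal; the public API is unchanged. A minimal sketch of the lookup path that get_emoji_by_name() now serves through functools.lru_cache:

import emoji

# emojize() resolves :name: tokens via the (now cached) name lookup.
print(emoji.emojize("Python is :thumbs_up:"))  # -> Python is 👍
print(emoji.demojize("Python is 👍"))          # -> Python is :thumbs_up: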
googleapis/python-aiplatform (google-cloud-aiplatform)

v1.53.0

Compare Source

Features
  • Add a cloneable protocol for Reasoning Engine. (8960a80)
  • Add labels parameter to the supervised tuning train method (f7c5567)
  • Added reboot command for PersistentResource (7785f8c)
  • Added the new GenerationConfig.response_schema field (#3772) (5436d88)
  • Enable Tensorboard profile plugin in all regions by default. (8a4a41a)
  • GenAI - Added the response_schema parameter to the GenerationConfig class (b5e2c02); see the sketch after this entry
  • LLM - Added the seed parameter to the TextGenerationModel's predict methods (cb2f4aa)
Bug Fixes
  • Create run_name when run_name_prefix is not specified for Tensorboard uploader. (ac17d87)
  • GenAI - Tuning - Supervised - Fix adapter_size parameter handling to match enum values. (1cc22c3)
  • Model Monitor console uri. (71fbc81)
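
A hedged sketch of the new response_schema parameter on GenerationConfig mentioned above; the project, model name, and schema are illustrative assumptions, and response_mime_type="application/json" is assumed to be required alongside a schema:

import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.5-pro")  # assumed model name
config = GenerationConfig(
    response_mime_type="application/json",
    response_schema={
        "type": "object",
        "properties": {"answer": {"type": "string"}},
    },
)
response = model.generate_content("Answer in JSON: what is 2 + 2?", generation_config=config)
print(response.text)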

v1.52.0

Compare Source

Features
  • Add FeatureGroup delete (f9011e0)
  • Add support for ToolConfig in the LangChain template (9bda328)
  • Create Vertex Experiment when uploading Tensorboard logs (339f8b6)
  • GenAI - Add BatchPredictionJob for GenAI models (df4a4f2)
  • GenAI - Add cancel, delete, list methods in BatchPredictionJob (7ff8071)
  • GenAI - Added the BatchPredictionJob.submit method (4d091c6)
  • Private Endpoints - Added private service connect support to prediction endpoint. (6bdcfb3)
Bug Fixes
  • Add validation for evaluation dataset fields, update logging info for eval api request count (d6ef500)
  • Fix feature attribution drift visualization for model monitoring SDK (710f33d)
  • Fix the default value of response_column_name in EvalTask.evaluate() (98f9b35)
  • Update get_experiment_df to pass Experiment and allow empty metrics. (de5d0f3)
Documentation
  • Add Vertex Model Monitoring V2 SDK documentation (b47e6ff)
  • Update docstrings for rapid evaluation library. (d6d371d)
pymupdf/pymupdf (pymupdf)

v1.24.5: PyMuPDF-1.24.5 released

Compare Source

PyMuPDF-1.24.5 has been released.

Wheels for Windows, Linux and macOS, and the sdist, are available on pypi.org and can be installed in the usual way, for example:

python -m pip install --upgrade pymupdf

[Linux-aarch64 wheels will be built and uploaded later.]

Changes in version 1.24.5 (2024-05-30)

  • Other:

    • Some more fixes to use MuPDF floating-point formatting.
    • Removed/disabled some unnecessary diagnostics.
    • Fixed utils.do_links() crash.
    • Experimental new functions pymupdf.apply_pages() and pymupdf.get_text(); see the sketch below.
    • Fixed wrong label generation for label styles "a" and "A".
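
The new module-level helpers are explicitly experimental, so here is a hedged sketch using the established per-page API they build on; the input filename is illustrative:

import pymupdf  # imported as "fitz" in older releases

doc = pymupdf.open("example.pdf")  # hypothetical input file
for page in doc:
    print(page.get_text())  # plain-text extraction, page by page
doc.close()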

v1.24.4: PyMuPDF-1.24.4 released

Compare Source

PyMuPDF-1.24.4 has been released.

Wheels for Windows, Linux and macOS, and the sdist, are available on pypi.org and can be installed in the usual way, for example:

python -m pip install --upgrade pymupdf

[Linux-aarch64 wheels will be built and uploaded later.]

Changes in version 1.24.4 (2024-05-16)

  • Fixed #3418

  • Other:

    • Fixed sysinstall test failing to remove all of a prior installation before
      a new install.
    • Fixed utils.do_links() crash.
    • Corrected TextPage creation code.
    • Unified various diagnostics.
    • Fixed a bug in page_merge().
psf/requests (requests)

v2.32.3

Compare Source

Bugfixes

  • Fixed bug breaking the ability to specify custom SSLContexts in sub-classes of
    HTTPAdapter. (#6716)
  • Fixed issue where Requests started failing to run on Python versions compiled
    without the ssl module. (#6724)

v2.32.2

Compare Source

Deprecations

  • To provide a more stable migration for custom HTTPAdapters impacted
    by the CVE changes in 2.32.0, we've renamed _get_connection to
    a new public API, get_connection_with_tls_context. Existing custom
    HTTPAdapters will need to migrate their code to use this new API.
    get_connection is considered deprecated in all versions of Requests>=2.32.0.

    A minimal (2-line) example has been provided in the linked PR to ease
    migration, but we strongly urge users to evaluate if their custom adapter
    is subject to the same issue described in CVE-2024-35195. (#6710)
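
A hedged sketch of that migration for a custom adapter; the subclass and its logging are illustrative, and the keyword signature should be checked against the linked PR:

from requests.adapters import HTTPAdapter

class LoggingAdapter(HTTPAdapter):  # hypothetical custom adapter
    # Pre-2.32.0 subclasses typically overrode get_connection(); that method
    # is deprecated, so override the new public method instead.
    def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):
        print(f"connecting to {request.url}")
        return super().get_connection_with_tls_context(
            request, verify, proxies=proxies, cert=cert
        )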

v2.32.1

Compare Source

Bugfixes

  • Add missing test certs to the sdist distributed on PyPI.
scikit-learn/scikit-learn (scikit-learn)

v1.5.0: Scikit-learn 1.5.0

Compare Source

We're happy to announce the 1.5.0 release.

You can read the release highlights at https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_1_5_0.html and the long version of the change log at https://scikit-learn.org/stable/whats_new/v1.5.html

This version supports Python versions 3.9 to 3.12.

You can upgrade with pip as usual:

pip install -U scikit-learn

The conda-forge builds can be installed using:

conda install -c conda-forge scikit-learn
scipy/scipy (scipy)

v1.13.1: SciPy 1.13.1

Compare Source

SciPy 1.13.1 Release Notes

SciPy 1.13.1 is a bug-fix release with no new features
compared to 1.13.0. The version of OpenBLAS shipped with
the PyPI binaries has been increased to 0.3.27.

Authors

  • h-vetinari (1)
  • Jake Bowhay (2)
  • Evgeni Burovski (6)
  • Sean Cheah (2)
  • Lucas Colley (2)
  • DWesl (2)
  • Ralf Gommers (7)
  • Ben Greiner (1) +
  • Matt Haberland (2)
  • Gregory R. Lee (1)
  • Philip Loche (1) +
  • Sijo Valayakkad Manikandan (1) +
  • Matti Picus (1)
  • Tyler Reddy (62)
  • Atsushi Sakai (1)
  • Daniel Schmitz (2)
  • Dan Schult (3)
  • Scott Shambaugh (2)
  • Edgar Andrés Margffoy Tuay (1)

A total of 19 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.

streamlit/streamlit (streamlit)

v1.35.0

Compare Source

What's Changed


Full Changelog: streamlit/streamlit@1.34.0...1.35.0

huggingface/transformers (transformers)

v4.41.2

Compare Source

Release v4.41.2

Mostly fixes related to trust_remote_code=True and from_pretrained.

The local_files_only code path had trouble when a .safetensors file did not exist. This is not expected, and instead of trying to convert, loading now simply falls back to the .bin files.

v4.41.1: Fix PaliGemma finetuning, and some small bugs

Compare Source

Release v4.41.1

Fix PaliGemma finetuning:

The causal mask and label creation were causing label leaks when training. Kudos to @probicheaux for finding and reporting!

Other fixes:

Reverted huggingface/transformers@4ab7a28

v4.41.0: Phi3, JetMoE, PaliGemma, VideoLlava, Falcon2, FalconVLM & GGUF support

Compare Source

New models
Phi3

The Phi-3 model was proposed in Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone by Microsoft.

TL;DR: Phi-3 introduces new RoPE scaling methods, which seem to scale fairly well. Phi-3-mini is available in two context-length variants, 4K and 128K tokens. It is the first model in its class to support a context window of up to 128K tokens, with little impact on quality.
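
As a hedged illustration of loading the mini checkpoint with transformers (the Hub id microsoft/Phi-3-mini-4k-instruct is an assumption; these notes do not name one):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hub id, 4K-context variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)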

JetMoE

JetMoe-8B is an 8B Mixture-of-Experts (MoE) language model developed by Yikang Shen and MyShell. The JetMoe project aims to provide LLaMA2-level performance from an efficient language model trained on a limited budget. To achieve this goal, JetMoe uses a sparsely activated architecture inspired by ModuleFormer. Each JetMoe block consists of two MoE layers: Mixture of Attention Heads and Mixture of MLP Experts. Given the input tokens, it activates a subset of its experts to process them. This sparse activation scheme enables JetMoe to achieve much better training throughput than similar-size dense models: around 100B tokens per day on a cluster of 96 H100 GPUs with a straightforward 3-way pipeline-parallelism strategy.

PaliGemma

PaliGemma is a lightweight open vision-language model (VLM) inspired by PaLI-3, and based on open components like the SigLIP vision model and the Gemma language model. PaliGemma takes both images and text as inputs and can answer questions about images with detail and context, meaning that PaliGemma can perform deeper analysis of images and provide useful insights, such as captioning for images and short videos, object detection, and reading text embedded within images.

More than 120 checkpoints have been released; see the collection on the Hugging Face Hub.

VideoLlava

Video-LLaVA exhibits remarkable interactive capabilities between images and videos, despite the absence of image-video pairs in the dataset.

  • 💡 Simple baseline: learning a united visual representation by alignment before projection. By binding unified visual representations to the language feature space, an LLM can perform visual reasoning on both images and videos simultaneously.
  • 🔥 High performance: complementary learning with video and images. Extensive experiments demonstrate the complementarity of the modalities, showing significant superiority over models designed specifically for either images or videos.

Falcon 2 and FalconVLM

Two new models from TII-UAE! They published a blog post with more details. Falcon2 introduces parallel MLPs, and FalconVLM uses the LLaVA framework.

GGUF from_pretrained support

You can now load most GGUF quants directly with transformers' from_pretrained, converting them to classic PyTorch models. The API is simple:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
filename = "tinyllama-1.1b-chat-v1.0.Q6_K.gguf"

# Passing gguf_file makes from_pretrained dequantize the GGUF checkpoint into
# regular PyTorch weights before loading.
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)

We plan closer integration with the llama.cpp / GGML ecosystem in the future; see https://github.com/huggingface/transformers/issues/27712 for more details.

Quantization
New quant methods

In this release we support new quantization methods contributed by the community: HQQ and EETQ. Read more about how to quantize any transformers model using HQQ and EETQ in the dedicated documentation section.

dequantize API for bitsandbytes models

In case you want to dequantize models that have been loaded with bitsandbytes, this is now possible through the dequantize API (e.g. to merge adapter weights)

API-wise, you can achieve that with the following:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer

model_id = "facebook/opt-125m"

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=BitsAndBytesConfig(load_in_4bit=True))
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.dequantize()

text = tokenizer("Hello my name is", return_tensors="pt").to(0)

out = model.generate(**text)
print(tokenizer.decode(out[0]))
Generation updates
SDPA support
Improved Object Detection

Addition of fine-tuning script for object detection models

Interpolation of embeddings for vision models

Added interpolation of position embeddings. This enables predictions from pretrained models on input images of sizes different from those the model was originally trained on. Simply pass interpolate_pos_encoding=True when calling the model.

Added for: BLIP, BLIP 2, InstructBLIP, SigLIP, ViViT

import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

image = Image.open(requests.get("https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg", stream=True).raw)
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    torch_dtype=torch.float16,
).to("cuda")

# Request a resolution the model was not pretrained on; interpolate_pos_encoding=True
# interpolates the position embeddings to fit the larger input.
inputs = processor(images=image, size={"height": 500, "width": 500}, return_tensors="pt").to("cuda", torch.float16)

predictions = model.generate(**inputs, interpolate_pos_encoding=True)

# Generated text: "a woman and dog on the beach"
generated_text = processor.batch_decode(predictions, skip_special_tokens=True)[0].strip()

Configuration

📅 Schedule: Branch creation - "before 4am on Monday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

@renovate-bot renovate-bot requested a review from a team as a code owner on May 27, 2024 00:32
@forking-renovate forking-renovate bot added the lang: python label (Issues specific to Python.) on May 27, 2024
@renovate-bot renovate-bot force-pushed the renovate/python branch 2 times, most recently from a8b857a to 6705ae1 on May 27, 2024 07:03
@renovate-bot renovate-bot requested a review from theemadnes as a code owner on May 27, 2024 07:03
@renovate-bot renovate-bot force-pushed the renovate/python branch 11 times, most recently from 3196314 to 061be94 on May 31, 2024 13:22
@renovate-bot renovate-bot force-pushed the renovate/python branch 11 times, most recently from e311ff3 to e335058 on June 7, 2024 19:52
@renovate-bot renovate-bot force-pushed the renovate/python branch 3 times, most recently from 4f51b0a to 7092428 on June 10, 2024 15:18

Edited/Blocked Notification

Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.

You can manually request rebase by checking the rebase/retry box above.

⚠️ Warning: custom changes will be lost.

@bourgeoisor bourgeoisor merged commit d93fd15 into GoogleCloudPlatform:main Jun 10, 2024
10 checks passed
Labels: dependencies, lang: python (Issues specific to Python.)

2 participants