
chore(deps): update python #1400

Merged · 1 commit · Aug 5, 2024

Conversation

renovate-bot (Contributor) commented on Aug 5, 2024

Mend Renovate

This PR contains the following updates:

| Package | Change | Update | Pending |
|---|---|---|---|
| google-cloud-aiplatform | `==1.59.0` -> `==1.60.0` | minor | |
| google-cloud-pubsub | `~=2.22.0` -> `~=2.23.0` | minor | |
| google-cloud-storage | `==2.17.0` -> `==2.18.0` | minor | |
| gradio | `==4.38.1` -> `==4.39.0` | minor | 4.40.0 |
| langchain (changelog) | `==0.2.10` -> `==0.2.11` | patch | 0.2.12 |
| langchain-community (changelog) | `==0.2.9` -> `==0.2.10` | patch | 0.2.11 |
| langchain-google-vertexai | `==1.0.6` -> `==1.0.7` | patch | 1.0.8 |
| pymupdf (changelog) | `==1.24.7` -> `==1.24.9` | patch | |
| ruff (source, changelog) | `>=0.5,<=0.5.4` -> `>=0.5,<=0.5.5` | patch | 0.5.6 |
| streamlit (source, changelog) | `==1.36.0` -> `==1.37.0` | minor | |
| transformers | `==4.42.4` -> `==4.43.3` | minor | 4.43.4 |

All locks refreshed (lockFileMaintenance)
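Note that the `~=` pins above (PEP 440's compatible-release operator) only permit patch-level upgrades, which is why the google-cloud-pubsub bump from `~=2.22.0` to `~=2.23.0` requires editing the pin itself. A minimal sketch of the operator's semantics (illustrative only, not Renovate's or pip's implementation):

```python
# Illustrative sketch of PEP 440's compatible-release operator "~=".
# "~=2.22.0" means ">=2.22.0" and "==2.22.*".
def satisfies_compatible(version: str, spec: str) -> bool:
    """Return True if `version` satisfies `~=spec` (simple X.Y.Z versions only)."""
    v = tuple(int(p) for p in version.split("."))
    s = tuple(int(p) for p in spec.split("."))
    # Must be at least the pinned version, and match on all but the last segment.
    return v >= s and v[: len(s) - 1] == s[:-1]

print(satisfies_compatible("2.22.5", "2.22.0"))  # True: patch bumps allowed
print(satisfies_compatible("2.23.0", "2.22.0"))  # False: minor bump excluded
```

So unlike the `==` pins, a `~=` pin would float to new patch releases on its own, but never to a new minor version.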

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

googleapis/python-aiplatform (google-cloud-aiplatform)

v1.60.0

Compare Source

Features
  • Add preflight validations to PipelineJob submit and run methods. (c5a3535)
  • Add support for langchain v0.2+ package versions in default installation (259b638)
  • GenAI - Added tokenization support via GenerativeModel.compute_tokens (cfe0cc6)
  • GenAI - ContextCaching - allow from_cached_content to take the cached_content resource name (8f53902)
  • Make count_tokens generally available in TextEmbeddingModel. (efb8413)
Bug Fixes
  • Avoid throwing an error when Part.text is empty in modality content checks (bbd4a49)
  • Correct logit_bias type annotation to accept keys as strings (2676d25)
  • Create FV embedding dimensions sample - dimensions should be an int (2aa221e)
  • Fix the sync option for Model Monitor job creation (22151e2)
  • Include DeploymentResourcePool class in aiplatform top-level sdk module (ecc4f09)
  • Overriding the current TracerProvider when enabling tracing (1476c10)
  • Pass the project ID from vertexai.init to CloudTraceSpanExporter when enable_tracing=True for LangchainAgent (3ec043e)
Documentation
  • GenAI - Update README.md for Vertex Generative AI SDK for Python to add subsections to the right nav. (42af742)
googleapis/python-pubsub (google-cloud-pubsub)

v2.23.0

Compare Source

Features
  • Add max messages batching for Cloud Storage subscriptions (#​1224) (91c89d3)
googleapis/python-storage (google-cloud-storage)

v2.18.0

Compare Source

Features
  • Add OpenTelemetry Tracing support as a preview feature (#​1288) (c2ab0e0)
gradio-app/gradio (gradio)

v4.39.0

Compare Source

pymupdf/pymupdf (pymupdf)

v1.24.9: PyMuPDF-1.24.9 released

Compare Source

PyMuPDF-1.24.9 has been released.

Wheels for Windows, Linux and MacOS, and the sdist, are available on pypi.org and can be installed in the usual way, for example:

python -m pip install --upgrade pymupdf

[Linux-aarch64 wheels will be built and uploaded later.]

Changes in version 1.24.9 (2024-07-24)

  • Incremented MuPDF version to 1.24.8.

v1.24.8: PyMuPDF-1.24.8 released

Compare Source

PyMuPDF-1.24.8 has been released.

Wheels for Windows, Linux and MacOS, and the sdist, are available on pypi.org and can be installed in the usual way, for example:

python -m pip install --upgrade pymupdf

[Linux-aarch64 wheels will be built and uploaded later.]

Changes in version 1.24.8 (2024-07-22)

Other:

  • Fixed various spelling mistakes spotted by codespell.
  • Improved how we modify MuPDF's default configuration on Windows.
  • Make text search work with ligatures.
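The ligature item above addresses a common PDF pitfall: extracted text often contains single ligature glyphs (e.g. U+FB01 for "fi") that a naive substring search misses. A stdlib sketch of the idea (illustrative, not PyMuPDF's actual implementation):

```python
# Why ligature-aware search matters: PDFs often render "fi" as the
# single ligature glyph U+FB01, so a naive substring search misses it.
import unicodedata

extracted = "arti\ufb01cial"  # text extracted with the "fi" ligature glyph

naive_hit = "artificial" in extracted  # False: U+FB01 is not the two chars "fi"

# NFKC normalization folds compatibility ligatures back to plain letters.
folded = unicodedata.normalize("NFKC", extracted)
ligature_aware_hit = "artificial" in folded  # True after folding

print(naive_hit, ligature_aware_hit)  # False True
```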
astral-sh/ruff (ruff)

v0.5.5

Compare Source

Preview features
  • [fastapi] Implement fastapi-redundant-response-model (FAST001) and fastapi-non-annotated-dependency (FAST002) (#​11579)
  • [pydoclint] Implement docstring-missing-exception (DOC501) and docstring-extraneous-exception (DOC502) (#​11471)
Rule changes
  • [numpy] Fix NumPy 2.0 rule for np.alltrue and np.sometrue (#​12473)
  • [numpy] Ignore NPY201 inside except blocks for compatibility with older numpy versions (#​12490)
  • [pep8-naming] Avoid applying ignore-names to self and cls function names (N804, N805) (#​12497)
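The NPY201 change above is about a common compatibility idiom: code that tries a removed NumPy 1.x spelling and falls back inside an `except` block when running on NumPy 2.0. A sketch of the pattern (using a stand-in object rather than numpy itself, so this is purely illustrative):

```python
# Stand-in for a NumPy 2.0 module: the 1.x alias `alltrue` is gone,
# only `all` remains. Ruff's NPY201 now ignores the fallback branch.
class FakeNumpy2:
    @staticmethod
    def all(a):
        return all(a)

np = FakeNumpy2()

try:
    result = np.alltrue([True, True])  # NumPy 1.x spelling, removed in 2.0
except AttributeError:
    result = np.all([True, True])      # NumPy 2.0 replacement

print(result)  # True
```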
Formatter
  • Fix incorrect placement of leading function comment with type params (#​12447)
Server
  • Do not bail code action resolution when a quick fix is requested (#​12462)
Bug fixes
  • Fix Ord implementation of cmp_fix (#​12471)
  • Raise syntax error for unparenthesized generator expression in multi-argument call (#​12445)
  • [pydoclint] Fix panic in DOC501 reported in #​12428 (#​12435)
  • [flake8-bugbear] Allow singleton tuples with starred expressions in B013 (#​12484)
Documentation
  • Add Eglot setup guide for Emacs editor (#​12426)
  • Add note about the breaking change in nvim-lspconfig (#​12507)
  • Add note to include notebook files for native server (#​12449)
  • Add setup docs for Zed editor (#​12501)
streamlit/streamlit (streamlit)

v1.37.0

Compare Source

What's Changed

New Features 🎉
Bug Fixes 🐛
Other Changes

New Contributors

Full Changelog: streamlit/streamlit@1.36.0...1.37.0

huggingface/transformers (transformers)

v4.43.3: Patch deepspeed

Compare Source

Patch release v4.43.3:
We still saw some bugs, so @​zucchini-nlp added:

Other fixes:

  • [whisper] fix short-form output type #​32178, by @​sanchit-gandhi which fixes the short audio temperature fallback!
  • [BigBird Pegasus] set _supports_param_buffer_assignment to False #​32222 by @​kashif, mostly related to the new super-fast init; some models have to get this set to False. If you see weird behavior, look for that 😉

v4.43.2: Patch release

Compare Source

  • Fix float8_e4m3fn in modeling_utils (#​32193)
  • Fix resize embedding with Deepspeed (#​32192)
  • let's not warn when someone is running a forward (#​32176)
  • RoPE: relaxed rope validation (#​32182)

v4.43.1: Patch release

Compare Source

v4.43.0: Llama 3.1, Chameleon, ZoeDepth, Hiera

Compare Source

Llama

The Llama 3.1 models are released by Meta and come in three flavours: 8B, 70B, and 405B.

To get an overview of Llama 3.1, please visit the Hugging Face announcement blog post.

We are releasing a repository of llama recipes to showcase usage for inference, and full and partial fine-tuning of the different variants.


Chameleon

The Chameleon model was proposed in Chameleon: Mixed-Modal Early-Fusion Foundation Models by the META AI Chameleon Team. Chameleon is a Vision-Language Model that uses vector quantization to tokenize images, which enables the model to generate multimodal output. The model takes images and text as input, including an interleaved format, and generates textual responses.

ZoeDepth

The ZoeDepth model was proposed in ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth by Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, Matthias Müller. ZoeDepth extends the DPT framework for metric (also called absolute) depth estimation. ZoeDepth is pre-trained on 12 datasets using relative depth and fine-tuned on two domains (NYU and KITTI) using metric depth. A lightweight head is used with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier.

Hiera

Hiera was proposed in Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles by Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, and Christoph Feichtenhofer.

The paper introduces “Hiera,” a hierarchical Vision Transformer that simplifies the architecture of modern hierarchical vision transformers by removing unnecessary components without compromising on accuracy or efficiency. Unlike traditional transformers that add complex vision-specific components to improve supervised classification performance, Hiera demonstrates that such additions, often termed “bells-and-whistles,” are not essential for high accuracy. By leveraging a strong visual pretext task (MAE) for pretraining, Hiera retains simplicity and achieves superior accuracy and speed both in inference and training across various image and video recognition tasks. The approach suggests that spatial biases required for vision tasks can be effectively learned through proper pretraining, eliminating the need for added architectural complexity.

Agents

Our ReactAgent has a specific way to return its final output: it calls the tool final_answer, added to the user-defined toolbox upon agent initialization, with the answer as the tool argument. We found that even for a one-shot agent like CodeAgent, using a specific final_answer tool helps the llm_engine find what to return, so we generalized the final_answer tool for all agents.

Now if your code-based agent (like ReactCodeAgent) defines a function at step 1, it will remember the function definition indefinitely. This means your agent can create its own tools for later re-use!
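One way this "remember definitions across steps" behavior can be realized (a minimal sketch with assumed structure, not the actual ReactCodeAgent code) is to execute each step's generated code in a single shared namespace:

```python
# Sketch: executing every step's code in one shared namespace means a
# function defined at step 1 is still callable at step 2 and beyond.
shared_ns: dict = {}

step1_code = "def double(x):\n    return 2 * x"   # agent defines a tool
step2_code = "result = double(21)"                 # later step reuses it

exec(step1_code, shared_ns)
exec(step2_code, shared_ns)

print(shared_ns["result"])  # 42
```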

This is a transformative PR: it allows the agent to regularly run a specific step for planning its actions in advance. This gets activated if you set an int for planning_interval upon agent initialization. At step 0, a first plan will be done. At later steps (like steps 3, 6, 9 if you set planning_interval=3 ), this plan will be updated by the agent depending on the history of previous steps. More detail soon!
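The planning cadence described above can be sketched as a simple modulo check (assumed logic for illustration, not the transformers implementation):

```python
# Plan at step 0, then re-plan every `planning_interval` steps.
def planning_steps(total_steps: int, planning_interval: int) -> list:
    """Return the step indices at which the agent (re)plans."""
    return [s for s in range(total_steps) if s % planning_interval == 0]

print(planning_steps(10, 3))  # [0, 3, 6, 9], matching the example above
```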

Notable changes to the codebase

A significant RoPE refactor was done to make it model agnostic and more easily adaptable to any architecture.
It is only applied to Llama for now but will be applied to all models using RoPE over the coming days.

Breaking changes

TextGenerationPipeline and tokenizer kwargs

🚨🚨 This PR changes the code to rely on the tokenizer's defaults when these flags are unset. This means some models using TextGenerationPipeline previously did not add a <bos> by default, which (negatively) impacted their performance. In practice, this is a breaking change.

Example of a script changed as a result of this PR:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it", torch_dtype=torch.bfloat16, device_map="auto")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Foo bar"))
  • 🚨🚨 TextGenerationPipeline: rely on the tokenizer default kwargs by @​gante in #​31747
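A toy illustration of why this is breaking (a mock tokenizer, not the real transformers API): the pipeline now honors the tokenizer's own add_special_tokens default instead of effectively disabling it, so models whose tokenizer default adds a `<bos>` token now produce different inputs.

```python
# Mock tokenizer illustrating the behavior change: pre-4.43 the pipeline
# effectively encoded without special tokens; 4.43+ follows the
# tokenizer's own default, which here prepends <bos>.
class ToyTokenizer:
    bos_token_id = 1

    def encode(self, text, add_special_tokens=True):  # tokenizer's default
        ids = [ord(c) for c in text]
        return [self.bos_token_id] + ids if add_special_tokens else ids

tok = ToyTokenizer()
old_behavior = tok.encode("hi", add_special_tokens=False)  # no <bos>
new_behavior = tok.encode("hi")                            # <bos> prepended

print(old_behavior[0] == tok.bos_token_id)  # False
print(new_behavior[0] == tok.bos_token_id)  # True
```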

Bugfixes and improvements


Configuration

📅 Schedule: Branch creation - "before 4am on Monday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@forking-renovate bot added the dependencies and lang: python (Issues specific to Python) labels on Aug 5, 2024
@bourgeoisor bourgeoisor merged commit deddfcc into GoogleCloudPlatform:main Aug 5, 2024
4 checks passed