
chore(deps): update all dependencies #296

Merged
merged 1 commit into main from renovate/all on Apr 22, 2024

Conversation

platform-engineering-bot
Collaborator

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| pydantic (changelog) | | minor | ==2.6.4 -> ==2.7.0 |
| pydantic_core | | minor | ==2.16.3 -> ==2.18.1 |
| regex | | major | ==2023.12.25 -> ==2024.4.16 |
| slackapi/slack-github-action | action | minor | v1.25.0 -> v1.26.0 |
| tokenizers | | minor | ==0.15.2 -> ==0.19.1 |
| transformers | | minor | ==4.39.3 -> ==4.40.0 |

Release Notes

pydantic/pydantic (pydantic)

v2.7.0


The code released in v2.7.0 is practically identical to that of v2.7.0b1.

What's Changed
Packaging
New Features

Finalized in v2.7.0, rather than v2.7.0b1:

Changes
Performance
Fixes
New Contributors
pydantic/pydantic-core (pydantic_core)

v2.18.1: 2024-04-11


What's Changed

New Contributors

Full Changelog: pydantic/pydantic-core@v2.18.0...v2.18.1

v2.18.0: 2024-04-02


What's Changed

New Contributors

Full Changelog: pydantic/pydantic-core@v2.17.0...v2.18.0

v2.17.0


What's Changed

Packaging
Fixes
Performance
New Features
Changes

New Contributors

Full Changelog: pydantic/pydantic-core@v2.16.3...v2.17.0

mrabarnett/mrab-regex (regex)

v2024.4.16


slackapi/slack-github-action (slackapi/slack-github-action)

v1.26.0: Slack Send V1.26.0


What's Changed

This release provides an escape hatch for sending the JSON content of a payload file exactly as is, without replacing any templated variables!

Previously, a payload file was parsed and templated variables were replaced with values from github.context and github.env. Any undefined variables were replaced with ??? in this process, which might have caused confusion.

That remains the default behavior, but now the JSON contents of a payload file can be sent exactly as written by setting the payload-file-path-parsed input to false:

```yaml
- name: Send custom JSON data to Slack workflow
  id: slack
  uses: slackapi/slack-github-action@v1.26.0
  with:
    payload-file-path: "./payload-slack-content.json"
    payload-file-path-parsed: false
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```

With this change, the contents of the example payload-slack-content.json will be sent to a webhook URL exactly as is!

Recent commits
Enhancements
Documentation
Maintenance
Dependencies
New Contributors

Full Changelog: slackapi/slack-github-action@v1.25.0...v1.26.0

huggingface/tokenizers (tokenizers)

v0.19.1


What's Changed

Full Changelog: huggingface/tokenizers@v0.19.0...v0.19.1

v0.19.0


What's Changed

Full Changelog: huggingface/tokenizers@v0.15.2...v0.19.0

huggingface/transformers (transformers)

v4.40.0: Llama 3, Idefics 2, Recurrent Gemma, Jamba, DBRX, OLMo, Qwen2MoE, Grounding Dino

New model additions

Llama 3

Llama 3 is supported in this release through the Llama 2 architecture and some fixes in the tokenizers library.
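Because support comes through the existing Llama architecture, no new classes are required and a Llama 3 checkpoint loads through the standard auto classes. The sketch below is illustrative only and not part of the release notes; the checkpoint id meta-llama/Meta-Llama-3-8B is an assumption (a gated repository on the Hub).

```python
# Minimal sketch: Llama 3 loads through the existing Llama classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint id (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```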

Idefics2

The Idefics2 model was created by the Hugging Face M4 team and authored by Léo Tronchon, Hugo Laurencon, Victor Sanh. The accompanying blog post can be found here.

Idefics2 is an open multimodal model that accepts arbitrary sequences of image and text inputs and produces text outputs. The model can answer questions about images, describe visual content, create stories grounded on multiple images, or simply behave as a pure language model without visual inputs. It improves upon IDEFICS-1, notably on document understanding, OCR, or visual reasoning. Idefics2 is lightweight (8 billion parameters) and treats images in their native aspect ratio and resolution, which allows for varying inference efficiency.
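A rough, hedged sketch of driving such an image-plus-text model through the transformers auto classes; the checkpoint id HuggingFaceM4/idefics2-8b and the image URL are assumptions, not taken from the notes above:

```python
# Illustrative sketch only; checkpoint id and image URL are assumptions.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceM4/idefics2-8b"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open(requests.get("https://example.com/cat.png", stream=True).raw)  # placeholder URL
messages = [{"role": "user",
             "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```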

Recurrent Gemma

Recurrent Gemma architecture. Taken from the original paper.

The Recurrent Gemma model was proposed in RecurrentGemma: Moving Past Transformers for Efficient Open Language Models by the Griffin, RLHF and Gemma Teams of Google.

The abstract from the paper is the following:

We introduce RecurrentGemma, an open language model which uses Google’s novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.
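Since the model exposes the usual causal-LM interface, a minimal usage sketch looks like the following; the checkpoint id google/recurrentgemma-2b is an assumption:

```python
# Illustrative sketch; the checkpoint id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/recurrentgemma-2b"  # assumed checkpoint id for the 2B base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Griffin combines linear recurrences with", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```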

Jamba

Jamba is a pretrained, mixture-of-experts (MoE) generative text model, with 12B active parameters and a total of 52B parameters across all experts. It supports a 256K context length and can fit up to 140K tokens on a single 80GB GPU.

As depicted in the diagram below, Jamba’s architecture features a blocks-and-layers approach that allows Jamba to integrate the Transformer and Mamba architectures. Each Jamba block contains either an attention or a Mamba layer, followed by a multi-layer perceptron (MLP), producing an overall ratio of one Transformer layer out of every eight total layers.

Jamba introduces the first HybridCache object that allows it to natively support assisted generation, contrastive search, speculative decoding, beam search and all of the awesome features from the generate API!
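A hedged sketch of running Jamba through generate(), which is where the hybrid cache comes into play; the checkpoint id ai21labs/Jamba-v0.1 is an assumption, and beam search is shown only as one of the decoding strategies mentioned above:

```python
# Illustrative sketch; the checkpoint id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hybrid Transformer-Mamba models are interesting because", return_tensors="pt").to(model.device)
# Beam search is one of the generate() features supported via the hybrid cache.
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```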

DBRX

DBRX is a transformer-based decoder-only large language model (LLM) that was trained using next-token prediction. It uses a fine-grained mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input.

It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.

This provides 65x more possible combinations of experts and the authors found that this improves model quality. DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
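For illustration only, DBRX is driven through the standard causal-LM and chat-template APIs; the checkpoint id databricks/dbrx-instruct is an assumption, and the model is far too large for most single machines:

```python
# Illustrative sketch; checkpoint id is an assumption and the model is very large.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-instruct"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is a fine-grained mixture-of-experts model?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```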

OLMo

The OLMo model was proposed in OLMo: Accelerating the Science of Language Models by Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi.

OLMo is a series of Open Language Models designed to enable the science of language models. The OLMo models are trained on the Dolma dataset. We release all code, checkpoints, logs (coming soon), and details involved in training these models.
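A minimal, hedged sketch of loading an OLMo checkpoint with the auto classes; the checkpoint id allenai/OLMo-1B-hf is an assumption:

```python
# Illustrative sketch; the checkpoint id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-1B-hf"  # assumed checkpoint id in the transformers-native format
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Open language models enable", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```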

Qwen2MoE

Qwen2MoE is the new model series of large language models from the Qwen team. Previously, we released the Qwen series, including Qwen-72B, Qwen-1.8B, Qwen-VL, Qwen-Audio, etc.

Model Details
Qwen2MoE is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. Qwen2MoE has the following architectural choices:

- Qwen2MoE is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding-window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
- Qwen2MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, Qwen1.5-MoE-A2.7B is upcycled from Qwen-1.8B. It has 14.3B parameters in total and 2.7B activated parameters at runtime, while achieving performance comparable to Qwen1.5-7B with only 25% of the training resources.
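As a rough sketch, the upcycled MoE model mentioned above is used like any other causal LM; the Hub organization prefix "Qwen/" in the checkpoint id is an assumption, while the model name comes from the notes above:

```python
# Illustrative sketch; the "Qwen/" org prefix is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Mixture-of-experts models activate only", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```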

Grounding Dino

Taken from the original paper.

The Grounding DINO model was proposed in Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang. Grounding DINO extends a closed-set object detection model with a text encoder, enabling open-set object detection. The model achieves remarkable results, such as 52.5 AP on COCO zero-shot.
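A hedged sketch of text-conditioned (open-set) detection; the checkpoint id IDEA-Research/grounding-dino-tiny, the image URL, and the thresholds are assumptions:

```python
# Illustrative sketch; checkpoint id, image URL and thresholds are assumptions.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/cats.png", stream=True).raw)  # placeholder URL
text = "a cat. a remote control."  # text queries: lower-cased, separated by periods

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids,
    box_threshold=0.4, text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
print(results[0]["labels"], results[0]["boxes"])
```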

Static pretrained maps

Static pretrained maps have been removed from the library's internals and are currently deprecated. These used to reflect all the available checkpoints for a given architecture on the Hugging Face Hub, but their presence no longer makes sense in light of the huge growth of checkpoints shared by the community.

With the objective of lowering the bar for model contributions and reviews, we start by removing legacy objects such as this one, which no longer serve a purpose.

Notable improvements

Processors improvements

Processors are undergoing changes in order to make them more uniform and clearer to use.

SDPA

Push to Hub for pipelines

Pipelines can now be pushed to Hub using a convenient push_to_hub method.
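For example (the model id and target repository name below are placeholders, not taken from the release notes):

```python
# Illustrative sketch; model id and target repo name are placeholders.
from transformers import pipeline

pipe = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
# Requires being logged in to the Hub (e.g. `huggingface-cli login`).
pipe.push_to_hub("my-username/my-sentiment-pipeline")
```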

Flash Attention 2 for more models (M2M100, NLLB, GPT2, MusicGen)!

Thanks to community contributions, Flash Attention 2 has been integrated for more architectures.
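Opting in follows the existing pattern of passing attn_implementation="flash_attention_2" at load time; the sketch below uses an NLLB checkpoint as one of the newly covered architectures and assumes the flash-attn package and a supported GPU are available:

```python
# Illustrative sketch; requires the flash-attn package and a supported GPU.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/nllb-200-distilled-600M",  # NLLB/M2M100 are among the newly covered architectures
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to("cuda")
```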

Improvements and bugfixes


Configuration

📅 Schedule: Branch creation - "before 4am on Monday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

Signed-off-by: Platform Engineering Bot <platform-engineering@redhat.com>
@rhatdan
Member

rhatdan commented Apr 22, 2024

LGTM

@rhatdan rhatdan merged commit 74c0e13 into main Apr 22, 2024
1 check passed
@platform-engineering-bot platform-engineering-bot deleted the renovate/all branch April 22, 2024 13:00