
[Dependency] Bump onnxruntime from 1.15.1 to 1.16.3 in /.setup/pip #1048

Open · dependabot[bot] wants to merge 1 commit into master

Conversation
@dependabot dependabot[bot] commented on behalf of GitHub on Dec 1, 2023

Bumps onnxruntime from 1.15.1 to 1.16.3.
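
For reviewers, a quick way to confirm an environment has picked up the bumped package (a minimal sketch; assumes onnxruntime is importable locally):

```python
import onnxruntime

# Expected to print "1.16.3" once this PR is merged and dependencies are reinstalled.
print(onnxruntime.__version__)
```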

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.16.3

What's Changed

  1. Stable Diffusion XL demo update by @​tianleiwu in microsoft/onnxruntime#18496
  2. Fixed a memory leak issue (#18466) in TensorRT EP by @chilo-ms in microsoft/onnxruntime#18467
  3. Fix a use-after-free bug in the SaveInputOutputNamesToNodeMapping function by @snnn in microsoft/onnxruntime#18456. The issue was found by AddressSanitizer.

ONNX Runtime v1.16.2

The patch release includes updates on:

  • Performance optimizations for Llama2 on CUDA EP and DirectML EP
  • Performance optimizations for Stable Diffusion XL model for CUDA EP
    • Demos for text-to-image generation
  • Mobile bug fixes for a crash on some older 64-bit ARM devices and an AOT inlining issue on iOS with C# bindings
  • TensorRT EP bug fixes for user provided compute stream and stream synchronization

ONNX Runtime v1.16.1

Patch release for 1.16

  • Fix type of weights and activations in the ONNX quantizer
  • Fix quantization bug in historic quantizer #17619
  • Enable session option access in nodejs API
  • Update nodejs to v18
  • Align ONNX Runtime extensions inclusion in source and build
  • Limit per thread context to 1 in the TensorRT EP to avoid error caused by input shape changes

ONNX Runtime v1.16.0

General

  • Support for serialization of models >=2GB

APIs

  • New session option to disable the default CPU EP fallback, session.disable_cpu_ep_fallback (see the sketch after this list)
  • Java
    • Support for fp16 and bf16 tensors as inputs and outputs, along with utilities to convert between these and fp32 data. On JDK 20 and newer the fp16 conversion methods use the JDK's Float.float16ToFloat and Float.floatToFloat16 methods which can be hardware accelerated and vectorized on some platforms.
    • Support for external initializers so that large models can be instantiated without filesystem access
  • C#
    • Expose OrtValue API as the new preferred API to run inference in C#. This reduces garbage and exposes direct native memory access via Slice like interfaces.
    • Make Float16 and BFloat16 full featured fp16 interfaces that support conversion and expose floating properties (e.g. IsNaN, IsInfinity, etc)
  • C++
    • Make Float16_t and BFloat16_t full featured fp16 interfaces that support conversion and expose floating properties (e.g. IsNaN, IsInfinity, etc)
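
For reference, a minimal Python sketch of how the new session option named above could be set. The config key session.disable_cpu_ep_fallback comes from these release notes; the model path and the CUDA execution provider are illustrative assumptions, not part of this PR:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Config key named in the 1.16.0 release notes; "1" disables the default
# CPU EP fallback so session creation fails rather than silently running
# unsupported nodes on the CPU execution provider.
so.add_session_config_entry("session.disable_cpu_ep_fallback", "1")

# "model.onnx" and CUDAExecutionProvider are placeholders for illustration.
sess = ort.InferenceSession(
    "model.onnx",
    sess_options=so,
    providers=["CUDAExecutionProvider"],
)
```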

Performance

  • Improve LLM quantization accuracy with smoothquant

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [onnxruntime](https://github.com/microsoft/onnxruntime) from 1.15.1 to 1.16.3.
- [Release notes](https://github.com/microsoft/onnxruntime/releases)
- [Changelog](https://github.com/microsoft/onnxruntime/blob/main/docs/ReleaseManagement.md)
- [Commits](microsoft/onnxruntime@v1.15.1...v1.16.3)

---
updated-dependencies:
- dependency-name: onnxruntime
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] (Author) commented on behalf of GitHub on Dec 1, 2023

The following labels could not be found: dependencies.


codecov-commenter commented Dec 1, 2023

Codecov Report

Merging #1048 (d01bd64) into master (00d119c) will decrease coverage by 0.08%.
Report is 397 commits behind head on master.
The diff coverage is 14.44%.

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files

Impacted file tree graph

@@             Coverage Diff              @@
##             master    #1048      +/-   ##
============================================
- Coverage     23.32%   23.25%   -0.08%     
- Complexity     7818     8232     +414     
============================================
  Files           220      227       +7     
  Lines         27303    29363    +2060     
  Branches         70       73       +3     
============================================
+ Hits           6369     6828     +459     
- Misses        20866    22465    +1599     
- Partials         68       70       +2     
Flag                    Coverage Δ
autograder              22.15% <59.25%> (+0.58%) ⬆️
js                      27.51% <ø> (-0.96%) ⬇️
migrator                100.00% <100.00%> (ø)
php                     20.15% <6.88%> (-0.06%) ⬇️
python_submitty_utils   71.65% <ø> (ø)
submitty_daemon_jobs    91.01% <ø> (ø)

Flags with carried forward coverage won't be shown.
