Create patch release r0.8.2 (#2326)
* Move keras-cv markdown files to toplevel (#2291)

Keras, KerasNLP, and KerasTuner all store files like CONTRIBUTING.md
at the top level of the repo. We should do the same here.

* Add `version()` API to unify with Keras and KerasNLP (#2199)

* Unify `version` API with keras and keras_nlp

* Formatting

* Update to keep `version` parity with KerasNLP, support nightly version string

* Update version_utils.py

* Update version_utils.py

* Update random_crop_and_zoom.py (#2294)

* Update random_crop_and_zoom.py

* Update description

* rename file

* bug fix (#2303)

* Add BASNet Segmentation Model (#2006) (#2271)

* BASNet model initial code structure

* adding test and initial preset details

* adding comments

* cleaning and formatting code

* keras 3 support added

* disabling preset test for BASNet

* Fix image.shape type (#2305)

Fixed the `image.shape` type issue for the TensorFlow backend

* Create workflow for auto assignment of issues and for stale issues (#2313)

* Create auto-assignment.yaml

* Create auto-assignment.js

* Create stale-issue-pr.yaml

* Rename auto-assignment.yaml to auto-assignment.yml

* Rename stale-issue-pr.yaml to stale-issue-pr.yml

* Fix format and Update Vectorized Base (#2323)

* Fix CI Test for Basnet OOM and PyCoCo Test Failure for JAX (#2322)

* Reduce memory consumption for BasNet tests (#2325)

---------

Co-authored-by: Matt Watson <[email protected]>
Co-authored-by: Gabriel Rasskin <[email protected]>
Co-authored-by: Sachin Prasad <[email protected]>
Co-authored-by: Haifeng Jin <[email protected]>
Co-authored-by: Hamid Ali <[email protected]>
Co-authored-by: Tirth Patel <[email protected]>
7 people authored Jan 31, 2024
1 parent 0e2c479 commit 7eee38a
Showing 38 changed files with 914 additions and 64 deletions.
21 changes: 21 additions & 0 deletions .github/workflows/auto-assignment.yml
@@ -0,0 +1,21 @@
name: auto-assignment
on:
  issues:
    types:
      - opened

permissions:
  contents: read
  issues: write
  pull-requests: write

jobs:
  welcome:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/github-script@v7
        with:
          script: |
            const script = require('./.github/workflows/scripts/auto-assignment.js')
            script({github, context})
43 changes: 43 additions & 0 deletions .github/workflows/scripts/auto-assignment.js
@@ -0,0 +1,43 @@
/** Automatically assign issues and PRs to users in the `assigneesList`
 * on a rotating basis.
 * @param {!object} github An authenticated GitHub client that can call
 *   GitHub's REST APIs via its built-in library functions.
 * @param {!object} context The workflow context, containing issue and PR details.
 */

module.exports = async ({ github, context }) => {
  let issueNumber;
  let assigneesList;
  // Is this an issue? If so, assign the issue number. Otherwise, assign the PR number.
  if (context.payload.issue) {
    // Assignee list for issues.
    assigneesList = ["SuryanarayanaY", "sachinprasadhs"];
    issueNumber = context.payload.issue.number;
  } else {
    // Assignee list for PRs.
    assigneesList = [];
    issueNumber = context.payload.number;
  }
  console.log("assignee list", assigneesList);
  console.log("entered auto assignment for this issue: ", issueNumber);
  if (!assigneesList.length) {
    console.log("No assignees found for this repo.");
    return;
  }
  // Rotate through the assignee list based on the issue/PR number.
  let noOfAssignees = assigneesList.length;
  let selection = issueNumber % noOfAssignees;
  let assigneeForIssue = assigneesList[selection];

  console.log(
    "issue Number = ",
    issueNumber + " , assigning to: ",
    assigneeForIssue
  );
  return github.rest.issues.addAssignees({
    issue_number: context.issue.number,
    owner: context.repo.owner,
    repo: context.repo.repo,
    assignees: [assigneeForIssue],
  });
};
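Not part of the commit: the rotation above is plain modular arithmetic — the issue or PR number modulo the list length picks the assignee, so consecutive issues cycle through the list. A minimal Python sketch of the same selection logic, with the function name being hypothetical:

def pick_assignee(issue_number, assignees):
    # Rotate through the list: consecutive issue numbers
    # cycle through consecutive assignees.
    if not assignees:
        return None
    return assignees[issue_number % len(assignees)]

# Consecutive issues land on different assignees.
assert pick_assignee(2325, ["SuryanarayanaY", "sachinprasadhs"]) == "sachinprasadhs"
assert pick_assignee(2326, ["SuryanarayanaY", "sachinprasadhs"]) == "SuryanarayanaY"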
50 changes: 50 additions & 0 deletions .github/workflows/stale-issue-pr.yml
@@ -0,0 +1,50 @@
name: Close inactive issues
on:
  schedule:
    - cron: "30 1 * * *"
jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - name: Awaiting response issues
        uses: actions/stale@v9
        with:
          days-before-issue-stale: 14
          days-before-issue-close: 14
          stale-issue-label: "stale"
          # Reason for closing the issue; the default value is not_planned.
          close-issue-reason: completed
          only-labels: "stat:awaiting response from contributor"
          stale-issue-message: >
            This issue is stale because it has been open for 14 days with no activity.
            It will be closed if no further activity occurs. Thank you.
          # List of labels to remove when issues/PRs unstale.
          labels-to-remove-when-unstale: "stat:awaiting response from contributor"
          close-issue-message: >
            This issue was closed because it has been inactive for 28 days.
            Please reopen if you'd like to work on this further.
          days-before-pr-stale: 14
          days-before-pr-close: 14
          stale-pr-message: "This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you."
          close-pr-message: "This PR was closed because it has been inactive for 28 days. Please reopen if you'd like to work on this further."
          repo-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Contribution issues
        uses: actions/stale@v9
        with:
          days-before-issue-stale: 180
          days-before-issue-close: 365
          stale-issue-label: "stale"
          # Reason for closing the issue; the default value is not_planned.
          close-issue-reason: not_planned
          any-of-labels: "stat:contributions welcome,good first issue"
          # List of labels to remove when issues/PRs unstale.
          labels-to-remove-when-unstale: "stat:contributions welcome,good first issue"
          stale-issue-message: >
            This issue is stale because it has been open for 180 days with no activity.
            It will be closed if no further activity occurs. Thank you.
          close-issue-message: >
            This issue was closed because it has been inactive for more than 1 year.
          repo-token: ${{ secrets.GITHUB_TOKEN }}
4 changes: 2 additions & 2 deletions .kokoro/github/ubuntu/gpu/build.sh
@@ -51,7 +51,7 @@ pip install --no-deps -e "." --progress-bar off
 # Run Extra Large Tests for Continuous builds
 if [ "${RUN_XLARGE:-0}" == "1" ]
 then
-   pytest --check_gpu --run_large --run_extra_large --durations 0 \
+   pytest --cache-clear --check_gpu --run_large --run_extra_large --durations 0 \
      keras_cv/bounding_box \
      keras_cv/callbacks \
      keras_cv/losses \
@@ -65,7 +65,7 @@ then
      keras_cv/models/segmentation \
      keras_cv/models/stable_diffusion
 else
-   pytest --check_gpu --run_large --durations 0 \
+   pytest --cache-clear --check_gpu --run_large --durations 0 \
      keras_cv/bounding_box \
      keras_cv/callbacks \
      keras_cv/losses \
File renamed without changes.
File renamed without changes.
File renamed without changes.
8 changes: 4 additions & 4 deletions benchmarks/vectorized_randomly_zoomed_crop.py
@@ -249,10 +249,10 @@ def from_config(cls, config):
             config["zoom_factor"]
         )
         if isinstance(config["aspect_ratio_factor"], dict):
-            config[
-                "aspect_ratio_factor"
-            ] = keras.utils.deserialize_keras_object(
-                config["aspect_ratio_factor"]
+            config["aspect_ratio_factor"] = (
+                keras.utils.deserialize_keras_object(
+                    config["aspect_ratio_factor"]
+                )
             )
         return cls(**config)
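The hunk above is pure formatting, but the pattern it touches deserves a note: `from_config` receives nested Keras objects as plain dicts after serialization, and the `isinstance(..., dict)` guard rebuilds them with `deserialize_keras_object` before calling the constructor. A standalone sketch of the round-trip, using a stand-in `ReLU` layer rather than a real factor sampler, and the Keras 3 `keras.saving` spelling (the keras_cv code above reaches the same functions through its backend's `keras.utils`):

import keras

# Serialize a nested object the way get_config() would store it.
config = {
    "aspect_ratio_factor": keras.saving.serialize_keras_object(
        keras.layers.ReLU()  # stand-in; the real config holds a FactorSampler
    ),
}

# The guard from from_config: dicts are serialized objects, rebuild them.
if isinstance(config["aspect_ratio_factor"], dict):
    config["aspect_ratio_factor"] = keras.saving.deserialize_keras_object(
        config["aspect_ratio_factor"]
    )

assert isinstance(config["aspect_ratio_factor"], keras.layers.ReLU)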
@@ -12,8 +12,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-"""random_resized_crop_demo.py.py shows how to use the RandomResizedCrop
-preprocessing layer. Operates on an image of elephant. In this script the image
+"""This demo example shows how to use the RandomCropAndResize preprocessing
+layer. Operates on an image of elephant. In this script the image
 is loaded, then are passed through the preprocessing layers.
 Finally, they are shown using matplotlib.
 """
4 changes: 2 additions & 2 deletions keras_cv/__init__.py
@@ -41,5 +41,5 @@
 from keras_cv.core import FactorSampler
 from keras_cv.core import NormalFactorSampler
 from keras_cv.core import UniformFactorSampler

-__version__ = "0.8.1"
+from keras_cv.version_utils import __version__
+from keras_cv.version_utils import version
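The new `version_utils.py` module itself is not shown in this diff. Based on the commit message's stated goal of parity with KerasNLP, it presumably looks roughly like the sketch below — an assumption, not the actual file contents:

# keras_cv/version_utils.py -- hypothetical reconstruction
from keras_cv.api_export import keras_cv_export

# Unique source of truth for the version number.
__version__ = "0.8.2"


@keras_cv_export("keras_cv.version")
def version():
    return __version__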
2 changes: 1 addition & 1 deletion keras_cv/layers/object_detection/anchor_generator.py
@@ -172,7 +172,7 @@ def __call__(self, image=None, image_shape=None):
                 "Expected `image` to be a Tensor of rank 3. Got "
                 f"image.shape.rank={len(image.shape)}"
             )
-            image_shape = image.shape
+            image_shape = tuple(image.shape)

         results = {}
         for key, generator in self.anchor_generators.items():
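Context for the one-line fix above: under the TensorFlow backend, `image.shape` is a `tf.TensorShape` rather than a plain tuple, so downstream code expecting tuple semantics can misbehave; `tuple(...)` normalizes the type across backends. A quick standalone illustration (requires TensorFlow):

import tensorflow as tf

image = tf.zeros([512, 512, 3])
print(type(image.shape).__name__)  # TensorShape, not tuple
print(tuple(image.shape))          # (512, 512, 3) -- a plain tuple on every backend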
12 changes: 6 additions & 6 deletions keras_cv/layers/preprocessing/base_image_augmentation_layer.py
@@ -236,15 +236,15 @@ def _compute_output_signature(self, inputs):
         bounding_boxes = inputs.get(BOUNDING_BOXES, None)

         if bounding_boxes is not None:
-            fn_output_signature[
-                BOUNDING_BOXES
-            ] = self._compute_bounding_box_signature(bounding_boxes)
+            fn_output_signature[BOUNDING_BOXES] = (
+                self._compute_bounding_box_signature(bounding_boxes)
+            )

         segmentation_masks = inputs.get(SEGMENTATION_MASKS, None)
         if segmentation_masks is not None:
-            fn_output_signature[
-                SEGMENTATION_MASKS
-            ] = self.compute_image_signature(segmentation_masks)
+            fn_output_signature[SEGMENTATION_MASKS] = (
+                self.compute_image_signature(segmentation_masks)
+            )

         keypoints = inputs.get(KEYPOINTS, None)
         if keypoints is not None:
8 changes: 4 additions & 4 deletions keras_cv/layers/preprocessing/random_crop_and_resize.py
@@ -272,10 +272,10 @@ def from_config(cls, config):
             config["crop_area_factor"]
         )
         if isinstance(config["aspect_ratio_factor"], dict):
-            config[
-                "aspect_ratio_factor"
-            ] = keras.utils.deserialize_keras_object(
-                config["aspect_ratio_factor"]
+            config["aspect_ratio_factor"] = (
+                keras.utils.deserialize_keras_object(
+                    config["aspect_ratio_factor"]
+                )
             )
         return cls(**config)
@@ -17,6 +17,7 @@

 from keras_cv import bounding_box
 from keras_cv.api_export import keras_cv_export
+from keras_cv.backend import config
 from keras_cv.backend import keras
 from keras_cv.backend import ops
 from keras_cv.backend import scope
@@ -412,14 +413,16 @@ def _batch_augment(self, inputs):
     def call(self, inputs):
         # try to convert a given backend native tensor to TensorFlow tensor
         # before passing it over to TFDataScope
+        is_tf_backend = config.backend() == "tensorflow"
+        is_in_tf_graph = not tf.executing_eagerly()
         contains_ragged = lambda y: any(
             tree.map_structure(
                 lambda x: isinstance(x, (tf.RaggedTensor, tf.SparseTensor)),
                 tree.flatten(y),
             )
         )
         inputs_contain_ragged = contains_ragged(inputs)
-        if not inputs_contain_ragged:
+        if not is_tf_backend and not inputs_contain_ragged:
             inputs = tree.map_structure(
                 lambda x: tf.convert_to_tensor(x), inputs
             )
@@ -443,13 +446,14 @@ def call(self, inputs):
             # backend native tensors. This is to avoid breaking TF data
             # pipelines that can't easily be ported to become backend
             # agnostic.
-            if not inputs_contain_ragged and not contains_ragged(outputs):
-                outputs = tree.map_structure(
-                    # some layers return None, handle that case when
-                    # converting to tensors
-                    lambda x: ops.convert_to_tensor(x) if x is not None else x,
-                    outputs,
-                )
+            if not is_tf_backend and not is_in_tf_graph:
+                if not inputs_contain_ragged and not contains_ragged(outputs):
+                    outputs = tree.map_structure(
+                        # some layers return None, handle that case when
+                        # converting to tensors
+                        lambda x: ops.convert_to_tensor(x) if x is not None else x,
+                        outputs,
+                    )
         return outputs

     def _format_inputs(self, inputs):
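Condensed, the guard logic added above reduces to two cheap checks: skip the to-TensorFlow conversion when the backend already is TensorFlow, and never convert outputs back to backend-native tensors while tracing inside a TF graph (e.g. a `tf.data` pipeline). A small standalone probe of the second check, under the assumption that keras_cv and TensorFlow are installed:

import tensorflow as tf
from keras_cv.backend import config

# Check 1: on the TensorFlow backend, inputs are already tf.Tensors,
# so the convert-to-TF round-trip is skipped entirely.
print(config.backend() == "tensorflow")

# Check 2: inside a tf.data map function the code is being traced into
# a graph, so outputs must stay tf.Tensors (no backend-native conversion).
def probe(x):
    print("eager inside map:", tf.executing_eagerly())  # False at trace time
    return x

ds = tf.data.Dataset.from_tensor_slices([1, 2]).map(probe)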
@@ -549,3 +549,15 @@ def test_converts_ragged_to_dense_segmentation_masks(self):
             {"images": images, "segmentation_masks": segmentation_masks}
         )
         self.assertTrue(isinstance(result["segmentation_masks"], tf.Tensor))

+    def test_in_tf_data_pipeline(self):
+        images = np.random.randn(4, 100, 100, 3).astype("float32")
+        train_ds = tf.data.Dataset.from_tensor_slices(images)
+        train_ds = train_ds.map(lambda x: {"images": x})
+        train_ds = train_ds.map(
+            VectorizedRandomAddLayer(fixed_value=2.0)
+        ).batch(4)
+        for output in train_ds.take(1):
+            pass
+        self.assertTrue(isinstance(output["images"], tf.Tensor))
+        self.assertAllClose(output["images"], images + 2.0)
8 changes: 4 additions & 4 deletions keras_cv/layers/regularization/squeeze_excite.py
@@ -118,10 +118,10 @@ def get_config(self):
     @classmethod
     def from_config(cls, config):
         if isinstance(config["squeeze_activation"], dict):
-            config[
-                "squeeze_activation"
-            ] = keras.saving.deserialize_keras_object(
-                config["squeeze_activation"]
+            config["squeeze_activation"] = (
+                keras.saving.deserialize_keras_object(
+                    config["squeeze_activation"]
+                )
             )
         if isinstance(config["excite_activation"], dict):
             config["excite_activation"] = keras.saving.deserialize_keras_object(
6 changes: 3 additions & 3 deletions keras_cv/layers/vit_det_layers.py
@@ -430,9 +430,9 @@ def __init__(
             key_dim=self.project_dim // self.num_heads,
             use_bias=use_bias,
             use_rel_pos=use_rel_pos,
-            input_size=input_size
-            if window_size == 0
-            else (window_size, window_size),
+            input_size=(
+                input_size if window_size == 0 else (window_size, window_size)
+            ),
         )
         self.mlp_block = MLP(
             mlp_dim,
3 changes: 3 additions & 0 deletions keras_cv/metrics/coco/pycoco_wrapper.py
@@ -125,6 +125,9 @@ def _convert_predictions_to_coco_annotations(predictions):
     num_batches = len(predictions["source_id"])
     for i in range(num_batches):
         batch_size = predictions["source_id"][i].shape[0]
+        predictions["detection_boxes"][i] = predictions["detection_boxes"][
+            i
+        ].copy()
         for j in range(batch_size):
             max_num_detections = predictions["num_detections"][i][j]
             predictions["detection_boxes"][i][j] = _yxyx_to_xywh(
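Why the `.copy()` above fixes the JAX failure: `_yxyx_to_xywh` is applied to the boxes in place, and arrays handed over from JAX are typically backed by read-only buffers (and even with writable NumPy arrays, the caller's predictions would be silently mutated). A standalone NumPy illustration of the failure mode, with hypothetical box values:

import numpy as np

boxes = np.random.rand(8, 4).astype("float32")
boxes.setflags(write=False)  # mimic a read-only buffer, as JAX outputs can yield

try:
    boxes[:, [0, 1]] = boxes[:, [1, 0]]  # in-place yxyx -> xyxy style swap
except ValueError as err:
    print("in-place edit fails:", err)

safe = boxes.copy()  # copy first, as the fix above does
safe[:, [0, 1]] = safe[:, [1, 0]]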
6 changes: 3 additions & 3 deletions keras_cv/metrics/object_detection/box_coco_metrics.py
@@ -212,9 +212,9 @@ def result_fn(self, force=False):
             )
             result = {}
             for i, key in enumerate(METRIC_NAMES):
-                result[
-                    self.name_prefix() + METRIC_MAPPING[key]
-                ] = py_func_result[i]
+                result[self.name_prefix() + METRIC_MAPPING[key]] = (
+                    py_func_result[i]
+                )
             return result

         obj.result = types.MethodType(result_fn, obj)
1 change: 1 addition & 0 deletions keras_cv/models/__init__.py
@@ -190,6 +190,7 @@
 from keras_cv.models.object_detection.yolo_v8.yolo_v8_detector import (
     YOLOV8Detector,
 )
+from keras_cv.models.segmentation import BASNet
 from keras_cv.models.segmentation import DeepLabV3Plus
 from keras_cv.models.segmentation import SAMMaskDecoder
 from keras_cv.models.segmentation import SAMPromptEncoder
6 changes: 3 additions & 3 deletions keras_cv/models/backbones/densenet/densenet_backbone.py
@@ -119,9 +119,9 @@ def __init__(
             name=f"conv{len(stackwise_num_repeats) + 1}",
         )

-        pyramid_level_inputs[
-            f"P{len(stackwise_num_repeats) + 1}"
-        ] = utils.get_tensor_input_name(x)
+        pyramid_level_inputs[f"P{len(stackwise_num_repeats) + 1}"] = (
+            utils.get_tensor_input_name(x)
+        )
         x = keras.layers.BatchNormalization(
             axis=BN_AXIS, epsilon=BN_EPSILON, name="bn"
         )(x)
6 changes: 3 additions & 3 deletions keras_cv/models/backbones/resnet_v1/resnet_v1_backbone.py
@@ -130,9 +130,9 @@ def __init__(
                 first_shortcut=(block_type == "block" or stack_index > 0),
                 name=f"v2_stack_{stack_index}",
             )
-            pyramid_level_inputs[
-                f"P{stack_index + 2}"
-            ] = utils.get_tensor_input_name(x)
+            pyramid_level_inputs[f"P{stack_index + 2}"] = (
+                utils.get_tensor_input_name(x)
+            )

         # Create model.
         super().__init__(inputs=inputs, outputs=x, **kwargs)
6 changes: 3 additions & 3 deletions keras_cv/models/backbones/resnet_v2/resnet_v2_backbone.py
@@ -136,9 +136,9 @@ def __init__(
                 first_shortcut=(block_type == "block" or stack_index > 0),
                 name=f"v2_stack_{stack_index}",
             )
-            pyramid_level_inputs[
-                f"P{stack_index + 2}"
-            ] = utils.get_tensor_input_name(x)
+            pyramid_level_inputs[f"P{stack_index + 2}"] = (
+                utils.get_tensor_input_name(x)
+            )

         x = keras.layers.BatchNormalization(
             axis=BN_AXIS, epsilon=BN_EPSILON, name="post_bn"
6 changes: 3 additions & 3 deletions keras_cv/models/backbones/vit_det/vit_det_backbone.py
@@ -144,9 +144,9 @@ def __init__(
             num_heads=num_heads,
             use_bias=use_bias,
             use_rel_pos=use_rel_pos,
-            window_size=window_size
-            if i not in global_attention_indices
-            else 0,
+            window_size=(
+                window_size if i not in global_attention_indices else 0
+            ),
             input_size=(img_size // patch_size, img_size // patch_size),
         )(x)
         x = keras.models.Sequential(
1 change: 0 additions & 1 deletion keras_cv/models/legacy/darknet.py
@@ -76,7 +76,6 @@

 @keras.utils.register_keras_serializable(package="keras_cv.models")
 class DarkNet(keras.Model):
-
     """Represents the DarkNet architecture.
     The DarkNet architecture is commonly used for detection tasks. It is