Merge #618
618: fix litmus bug when using multiple inputs r=dkim-furiosa a=maxstate

### Problem
* litmus assumed that a model has exactly one input.

### Solution
* Support models with multiple inputs.

### Testing
* Existing tests pass.

### Checklist
- [x] Will this be part of a product update? If yes, please update [Changelog](https://github.com/furiosa-ai/furiosa-sdk-private/blob/main/CHANGELOG.md).
- [x] Changes are less than 1000 lines (refer to [How to reduce Pull Request size](https://github.com/furiosa-ai/npu-tools/blob/master/CodingConvention.md#pull-request-%ED%81%AC%EA%B8%B0-%EC%A4%84%EC%9D%B4%EB%8A%94-%EB%B2%95))
- [x] My code has passed black/isort.
- [x] I have performed a self-review of my code.
- [x] All CI checks have passed.


Co-authored-by: SH. Song <[email protected]>
furiosa-bors[bot] and maxstate authored Apr 25, 2023
2 parents be83adf + 786ced0 commit 85fb4c7
Showing 1 changed file with 5 additions and 4 deletions.
`python/furiosa-litmus/furiosa/litmus/__init__.py` (5 additions, 4 deletions):

```diff
@@ -33,6 +33,7 @@ def calibrate_with_random_data(
     initializers = set(tensor.name for tensor in model.graph.initializer)
     rng = np.random.default_rng()
     for _ in range(dataset_size):
+        inputs = []
         for value_info in model.graph.input:
             if value_info.name in initializers:
                 continue
@@ -77,18 +78,18 @@ def calibrate_with_random_data(
             )
             np_dtype = onnx.mapping.TENSOR_TYPE_TO_NP_TYPE[value_info.type.tensor_type.elem_type]
             if np.issubdtype(np_dtype, np.floating):
-                inputs = rng.standard_normal(size=shape, dtype=np_dtype)
+                inputs.append(rng.standard_normal(size=shape, dtype=np_dtype))
             elif np.issubdtype(np_dtype, np.integer):
                 iinfo = np.iinfo(np_dtype)
-                inputs = rng.integers(
-                    iinfo.min, iinfo.max, size=shape, dtype=np_dtype, endpoint=True
+                inputs.append(
+                    rng.integers(iinfo.min, iinfo.max, size=shape, dtype=np_dtype, endpoint=True)
                 )
             else:
                 elem_type = onnx.TensorProto.DataType.Name(value_info.type.tensor_type.elem_type)
                 raise NotImplementedError(
                     f"tensor '{value_info.name}' is of {elem_type} but a model whose input tensor is of {elem_type} cannot be randomly calibrated yet"
                 )
-        calibrator.collect_data([[inputs]])
+        calibrator.collect_data([inputs])
     return calibrator.compute_range()
```
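The shape of the fix can be illustrated without ONNX or the quantizer: build one list of random arrays per calibration sample, with one entry per graph input, and hand each sample to the calibrator as `[inputs]` rather than the old `[[inputs]]` (which both over-nested the batch and, since `inputs` was a bare array, kept only the last input generated). This is a minimal sketch; `FakeCalibrator` and the `graph_inputs` tuples are hypothetical stand-ins for the real calibrator and `model.graph.input`.

```python
import numpy as np

class FakeCalibrator:
    """Hypothetical stand-in that just records collected samples."""
    def __init__(self):
        self.samples = []

    def collect_data(self, batch):
        # Each element of `batch` is one sample: a list with one array
        # per (non-initializer) graph input.
        self.samples.extend(batch)

# Stand-ins for model.graph.input entries: (name, shape, numpy dtype).
graph_inputs = [
    ("image", (1, 3, 8, 8), np.float32),
    ("mask", (1, 8, 8), np.int64),
]

rng = np.random.default_rng(0)
calibrator = FakeCalibrator()
dataset_size = 2

for _ in range(dataset_size):
    inputs = []  # one random tensor per graph input, as in the fix
    for name, shape, np_dtype in graph_inputs:
        if np.issubdtype(np_dtype, np.floating):
            inputs.append(rng.standard_normal(size=shape, dtype=np_dtype))
        elif np.issubdtype(np_dtype, np.integer):
            iinfo = np.iinfo(np_dtype)
            inputs.append(
                rng.integers(iinfo.min, iinfo.max, size=shape,
                             dtype=np_dtype, endpoint=True)
            )
    # Before the fix this was [[inputs]] with `inputs` holding only the
    # last tensor generated; now each sample carries every input.
    calibrator.collect_data([inputs])

assert len(calibrator.samples) == dataset_size
assert all(len(sample) == len(graph_inputs) for sample in calibrator.samples)
```

Each recorded sample now has as many arrays as the model has inputs, which is what lets calibration work for multi-input models.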
