
Trying to add an operator by myself #155

Open
chenj133 opened this issue Dec 13, 2023 · 2 comments

Comments

@chenj133

I'm trying to add a mod operator by myself.
In the dialect, the core code of the div operator looks like this:
```cpp
LogicalResult top::DivOp::init(InferenceParameter &p) {
  auto binary = new Binary();
  int index0 = 0, index1 = 1;
  if (getIsReverse()) {
    index0 = 1, index1 = 0;
  }
  auto lhs_shape = module::getShape(getInputs()[index0]);
  auto rhs_shape = module::getShape(getInputs()[index1]);

  (*binary)
      .hs(p.inputs[index0], p.inputs[index1], lhs_shape, rhs_shape)
      .dst(p.outputs[0], module::getShape(getOutput()))
      .do_relu(getDoRelu())
      .relu_limit(getReluLimit().convertToDouble())
      .algorithem(algorithm::binary_div)
      .setup();

  p.handle = (void *)binary;

  return success();
}
```

I can't find the implementation of `algorithm::binary_div`. It doesn't seem to be a function from the C++ standard `<algorithm>` library, yet the file does directly include the standard `<algorithm>` header. How can I replace this division with the `std::fmod` function?

@chenj133
Author

I copied the Copy operator and edited the Mod.cpp file in the dialect as follows:

```cpp
//===----------------------------------------------------------------------===//
//
// Copyright (C) 2022 Sophgo Technologies Inc. All rights reserved.
//
// TPU-MLIR is licensed under the 2-Clause BSD License except for the
// third-party components.
//
//===----------------------------------------------------------------------===//

#include "tpu_mlir/Support/Module.h"

int64_t top::ModOp::getFLOPs() { return module::getNumElements(getOutput()); }

LogicalResult top::ModOp::init(InferenceParameter &p) { return success(); }

void top::ModOp::deinit(InferenceParameter &p) {}

LogicalResult top::ModOp::inference(InferenceParameter &p) {
  float *input_data0 = p.inputs[0];
  float *input_data1 = p.inputs[1];
  float *output_data = p.outputs[0];

  auto shape = module::getI64Array(this->getShape());
  auto i_stride = module::getI64Array(this->getInputStride());
  auto o_stride = module::getI64Array(this->getOutputStride());
  std::vector<int64_t> shape_4 = {1, 1, 1, 1};
  std::vector<int64_t> i_stride_4 = {0, 0, 0, 0};
  std::vector<int64_t> o_stride_4 = {0, 0, 0, 0};
  int num_dims = shape->size();
  assert(num_dims <= 4);
  assert(i_stride->size() == shape->size());
  assert(o_stride->size() == shape->size());
  for (int end = num_dims - 1, idx = 3; end >= 0 && idx >= 0; end--, idx--) {
    shape_4[idx] = shape->at(end);
    i_stride_4[idx] = i_stride->at(end);
    o_stride_4[idx] = o_stride->at(end);
  }

  for (int n = 0; n < shape_4[0]; n++) {
    for (int c = 0; c < shape_4[1]; c++) {
      for (int h = 0; h < shape_4[2]; h++) {
        for (int w = 0; w < shape_4[3]; w++) {
          int in_index = n * i_stride_4[0] + c * i_stride_4[1] +
                         h * i_stride_4[2] + w * i_stride_4[3];
          int out_index = n * o_stride_4[0] + c * o_stride_4[1] +
                          h * o_stride_4[2] + w * o_stride_4[3];
          output_data[out_index] =
              std::fmod(input_data0[in_index], input_data1[in_index]);
        }
      }
    }
  }
  return success();
}

void top::ModOp::shape_inference() {
  broadcast_shape_inference(getOperation());
  for (int i = 0; i < getNumOperands(); i++) {
    auto value = getInputs()[i];
    broadcast_tensor_reshape(getOutput(), value);
  }
}
```

But compilation fails with:

```
/workspace/tpu-mlir/lib/Dialect/Top/Interfaces/Mod.cpp:22:42: error: no member named 'getShape' in 'tpu_mlir::top::ModOp'
auto shape = module::getI64Array(this->getShape());
```

Just like Copy.cpp, I added `#include "tpu_mlir/Support/Module.h"` at the very top. Why does it fail?

@lordrebel

Maybe you forgot to add the op definition in the ODS file: include/tpu_mlir/Dialect/Tpu/IR/TpuOps.td?
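For reference, ODS op definitions are written in TableGen, and MLIR generates accessors such as `getShape()` only for arguments and attributes declared there; the `no member named 'getShape'` error above is consistent with that declaration being absent. Since the failing symbol is `top::ModOp`, the declaration would presumably belong in the Top dialect's .td file. The fragment below is a hypothetical sketch modeled on generic MLIR ODS conventions, not the actual tpu-mlir definition; all names and traits are assumptions:

```
// Hypothetical ODS sketch; not the real tpu-mlir declaration.
def Top_ModOp : Top_Op<"Mod"> {
  let summary = "element-wise modulo operation";
  let arguments = (ins
    Variadic<AnyTensor>:$inputs   // declared arguments generate getInputs()
  );
  let results = (outs AnyTensor:$output);  // generates getOutput()
}
```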
