
Null check operator used on a null value in the method interpreter.run([input], [output]); #234

Open
peterbullmatti opened this issue Apr 26, 2023 · 3 comments

Comments


I'm new to the TensorFlow world. I followed a tutorial and created the method below for text embedding, and the printed output looks OK:

I/flutter (23098): input [10000, 10001, 0, 0, 0, 0, ... 
I/flutter (23098): output check [0.0, 0.0, 0.0, ...
I/flutter (23098): interpreter true
I/flutter (23098): inputShape: [1, 384]
I/flutter (23098): outputShape: [1, 384]
I/flutter (23098): input length: 384
I/flutter (23098): output length: 384

Here is my method:

 Future<List<double>> runModel(
      Interpreter interpreter, String inputText, List<int> inputShape) async {
    List<int> input = await stringToInputVector(inputText, inputShape[1]);

    List<int> outputShape = interpreter.getOutputTensor(0).shape;
    List<double> output =
        List<double>.generate(outputShape[0] * outputShape[1], (index) => 0.0);

    print('input $input');
    print('output check $output');
    print('interpreter ${interpreter.isAllocated}');
    print('inputShape: ${inputShape}');
    print('outputShape: ${outputShape}');
    print('input length: ${input.length}');
    print('output length: ${output.length}');
    
    interpreter.allocateTensors();
    interpreter.run([input], [output]);
    return output;
  }

  Future<String> extractRelevantInfo(String question, List<String> historyTexts,
      Interpreter modelInterpreter, List<int> inputShape) async {
    List<double> questionVector =
        await runModel(modelInterpreter, question, inputShape);
    double maxSimilarity = -1;
    String mostRelevantText = ''; // initialized so Dart null safety allows returning it
    for (String historyText in historyTexts) {
      List<double> historyTextVector =
          await runModel(modelInterpreter, historyText, inputShape);
      double similarity =
          await computeCosineSimilarity(questionVector, historyTextVector);

      if (similarity > maxSimilarity) {
        maxSimilarity = similarity;
        mostRelevantText = historyText;
      }
    }

    return mostRelevantText;
  }

But now I'm hitting this error:

E/flutter (23098): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: Null check operator used on a null value
E/flutter (23098): #0      Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:196:41)
E/flutter (23098): #1      Interpreter.run (package:tflite_flutter/src/interpreter.dart:157:5)
E/flutter (23098): #2      TensorflowAPI.runModel (package:accounting/api_services/tensorflow_api.dart:135:17)

I have no idea where this error comes from. Here is how I call the method. For text embedding I tried the models mobilebert_1_default.tflite, albert_lite_base_squadv1_1.tflite, and lite-model_universal-sentence-encoder-qa-ondevice.tflite, and all of them produce this error:

await tensorflowAPI.extractRelevantInfo(
        userQuestion, historyStrings, interpreter, inputShape);
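
The computeCosineSimilarity helper called above isn't shown in the issue. A typical synchronous implementation over two equal-length embedding vectors (a sketch, not the reporter's actual code, which is awaited and may be async) would look like:

```dart
import 'dart:math' as math;

/// Cosine similarity between two equal-length embedding vectors.
/// Returns a value in [-1, 1]; 0.0 if either vector has zero magnitude.
double computeCosineSimilarity(List<double> a, List<double> b) {
  assert(a.length == b.length);
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA == 0 || normB == 0) return 0.0;
  return dot / (math.sqrt(normA) * math.sqrt(normB));
}
```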

JSYRD commented Apr 27, 2023

If your input is not a single tensor, try interpreter.runForMultipleInputs(inputs, outputs) rather than interpreter.run([input], [output]).
And if it is a single tensor, call interpreter.run(input, output) directly; don't wrap the arguments in an extra [].
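
A minimal sketch of the two call shapes, assuming an already-created Interpreter for a model with input and output shape [1, 384] (the interpreter and the per-tensor buffers here are assumptions for illustration):

```dart
// Single input/output tensor: the buffers already carry the batch
// dimension, so pass them directly without extra [] wrapping.
final input = [List<int>.filled(384, 0)];        // shape [1, 384]
final output = [List<double>.filled(384, 0.0)];  // shape [1, 384]
interpreter.run(input, output);

// If the model had several input/output tensors instead:
final inputs = [inputTensor0, inputTensor1];          // one buffer per input tensor
final outputs = {0: outputBuffer0, 1: outputBuffer1}; // output index -> buffer
interpreter.runForMultipleInputs(inputs, outputs);
```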


Mroles commented May 29, 2023

I also encountered the same issue, but with object detection models. I am following the example in the tflite_flutter repo. After playing around a bit, I realized my outputs are null beyond index 0. Is there something I am missing?

Future<void> processImage() async {
    if (imagePath != null) {
      final imageData = File(imagePath!).readAsBytesSync();
      image = img.decodeImage(imageData);
      setState(() {});

      final imageInput = img.copyResize(
        image!,
        width: WIDTH,
        height: HEIGHT,
      );

      final imageMatrix = List.generate(
        imageInput.height,
        (y) => List.generate(
          imageInput.width,
          (x) {
            final pixel = imageInput.getPixel(x, y);
            return [img.getRed(pixel), img.getGreen(pixel), img.getBlue(pixel)];
          },
        ),
      );
      runInference(imageMatrix);
    }
  }

And here is the runInference method:

  
  Future<void> runInference(
    List<List<List<num>>> imageMatrix,
  ) async {
    final input = [imageMatrix];

    //output shape [1,100, 4]
    final output = [
      List.generate(
        100,
        (index) => List<double>.filled(4, 0.0),
      )
    ];
    // Run inference
    interpreter.run(input, output);

    // Get first output tensor
    final result = output.first;

    setState(() {});

    interpreter.close();
  }
  


kvanry commented Jun 14, 2023

@Mroles

I believe it's a bug in the package's code; I forked it and fixed it. The problem is that it tries to use a single buffer per input even though the model has multiple output buffers, so runForMultipleInputs is expecting something like

Map<int, Buffer>

but object detection needs

Map<int, Map<int, Buffer>>

I fixed my fork for my own purposes, but I hope that helps. This repo is no longer maintained, and the TF team at Google has a new repo, though the fix isn't there yet either...

https://github.com/tensorflow/flutter-tflite
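
For context, a typical SSD-style detection model exports four output tensors (boxes, classes, scores, count); check your model's actual layout with interpreter.getOutputTensor(i).shape. Under that assumption, the output side of runForMultipleInputs would be shaped roughly like this (a sketch with illustrative names, not the package's documented API):

```dart
// Sketch: one buffer per output tensor, keyed by output index.
// Shapes assume a detection model with up to 10 detections per image.
final outputs = <int, Object>{
  0: [List.generate(10, (_) => List<double>.filled(4, 0.0))], // boxes   [1, 10, 4]
  1: [List<double>.filled(10, 0.0)],                          // classes [1, 10]
  2: [List<double>.filled(10, 0.0)],                          // scores  [1, 10]
  3: [0.0],                                                   // count   [1]
};
interpreter.runForMultipleInputs([inputImageTensor], outputs);
```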
