How to use with a machine vision camera...

I know what you're thinking: how do I use this without the webcam? Well, I've added a new class just for you. It's called the LAUCalTagWidget. Just create the widget with the following code:

#include "laucaltagglwidget.h"

LAUCalTagWidget p;
p.setFrameSize(640, 480);
p.setPixelFormat(QOpenGLTexture::RGB);
p.setPixelType(QOpenGLTexture::UInt8);
p.show();

The setFrameSize(int cols, int rows) method tells the widget to create a 2D texture on the GPU of size cols x rows. The setPixelFormat(QOpenGLTexture::PixelFormat) method supplies one of the two arguments needed by the QOpenGLTexture::setData() method to tell the GPU what the format of your incoming machine vision camera frame buffer is. In this case, I am saying that my frame buffer is composed of red, green, and blue in chunky (interleaved) format. The setPixelType(QOpenGLTexture::PixelType) method tells the widget how the bits associated with each color component of each pixel are to be interpreted. In this case, I am specifying 8-bit unsigned characters, so each pixel is composed of three unsigned bytes, or 24 bits per pixel.
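
For reference, here is a rough sketch of how those two parameters map onto the QOpenGLTexture API when a frame is uploaded to the GPU. The texture variable, the frameBuffer pointer, and the internal format are illustrative assumptions, not the widget's actual private implementation:

QOpenGLTexture *texture = new QOpenGLTexture(QOpenGLTexture::Target2D);
texture->setSize(640, 480);                   // from setFrameSize()
texture->setFormat(QOpenGLTexture::RGBA32F);  // floating-point RGBA on the GPU
texture->allocateStorage();
texture->setData(QOpenGLTexture::RGB,         // from setPixelFormat()
                 QOpenGLTexture::UInt8,       // from setPixelType()
                 (const void *)frameBuffer);  // your camera's frame buffer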

You can now use this widget to process your incoming video using the following pseudo-code:

LAUMachineVisionCamera mvc;   // stand-in for your camera's API (pseudo-code)
while (1) {
    // grab the next frame buffer from the camera
    unsigned char *frame = mvc.grabFrame();

    // upload the frame to the widget's GPU texture for processing
    p.setFrame(frame);

    // pull the processed result back off the GPU and save it to disk
    LAUMemoryObject obj = p.grabImage();
    obj.save(QString());
}
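
Note that the buffer returned by grabFrame() is assumed to match the format declared above; for a 640 x 480 RGB frame of 8-bit unsigned characters, that is 640 x 480 x 3 bytes of chunky pixel data.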

In any case, the GPU texture is a floating-point, RGBA texture and is processed in exactly the same way regardless of the incoming video format. So monochrome machine vision cameras will have their frame buffers copied to the red channel of the GPU texture, which is then converted to grayscale using the RGB-to-luminance transform. The only impact this has on you is in how you set the threshold offset used to binarize the luminance image.
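
To make that last point concrete, here is a small sketch of the luminance conversion. The exact weights used by the widget's shader are an assumption on my part; the standard Rec. 601 weights are shown:

// hypothetical RGB-to-luminance transform (Rec. 601 weights assumed)
float luminance(float r, float g, float b)
{
    return 0.299f * r + 0.587f * g + 0.114f * b;
}

// for a monochrome camera, only the red channel of the GPU texture is
// filled, so the resulting luminance is scaled down by the red weight;
// this is why the binarization threshold offset may need adjusting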
