Saturday, 13 March 2021

Enter non-ascii characters into an app created by Qt for webassembly

    With the help of wasm, we can port apps written in Qt to the browser. This saves us a lot of time in many cases, but just like on android/ios, Qt for wasm has its limitations, and the Qt company does not show much interest in fixing those issues. I guess this is because the major markets of Qt are automotive, IoT etc., not mobile phones or browsers.

    In this post I would like to write down how I worked around issue 78826--Cannot enter non-ascii characters into TextField and QLineEdit from the browser (wasm).


Limitations


1. Cannot access system fonts


    Unlike desktop/mobile, Qt for wasm cannot access system fonts, so first download the font you want to display/enter. Today I will use the font downloaded from oppo as an example. After you download the ttf, compress it with qCompress in order to reduce its size.



#include <QDebug>
#include <QFile>

QFile file("OPPOSans-H.ttf");
if(file.open(QIODevice::ReadOnly)){
    auto const contents = qCompress(file.readAll(), 9);
    QFile fwrite("OPPOSans-H.zip");
    if(fwrite.open(QIODevice::WriteOnly)){
        fwrite.write(contents);
    }else{
        qDebug()<<__func__<<": cannot write file";
    }
}else{
    qDebug()<<__func__<<": cannot open file";
}


    After that, add the zip file to the Qt resource system, load it, and set it as the font of the QPlainTextEdit.
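    The resource entry could look like the sketch below; the assets folder and prefix are assumptions, they just have to match the ":/assets/OPPOSans-H.zip" path used by font_manager.

<RCC>
    <qresource prefix="/">
        <file>assets/OPPOSans-H.zip</file>
    </qresource>
</RCC>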


//MainWindow.cpp, set font for QPlainTextEdit
QFont const monospace(font_manager().get_font_family());
ui->plainTextEdit->setFont(monospace);

//font_manager.cpp, a class used to register the font and access the font family
font_manager::font_manager()
{
    QFile ifile(":/assets/OPPOSans-H.zip");
    if(ifile.open(QIODevice::ReadOnly)){
        auto const font_id = QFontDatabase::addApplicationFontFromData(qUncompress(ifile.readAll()));
        qDebug()<<__func__<<"font id:"<<font_id;

        //addApplicationFontFromData returns -1 on failure, guard it so
        //at(0) cannot be called on an empty list
        if(font_id != -1){
            font_family_ = QFontDatabase::applicationFontFamilies(font_id).at(0);
        }
    }
}

QFont font_manager::get_font() const
{
    return QFont(get_font_family());
}

QString font_manager::get_font_family() const noexcept
{
    return font_family_;
}


2. Cannot enter Chinese into QPlainTextEdit

    This issue is quite annoying, but we do have a "stupid" solution to overcome it. The way I found is to use the prompt dialog of a js library to input the text; the library I picked is called bootbox. You can work around this limitation with the following steps.

a. Use js to enter the text


var _global_text_process_result = ''

//You need to allocate memory before returning the string to the c++ side
function getStringFromJS(targetStr, msg)
{
    console.log(msg + ":" + targetStr);
    var jsString = targetStr;
    var lengthBytes = lengthBytesUTF8(jsString) + 1;
    var stringOnWasmHeap = _malloc(lengthBytes);
    stringToUTF8(jsString, stringOnWasmHeap, lengthBytes);
    console.log(msg + " heap:" + stringOnWasmHeap);

    return stringOnWasmHeap;
}

//call the prompt dialog of bootbox
function multiLinesPrompt(inputString, inputMode, inputTitle)
{
    bootbox.prompt({
        title: inputTitle,
        inputType: inputMode,
        value: inputString,
        //result is null when the user cancels the dialog
        callback: function (result) {
            console.log(result);
            if (result !== null) {
                _global_text_process_result = result;
            }
        }
    });
}

//This function is used to avoid updating the text repeatedly
function globalTextProcessResultIsValid()
{
    return _global_text_process_result !== ""
}

function getGlobalTextProcessResult()
{
    var results = getStringFromJS(_global_text_process_result, "js getGlobalTextProcessResult")
    _global_text_process_result = ""

    return results
}


b. Call the js function from c++


#include "custom_qplain_text_edit.hpp"

#include <QDebug>
#include <QTimer>

#ifdef Q_OS_WASM

#include <emscripten.h>

namespace{

EM_JS(bool, global_text_process_result_is_valid, (), {
          return globalTextProcessResultIsValid();
      })

EM_JS(char*, get_global_text_process_result, (), {
          return getGlobalTextProcessResult();
      })

EM_JS(void, multi_lines_prompt, (char const *input_strings, char const *input_mode, char const *title), {
          multiLinesPrompt(UTF8ToString(input_strings), UTF8ToString(input_mode), UTF8ToString(title));
      })

}

#endif

custom_qplain_text_edit::custom_qplain_text_edit(QWidget *parent) :
    QPlainTextEdit(parent)
{
#ifdef Q_OS_WASM  
    timer_ = new QTimer(this);
    timer_->setInterval(100);

    connect(timer_, &QTimer::timeout, [this]()
    {
        if(global_text_process_result_is_valid()){
            char *msg = get_global_text_process_result();
            setPlainText(QString(msg));
            free(msg);
            timer_->stop();
        }
    });
#endif
}

void custom_qplain_text_edit::mousePressEvent(QMouseEvent *e)
{
#ifdef Q_OS_WASM
    if(e->button() == Qt::RightButton){
        QPlainTextEdit::mousePressEvent(e);
    }else{
        multi_lines_prompt(toPlainText().toUtf8().data(), "textarea", "Input plain text");
        timer_->start();
    }
#else
    QPlainTextEdit::mousePressEvent(e);
#endif
}

c. Include the js scripts and css in the generated html file

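    A sketch of what the includes could look like, assuming you ship bootbox together with its jquery and bootstrap dependencies next to the generated html; the file names below are illustrative, and text_input_helpers.js stands for the helper functions of step a.

<link rel="stylesheet" href="bootstrap.min.css">
<script src="jquery.min.js"></script>
<script src="bootstrap.min.js"></script>
<script src="bootbox.min.js"></script>
<!--the helper functions of step a-->
<script src="text_input_helpers.js"></script>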



    To be honest, this is really troublesome, and I hope that one day this shortcoming can be resolved.

Source codes

Github

Full package, includes the fonts and js libraries

Thursday, 25 June 2020

Dense extreme inception edge detection with opencv and deep learning

    This tutorial introduces how to perform edge detection with opencv and deep learning. You will learn how to apply the Dense Extreme Inception Network (DexiNed) and Holistically-Nested Edge Detection (HED) to images and videos. If you are interested in HED, I recommend you study a brilliant post of pyimagesearch; here I will explain the main points of how to perform edge detection with these networks using c++ and opencv.


Apply HED by opencv and c++


    This page explains how to do it; you can find the links to the model and prototxt there too. The author registers a new layer, without which opencv cannot generate proper results.
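    For reference, below is a minimal sketch of such a crop layer, assuming the custom layer api of opencv 4.x; the center-crop behavior follows the opencv HED sample, while the names crop_layer and register_crop_layer are mine.

#include <opencv2/dnn.hpp>

//HED contains "Crop" layers which crop the center region of the first
//input down to the spatial size of the second input
class crop_layer : public cv::dnn::Layer
{
public:
    explicit crop_layer(cv::dnn::LayerParams const &params) : cv::dnn::Layer(params) {}

    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams &params)
    {
        return cv::Ptr<cv::dnn::Layer>(new crop_layer(params));
    }

    //output keeps batch/channels of the first input, takes height/width
    //of the second input
    bool getMemoryShapes(std::vector<cv::dnn::MatShape> const &inputs, int const,
                         std::vector<cv::dnn::MatShape> &outputs,
                         std::vector<cv::dnn::MatShape> &) const override
    {
        cv::dnn::MatShape shape = inputs[0];
        shape[2] = inputs[1][2];
        shape[3] = inputs[1][3];
        outputs.assign(1, shape);
        return false;
    }

    void forward(cv::InputArrayOfArrays inputs_arr, cv::OutputArrayOfArrays outputs_arr,
                 cv::OutputArrayOfArrays) override
    {
        std::vector<cv::Mat> inputs, outputs;
        inputs_arr.getMatVector(inputs);
        outputs_arr.getMatVector(outputs);
        cv::Mat &inp = inputs[0];
        cv::Mat &out = outputs[0];
        int const y_start = (inp.size[2] - out.size[2]) / 2;
        int const x_start = (inp.size[3] - out.size[3]) / 2;
        //copy the center crop of every (batch, channel) plane to the output
        for(int n = 0; n != out.size[0]; ++n){
            for(int c = 0; c != out.size[1]; ++c){
                cv::Mat in_plane(inp.size[2], inp.size[3], CV_32F, inp.ptr<float>(n, c));
                cv::Mat out_plane(out.size[2], out.size[3], CV_32F, out.ptr<float>(n, c));
                in_plane(cv::Rect(x_start, y_start, out.size[3], out.size[2])).copyTo(out_plane);
            }
        }
    }
};

//call this once before cv::dnn::readNet
void register_crop_layer()
{
    cv::dnn::LayerFactory::registerLayer("Crop", crop_layer::create);
}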

Apply DexiNed by opencv and c++



    You can find an explanation of DexiNed on this page. In order to perform edge detection with DexiNed, we need to convert the model to onnx; I prefer pytorch for this purpose. Why do I not prefer tensorflow? Because converting a tensorflow model to a format opencv can read is much more complicated, in my experience; tensorflow is feature rich, but I always feel like they are trying very hard to make things unnecessarily complicated, and their notoriously bad api design explains this very well.

1. Convert the pytorch model of DexiNed to onnx

  1. Clone the project blogCodes2
  2. Navigate into edges_detection_with_deep_learning
  3. Clone the project DexiNed
  4. Copy the file edges_detection_with_deep_learning/model.py into DexiNed/DexiNed-Pytorch
  5. Run the script to_onnx.py
    If you do not want to go through the trouble, just download the model from here.

2. Load and forward images with DexiNed and HED




void switch_to_cuda(cv::dnn::Net &net)
{
    try {
        for(auto const &vpair : cv::dnn::getAvailableBackends()){
            std::cout<<vpair.first<<", "<<vpair.second<<std::endl;
            if(vpair.first == cv::dnn::DNN_BACKEND_CUDA && vpair.second == cv::dnn::DNN_TARGET_CUDA){
                std::cout<<"can switch to cuda"<<std::endl;
                net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
                net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
                break;
            }
        }
    }catch(std::exception const &ex){
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_DEFAULT);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
        throw std::runtime_error(ex.what());
    }
}

std::tuple<cv::Mat, long long> forward_utils(cv::dnn::Net &net, cv::Mat const &input, cv::Size const &blob_size)
{
    using namespace std::chrono;

    //measure duration
    auto const start = high_resolution_clock::now();
    cv::Mat blob = cv::dnn::blobFromImage(input, 1.0, blob_size,
                                          cv::Scalar(104.00698793, 116.66876762, 122.67891434), false, false);
    net.setInput(blob);
    cv::Mat out = net.forward();
    cv::resize(out.reshape(1, blob_size.height), out, input.size());
    //the data type of out is CV_32F(single channel, floating point) so we need to upscale the value and convert
    //it to CV_8U(single channel, uchar)
    out *= 255;
    out.convertTo(out, CV_8U);
    //convert gray to bgr because we need to create montage(1 row, 3 column of images in our case)
    auto const finish = high_resolution_clock::now();
    auto const elapsed = duration_cast<milliseconds>(finish - start).count();
    cv::cvtColor(out, out, cv::COLOR_GRAY2BGR);

    return {out, elapsed};
}

class hed_edges_detector
{
public:
    hed_edges_detector(std::string const &weights, std::string const &config) :
        net_(cv::dnn::readNet(config, weights))
    {
        switch_to_cuda(net_);
    }

    long long elapsed() const
    {
        return elapsed_;
    }

    cv::Mat forward(cv::Mat const &input)
    {
        auto result = forward_utils(net_, input, {500, 500});
        elapsed_ += std::get<1>(result);
        return std::get<0>(result);
    }

private:
    long long elapsed_ = 0;
    cv::dnn::Net net_;
};

class dexi_edges_detector
{
public:
    explicit dexi_edges_detector(std::string const &model) :
        net_(cv::dnn::readNet(model))
    {
        switch_to_cuda(net_);
    }

    long long elapsed() const
    {
        return elapsed_;
    }

    cv::Mat forward(cv::Mat const &input)
    {
        auto result = forward_utils(net_, input, {400, 400});
        elapsed_ += std::get<1>(result);
        return std::get<0>(result);
    }

private:
    long long elapsed_ = 0;
    cv::dnn::Net net_;
};


3. Detect edges of an image

   


void test_image(std::string const &mpath)
{
    cv::Mat img = cv::imread("2007_000129.jpg");
    hed_edges_detector hed(mpath + "hed_pretrained_bsds.caffemodel", mpath + "deploy.prototxt");
    auto hed_out = hed.forward(img);

    dexi_edges_detector dexi(mpath + "24_model.onnx");
    auto dexi_out = dexi.forward(img);

    cv::Size const frame_size(img.cols, img.rows);
    int constexpr grid_x = 3;
    int constexpr grid_y = 1;
    ocv::montage mt(frame_size, grid_x, grid_y);
    mt.add_image(img);
    mt.add_image(hed_out);
    mt.add_image(dexi_out);

    cv::imshow("results", mt.get_montage());
    cv::imwrite("results2.jpg", mt.get_montage());
    cv::waitKey();
}

4. Detect edges of a video




void test_video(std::string const &mpath)
{
    cv::VideoCapture cap("pedestrian.mp4");
    if(cap.isOpened()){
        hed_edges_detector hed(mpath + "hed_pretrained_bsds.caffemodel", mpath + "deploy.prototxt");
        dexi_edges_detector dexi(mpath + "24_model.onnx");

        //unique_ptr is a resource manager class(smart pointer) of c++,
        //we allocate memory by the reset(or make_unique) api, and after
        //leaving the scope(a scope is surrounded by {}), the memory is
        //released. In c++, the best way to manage resources is to avoid
        //explicit memory allocation; if you really need to do it, guard
        //your memory with a smart pointer. I use unique_ptr here because I
        //cannot initialize the objects before I know the frame size of the video.
        std::unique_ptr<ocv::montage> mt;
        std::unique_ptr<cv::VideoWriter> vwriter;

        cv::Mat frame;
        float frame_count = 0;
        while(1){
            cap>>frame;
            if(frame.empty()){
                break;
            }

            ++frame_count;
            cv::resize(frame, frame, {}, 0.5, 0.5);
            auto const hed_out = hed.forward(frame);
            auto const dexi_out = dexi.forward(frame);
            if(!mt){
                //initialize the class to create montage
                //First arguments tell the class the size of each frame
                cv::Size const frame_size(frame.cols, frame.rows);
                int constexpr grid_x = 3;
                int constexpr grid_y = 1;
                mt.reset(new ocv::montage(frame_size, grid_x, grid_y));
            }
            if(!vwriter){
                auto const fourcc = cv::VideoWriter::fourcc('F', 'M', 'P', '4');
                int constexpr fps = 30;
                //the montage has 3 columns and 1 row, so the cols need to be multiplied by 3
                vwriter.reset(new cv::VideoWriter("out.avi", fourcc, fps, {frame.cols * 3, frame.rows}));
            }
            mt->add_image(frame);
            mt->add_image(hed_out);
            mt->add_image(dexi_out);

            auto const montage = mt->get_montage();
            cv::imshow("out", mt->get_montage());
            vwriter->write(montage);
            cv::waitKey(10);
            mt->clear();
        }
        std::cout<<"hed elapsed time = "<<hed.elapsed()<<", frame count = "<<frame_count
                <<", fps = "<<1000.0f/(hed.elapsed()/frame_count)<<std::endl;
        std::cout<<"dexi elapsed time = "<<dexi.elapsed()<<", frame count = "<<frame_count
                <<", fps = "<<1000.0f/(dexi.elapsed()/frame_count)<<std::endl;
    }else{
        std::cerr<<"cannot open video pedestrian.mp4"<<std::endl;
    }
}

Results of image detection



Results of video detection



Runtime performance on gpu(gtx 1060)

    The following results are based on the video I posted on youtube. The video has 733 frames. From left to right: the original frame, the frame processed by HED, the frame processed by DexiNed.

    HED elapsed time is 43870ms, fps is 16.7085.
    DexiNed elapsed time is 45149ms, fps is 16.2351.

    The crop layer of HED does not support cuda yet; it should become faster after the cuda version of the layer is done.

Source codes


    Located at github.

Tuesday, 9 June 2020

Asynchronous video capture written with opencv and Qt

Before we start


  1. If you are not familiar with QThread, the Qt5 documentation already shows how to use QThread properly; please check it out by googling (the keyword is "QThread doc", or you can open the page by this link), and read this post, it may save you a lot of trouble.
  2. If you are not familiar with threads, I suggest you read the book c++ concurrency in action; chapters 1~4 plus the basic atomics knowledge from this site should be more than enough for most tasks.

Why do we need asynchronous video capture


  1. Performance. VideoCapture can be slow when capturing frames over the rtsp protocol, especially when the frame size is big; the ideal solution is to capture and process the frames in different threads.
  2. Do not freeze the ui. cv::waitKey is a blocking operation; it is a bad idea to use it directly in gui programming.

Dependencies


    opencv4(using 4.3.0 in this tutorial)
    Qt5(using 5.13.2 in this tutorial)
   
    If you do not want to register a new account in order to download Qt5 (they did a great job of pissing off open source communities), you can try this link.


Define interfaces for worker

    In order to make it easier to switch the frame capturer in the future, I created a worker_base class for this purpose.

frame_capture_config.hpp



#ifndef FRAME_CAPTURE_CONFIG_HPP
#define FRAME_CAPTURE_CONFIG_HPP

#include <QString>

namespace frame_capture{

struct frame_capture_config
{
    //True will copy the captured frame, useful if the functors
    //you add work in a different thread
    bool deep_copy_ = false;
    int fps_ = 30;
    QString url_;
};

}

#endif // FRAME_CAPTURE_CONFIG_HPP


frame_worker_base.hpp



#ifndef FRAME_WORKER_BASE_HPP
#define FRAME_WORKER_BASE_HPP

#include <opencv2/core.hpp>

#include <QObject>

#include <functional>

namespace frame_capture{

struct frame_capture_config;

class worker_base : public QObject
{
    Q_OBJECT
public:
    explicit worker_base(QObject *parent = nullptr);

    /**
     * Add listener to process the frame
     * @param functor Process the frame
     * @param key Key of the functor, we need it to remove the functor
     * @return True if able to add the listener, false otherwise
     */
    virtual bool add_image_listener(std::function<void(cv::Mat)> functor, void *key) = 0;

    virtual frame_capture_config get_params() const = 0;
    virtual QString get_url() const = 0;
    virtual bool is_stop() const = 0;
    /**
     * This function will stop the frame capturer, release all functor etc
     */
    virtual void release() = 0;
    /**
     * Remove the listener
     * @param key The key same as add_image_listener when you add the functor
     * @return True if able to remove the listener, false otherwise
     * @warning Remember to call remove_image_listener before the resources of the registered functor
     * are released, else the app may crash.
     */
    virtual bool remove_image_listener(void *key) = 0;
    virtual void set_max_fps(int input) = 0;
    virtual void set_params(frame_capture_config const &config) = 0;
    /**
     * Will start capturing frames from the url currently stored in the worker
     * (set by the constructor config, set_params, or a previous start_url call)
     */
    virtual void start() = 0;
    virtual void start_url(QString const &url) = 0;
    virtual void stop() = 0;

signals:
    void cannot_open(QString const &media_url);    
};

}

#endif // FRAME_WORKER_BASE_HPP


Implement frame capture by opencv


    Compared with other libraries like gstreamer, ffmpeg, libvlc, Qt etc., cv::VideoCapture has the simplest api to capture frames, although the c++ api of this class does not work on android yet (you have to use jni, which makes porting the app to android more troublesome).
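    As a reminder, this is the blocking baseline we are about to wrap into a worker; a minimal sketch, assuming a webcam at index 0.

#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;
    //read blocks until the next frame arrives, this waiting is exactly
    //what we want to move out of the gui thread
    while(cap.read(frame)){
        cv::imshow("frame", frame);
        if(cv::waitKey(30) == 'q'){
            break;
        }
    }
}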


frame_capture_opencv_worker.hpp



#ifndef FRAME_CAPTURED_OPENCV_WORKER_HPP
#define FRAME_CAPTURED_OPENCV_WORKER_HPP

#include "frame_worker_base.hpp"

#include <QObject>

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

#include <atomic>
#include <functional>
#include <map>
#include <mutex>

class QTimer;

namespace frame_capture{

struct frame_capture_config;

class capture_opencv_worker : public worker_base
{
    Q_OBJECT
public:
    explicit capture_opencv_worker(frame_capture_config const &config);
    ~capture_opencv_worker() override;

    bool add_image_listener(std::function<void(cv::Mat)> functor, void *key) override;
    frame_capture_config get_params() const override;
    QString get_url() const override;
    void set_max_fps(int input) override;

    bool is_stop() const override;
    void release() override;
    bool remove_image_listener(void *key) override;
    void set_params(frame_capture_config const &config) override;
    void start() override;
    void start_url(QString const &url) override;
    void stop() override;

private:
    void open_media(QString const &media_url);
    void captured_frame();
    void set_max_fps_non_ts(int input);
    void time_out();

    cv::VideoCapture capture_;
    bool deep_copy_ = false;
    int frame_duration_ = 30;
    std::map<void*, std::function<void(cv::Mat)>> functors_;
    int max_fps_;
    QString media_url_;
    mutable std::mutex mutex_;
    bool stop_;
    QTimer *timer_ = nullptr;
    int webcam_index_ = 0;
};

}

#endif // FRAME_CAPTURED_OPENCV_WORKER_HPP


frame_capture_opencv_worker.cpp


    Please read the comments carefully, they may help you avoid subtle bugs.


#include "frame_capture_opencv_worker.hpp"

#include "frame_capture_config.hpp"

#include <QDebug>
#include <QElapsedTimer>
#include <QThread>

#include <QTimer>

#include <chrono>
#include <thread>

namespace frame_capture{

capture_opencv_worker::capture_opencv_worker(frame_capture_config const &config) :
    worker_base(),
    deep_copy_(config.deep_copy_),
    media_url_(config.url_),
    stop_(true)
{
    set_max_fps(config.fps_);
}

capture_opencv_worker::~capture_opencv_worker()
{
    release();
}

bool capture_opencv_worker::add_image_listener(std::function<void (cv::Mat)> functor, void *key)
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return functors_.insert(std::make_pair(key, std::move(functor))).second;
}

frame_capture_config capture_opencv_worker::get_params() const
{
    std::lock_guard<std::mutex> lock(mutex_);
    frame_capture_config config;
    config.deep_copy_ = deep_copy_;
    config.fps_ = max_fps_;
    config.url_ = media_url_;

    return config;
}

QString capture_opencv_worker::get_url() const
{
    std::lock_guard<std::mutex> lock(mutex_);
    return media_url_;
}

void capture_opencv_worker::set_max_fps(int input)
{
    std::lock_guard<std::mutex> lock(mutex_);
    set_max_fps_non_ts(input);
}

void capture_opencv_worker::open_media(const QString &media_url)
{   
    qDebug()<<__func__<<": "<<media_url;
    bool can_convert_to_int = false;
    if(timer_){
        timer_->stop();
    }
    media_url.toInt(&can_convert_to_int);
    stop_ = true;
    try{
        capture_.release();
        //If you pass an int, opencv will open the webcam if it can
        if(can_convert_to_int){
            capture_.open(media_url.toInt());
        }else{
            capture_.open(media_url.toStdString());
        }
    }catch(std::exception const &ex){
        qDebug()<<__func__<<ex.what();
    }

    if(capture_.isOpened()){
        stop_ = false;        
    }else{
        stop_ = true;
        emit cannot_open(media_url);
    }
}

bool capture_opencv_worker::is_stop() const
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return stop_;
}

void capture_opencv_worker::release()
{
    qDebug()<<__func__<<": delete cam with url = "<<media_url_;
    std::lock_guard<std::mutex> lock(mutex_);
    qDebug()<<__func__<<": enter lock region";
    stop_ = true;
    qDebug()<<__func__<<": clear functor";
    functors_.clear();
    qDebug()<<__func__<<": release capture";
    capture_.release();
    qDebug()<<__func__<<": delete timer later";
}

bool capture_opencv_worker::remove_image_listener(void *key)
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return functors_.erase(key) > 0;
}

void capture_opencv_worker::set_params(const frame_capture_config &config)
{
    std::lock_guard<std::mutex> lock(mutex_);
    deep_copy_ = config.deep_copy_;
    media_url_ = config.url_;
    set_max_fps_non_ts(config.fps_);
}

void capture_opencv_worker::start()
{
    start_url(media_url_);
}

void capture_opencv_worker::start_url(QString const &url)
{    
    stop();
    qDebug()<<__func__<<": stop = "<<stop_<<", url = "<<url;
    std::lock_guard<std::mutex> lock(mutex_);
    open_media(url);
    if(capture_.isOpened()){
        media_url_ = url;
        captured_frame();
    }
}

void capture_opencv_worker::stop()
{        
    std::lock_guard<std::mutex> lock(mutex_);
    stop_ = true;
    qDebug()<<__func__<<": stop the worker = "<<stop_;
}

void capture_opencv_worker::captured_frame()
{        
    qDebug()<<__func__<<": capture_.isOpened()";

    //You must initialize and delete the timer in the same thread
    //as the one the VideoCapture runs in, else you may trigger
    //undefined behavior
    if(!timer_){
        qDebug()<<__func__<<": init timer";
        timer_ = new QTimer;
        timer_->setSingleShot(true);
        connect(timer_, &QTimer::timeout, this, &capture_opencv_worker::time_out);
    }

    qDebug()<<__func__<<": start timer";
    timer_->start();
    qDebug()<<__func__<<": called start timer";
}

void capture_opencv_worker::set_max_fps_non_ts(int input)
{
    max_fps_ = std::max(input, 1);
    frame_duration_ = std::max(1000 / max_fps_, 1);
}

void capture_opencv_worker::time_out()
{        
    QElapsedTimer elapsed;
    elapsed.start();
    std::lock_guard<std::mutex> lock(mutex_);    
    if(!stop_ && timer_){
        capture_.grab();
        cv::Mat frame;
        capture_.retrieve(frame);
        if(!frame.empty()){
            for(auto &iter : functors_){
                iter.second(deep_copy_ ? frame.clone() : frame);
            }
            auto const interval = frame_duration_ - elapsed.elapsed();
            timer_->start(std::max(static_cast<int>(interval), 10));
        }else{
            open_media(media_url_);
            timer_->start();
        }
    }else{
        capture_.release();
        if(timer_){
            //you must delete the timer in the thread where you initialized it
            delete timer_;
            timer_ = nullptr;
        }
    }
}

}


Create a controller associated with the worker


frame_capture_controller.hpp




#ifndef FRAME_CAPTURE_OPENCV_CONTROLLER_HPP
#define FRAME_CAPTURE_OPENCV_CONTROLLER_HPP

#include <QObject>
#include <QThread>
#include <QVariant>

#include <opencv2/core.hpp>

#include <functional>

namespace frame_capture{

struct frame_capture_config;

class worker_base;

class capture_controller : public QObject
{
    Q_OBJECT
public:
    explicit capture_controller(frame_capture_config const &config);
    ~capture_controller() override;

    bool add_image_listener(std::function<void(cv::Mat)> functor, void *key);
    frame_capture_config get_params() const;
    QString get_url() const;
    bool is_stop() const;
    bool remove_image_listener(void *key);
    void set_max_fps(int input);
    void set_params(frame_capture_config const &config);    
    void stop();

signals:
    void cannot_open(QString const &media_url);
    void reach_the_end();

    void start();
    void start_url(QString const &url);

private:
    void init_frame_capture();

    worker_base *frame_capture_;
    QThread thread_;
};

}

#endif // FRAME_CAPTURE_OPENCV_CONTROLLER_HPP


frame_capture_controller.cpp




#include "frame_capture_controller.hpp"

#include "frame_capture_config.hpp"

#include "frame_capture_opencv_worker.hpp"

#include <QDebug>

namespace frame_capture{

void capture_controller::init_frame_capture()
{
    frame_capture_->moveToThread(&thread_);
    connect(&thread_, &QThread::finished, frame_capture_, &QObject::deleteLater);

    connect(frame_capture_, &worker_base::cannot_open, this, &capture_controller::cannot_open);    

    connect(this, &capture_controller::start, frame_capture_, &worker_base::start);
    connect(this, &capture_controller::start_url, frame_capture_, &worker_base::start_url);

    thread_.start();
}

capture_controller::capture_controller(frame_capture_config const &config) :
    QObject(),
    frame_capture_(new capture_opencv_worker(config))
{
    init_frame_capture();
}

capture_controller::~capture_controller()
{        
    qDebug()<<__func__<<": quit";
    //must call release before quit and wait, else the
    //frame capture will fall into an infinite loop
    frame_capture_->release();
    thread_.quit();
    qDebug()<<__func__<<": wait";
    thread_.wait();
    qDebug()<<__func__<<": wait exit";
}

bool capture_controller::add_image_listener(std::function<void (cv::Mat)> functor, void *key)
{
    return frame_capture_->add_image_listener(std::move(functor), key);
}

frame_capture_config capture_controller::get_params() const
{
    return frame_capture_->get_params();
}

QString capture_controller::get_url() const
{
    return frame_capture_->get_url();
}

bool capture_controller::is_stop() const
{
    return frame_capture_->is_stop();
}

bool capture_controller::remove_image_listener(void *key)
{
    return frame_capture_->remove_image_listener(key);
}

void capture_controller::set_max_fps(int input)
{
    frame_capture_->set_max_fps(input);
}

void capture_controller::set_params(const frame_capture_config &config)
{
    frame_capture_->set_params(config);
}

void capture_controller::stop()
{
    frame_capture_->stop();
}

}


Do we need mutex?

    With the current solution, yes, we need it. But we could avoid the mutex if we declared the other apis as signals and connected them to the worker, just like the start and start_url signals of frame_capture_controller do.
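    A minimal sketch of that idea, using set_max_fps as an example; the signal name set_max_fps_request is made up for illustration.

//declared in capture_controller as a signal instead of a plain method
signals:
    void set_max_fps_request(int input);

//connected in init_frame_capture; frame_capture_ lives in thread_, so the
//connection is queued and set_max_fps always runs in the worker thread,
//which removes the need for the mutex in the worker
connect(this, &capture_controller::set_max_fps_request,
        frame_capture_, &worker_base::set_max_fps);

//callers emit the signal instead of calling the worker api directly
emit controller.set_max_fps_request(60);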

How to use it?

    You can find the answer in this file--mainwindow.cpp. For simplicity, I did not put the functor into another thread.
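    A rough usage sketch, assuming the classes above; the real mainwindow.cpp may differ in the details.

#include "frame_capture_config.hpp"
#include "frame_capture_controller.hpp"

//for example in the constructor of the main window
frame_capture::frame_capture_config config;
config.url_ = "0"; //a pure number opens the webcam with that index
config.fps_ = 30;

auto *controller = new frame_capture::capture_controller(config);
controller->add_image_listener([this](cv::Mat frame)
{
    //called from the capture thread, forward the frame to the gui thread
    //(for example through a queued signal) before touching any widget
}, this);
emit controller->start();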

Example






Warning

    Remember to remove the listener from the frame capture before the resources associated with it are deleted.





Wednesday, 27 March 2019

Asynchronous computer vision algorithm

    In the last post I introduced how to create an asynchronous class to capture frames with cv::VideoCapture; today I will show you how to create an asynchronous algorithm which can be called many times without re-spawning a new thread.

Main flow of async_to_gray_algo

    async_to_gray_algo is a small class which converts an image from bgr channels to a gray image in another thread. If you have used any thread pool library before, all of them use similar logic under the hood, just with a more generic, flexible api.
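    The post only shows the member functions, so below is a minimal sketch of the class declaration they imply; the real header may differ.

#include <opencv2/core.hpp>

#include <condition_variable>
#include <mutex>
#include <thread>

class async_to_gray_algo
{
public:
    async_to_gray_algo(cv::Mat &result, std::mutex &result_mutex);
    ~async_to_gray_algo();

    void process(cv::Mat input);

private:
    cv::Mat input_;
    std::mutex mutex_;
    cv::Mat &result_;
    std::mutex &result_mutex_;
    bool stop_;
    std::condition_variable wait_;
    std::thread thread_;
};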

async_to_gray_algo::async_to_gray_algo(cv::Mat &result, std::mutex &result_mutex) :
    result_(result),
    result_mutex_(result_mutex),
    stop_(false)
{
    auto func = [&]()
    {
        //1. In order to reuse the thread, we need to keep it alive,
        //that is why we put it in an infinite for loop
        for(;;){
            unique_lock<mutex> lock(mutex_);
            //2. using a condition_variable instead of sleep(x milliseconds) is more efficient
            wait_.wait(lock, [&]() //wait_ will acquire the lock if condition satisfied
            {
                return stop_ || !input_.empty();
            });
            //3. stop the thread in destructor
            if(stop_){
                return;
            }

            //4. convert and write the results into result_
            //we need result_mutex to synchronize result_, else it may incur
            //a race condition with the main thread.
            {
                lock_guard<mutex> glock(result_mutex);
                cv::cvtColor(input_, result_, COLOR_BGR2GRAY);
            }
            //5. clear input_, else wait_ may wake up and redo the task
            //due to a spurious wake up
            input_.resize(0);
        }
    };
    thread_ = std::thread(func);
}

    After we initialize the thread, all we need to do is call the process api whenever we need to convert an image from bgr channels to a gray image.


void async_to_gray_algo::process(Mat input)
{
    {
        lock_guard<mutex> lock(mutex_);
        input_ = input;
    }
    //the wait condition will reacquire the mutex after it receives the notification
    wait_.notify_one();
}

     If we do not need this class anymore, we can and should stop it in the destructor; following the rules of RAII whenever you can is a best practice that keeps your code clean, robust and (much) easier to maintain (let the machine do the bookkeeping for humans).



async_to_gray_algo::~async_to_gray_algo()
{
    {
        lock_guard<mutex> lock(mutex_);
        stop_ = true;
    }
    wait_.notify_one();
    thread_.join();
}

What is spurious wake up?

    That means the condition_variable may wake up even though no notification (notify_one or notify_all) happened. This is one of the reasons why we should not wait without a condition (another reason is the lost wake up).
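    This is also why the predicate overload of wait used in the constructor is safe; it is equivalent to the loop below, so a spurious wake up simply re-checks the condition and goes back to sleep.

//equivalent to wait_.wait(lock, [&]{ return stop_ || !input_.empty(); });
while(!(stop_ || !input_.empty())){
    wait_.wait(lock);
}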

Do we have a better way to reuse the thread?

    Yes, we do. The easiest solution is to create a generic thread pool; you can check the code of a simple thread pool here. I will show you how to use it in the future.

Better way to pass the variable between different thread?

    As you can see, the way I communicate between the main thread and the other thread is awkward; it would become hell to maintain source code like that as your program grows bigger and bigger. Fortunately, we have a better way to pass variables between different threads with the help of Qt5, through their signal and slot mechanism. Not to mention, Qt5 can make the code much easier to maintain.

Summary

      The source codes of async_opencv_video_capture can be found on github.

Saturday, 23 March 2019

Asynchronous videoCapture of opencv

    Today I would like to introduce how to create an asynchronous videoCapture with opencv and the standard library of c++. Capturing frames from HD video, especially HD video from the internet, can be a time consuming task; it is not a good idea to waste cpu cycles waiting for the frame to arrive. In order to speed up our app, or keep the gui alive, we had better put the video capture part into another thread.

    With the help of the thread facilities added since c++11, making the videoCapture of opencv support cross platform asynchronous read operations becomes a simple task. Let us look at a simple example.


#include <ocv_libs/camera/async_opencv_video_capture.hpp>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

#include <cctype>
#include <iostream>
#include <mutex>

int main(int argc, char *argv[])
{    
    if(argc != 2){
        std::cerr<<"must enter url of media\n";
        return -1;
    }

    std::mutex emutex;
    //create the functor to handle the exception when cv::VideoCapture fail
    //to capture the frame and wait 30 msec between each frame
    long long constexpr wait_msec = 30;
    ocv::camera::async_opencv_video_capture<> cl([&](std::exception const &ex)
    {
        //cerr of c++ is not a thread safe class, so we need to lock the mutex
        std::lock_guard<std::mutex> lock(emutex);
        std::cerr<<"camera exception:"<<ex.what()<<std::endl;

        return true;
    }, wait_msec);
    cl.open_url(argv[1]);

    //add listener to process captured frame
    //the listener could process the task in another thread too,
    //to make things easier to explain, I prefer to process it in
    //the same thread of videoCapture
    cv::Mat img;
    cl.add_listener([&](cv::Mat input)
    {
        std::lock_guard<std::mutex> lock(emutex);
        img = input;
    }, &emutex);

    //execute the task(s)
    cl.run();

    //We must display the captured image in the main thread, not
    //in the listener, because every manipulation related to the gui
    //must be performed in the main thread (also called the gui thread)
    for(int finished = 0; finished != 'q';){
        finished = std::tolower(cv::waitKey(30));
        std::lock_guard<std::mutex> lock(emutex);
        if(!img.empty()){
            cv::imshow("frame", img);
        }
    }
}


Important details of async_opencv_video_capture

1. Create an infinite for loop to read the frame in another thread



void run()
{
    if(thread_){
        //before we start the thread,
        //we need to stop it
        set_stop(true);
        //join waits for the remaining task(s)
        //of the thread to finish
        thread_->join();
        set_stop(false);
    }

    //create a new thread
    create_thread();
}



    void create_thread()
    {
        thread_ = std::make_unique<std::thread>([this]()
        {
            //read the frames in infinite for loop
            for(cv::Mat frame;;){
                std::lock_guard<Mutex> lock(mutex_);
                if(!stop_ && !listeners_.empty()){
                    try{
                        cap_>>frame;
                    }catch(std::exception const &ex){
                        //reopen the camera if an exception is thrown, this may happen
                        //frequently when you receive frames from a network
                        cap_.open(url_);
                        cam_exception_listener_(ex);
                    }

                    if(!frame.empty()){
                        for(auto &val : listeners_){
                            val.second(frame);
                        }
                    }else{
                        if(replay_){
                            cap_.open(url_);
                        }else{
                            break;
                        }
                    }
                    std::this_thread::sleep_for(wait_for_);
                }else{
                    break;
                }
            }
        });
    }

    The listeners_ is a vector which stores the std::function<void(cv::Mat)> callbacks to be called in the infinite loop whenever the frame read by the videoCapture is not empty. The users must handle the exceptions thrown by those functors themselves, else the app will crash.

2. Stop the thread in the destructor



void set_stop(bool val)
{
    std::lock_guard<Mutex> lock(mutex_);
    stop_ = val;
}

void stop()
{
    set_stop(true);
}

template<typename Mutex>
async_opencv_video_capture<Mutex>::~async_opencv_video_capture()
{
    stop();
    thread_->join();
}


    We must stop and join the thread in the destructor, else the thread may never end and the app will freeze.

3. Select mutex type by template

    By default, async_opencv_video_capture uses std::mutex; it is more efficient but may cause a deadlock if you call the api of async_opencv_video_capture from within the listeners. If you want to avoid this deadlock issue, use std::recursive_mutex in place of std::mutex.
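    A small sketch of swapping in the recursive mutex, reusing the constructor arguments of the example above.

//std::recursive_mutex lets the same thread lock again, so a listener can
//safely call back into the api of the capture class
ocv::camera::async_opencv_video_capture<std::recursive_mutex> cl([](std::exception const &ex)
{
    std::cerr<<"camera exception:"<<ex.what()<<std::endl;
    return true;
}, 30);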

Summary

      The source codes of async_opencv_video_capture can be found on github.