Thursday, 25 June 2020

Dense extreme inception edge detection with opencv and deep learning

    This tutorial introduces how to perform edge detection with opencv and deep learning. You will learn how to apply the Dense Extreme Inception Network (DexiNed) and Holistically-Nested Edge Detection (HED) to images and videos. If you are interested in HED, I recommend you study the brilliant post of pyimagesearch; here I explain the main points of how to perform edge detection with these networks using c++ and opencv.


Apply HED by opencv and c++


    This page explains how to do it; you can find the links to the model and prototxt there too. The author registers a custom layer (the Crop layer of HED), since without it opencv cannot generate proper results.
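    Below is a minimal sketch of what such a custom layer and its registration look like, modeled on the custom layer mechanism of opencv's dnn module; the centre-crop behavior follows the public HED samples, so treat this as an illustration rather than the exact code used here.


#include <opencv2/dnn.hpp>
#include <opencv2/dnn/layer.details.hpp>

#include <cstring>
#include <vector>

//HED concatenates feature maps of different spatial sizes; the Crop layer
//crops the first input to the spatial size of the second one
class CropLayer : public cv::dnn::Layer
{
public:
    explicit CropLayer(cv::dnn::LayerParams const &params) : Layer(params) {}

    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams &params)
    {
        return cv::Ptr<cv::dnn::Layer>(new CropLayer(params));
    }

    bool getMemoryShapes(std::vector<std::vector<int>> const &inputs, int,
                         std::vector<std::vector<int>> &outputs,
                         std::vector<std::vector<int>> &) const CV_OVERRIDE
    {
        //output shape: batch and channels of the first input,
        //height and width of the second input
        std::vector<int> const shape{inputs[0][0], inputs[0][1],
                                     inputs[1][2], inputs[1][3]};
        outputs.assign(1, shape);
        return false;
    }

    void forward(cv::InputArrayOfArrays inputs_arr,
                 cv::OutputArrayOfArrays outputs_arr,
                 cv::OutputArrayOfArrays) CV_OVERRIDE
    {
        std::vector<cv::Mat> inputs, outputs;
        inputs_arr.getMatVector(inputs);
        outputs_arr.getMatVector(outputs);
        cv::Mat const &inp = inputs[0];
        cv::Mat &out = outputs[0];
        //centre crop, copied row by row
        int const ystart = (inp.size[2] - out.size[2]) / 2;
        int const xstart = (inp.size[3] - out.size[3]) / 2;
        for(int n = 0; n != out.size[0]; ++n){
            for(int c = 0; c != out.size[1]; ++c){
                for(int y = 0; y != out.size[2]; ++y){
                    std::memcpy(out.ptr<float>(n, c, y),
                                inp.ptr<float>(n, c, y + ystart) + xstart,
                                out.size[3] * sizeof(float));
                }
            }
        }
    }
};

//call this once before cv::dnn::readNet loads the HED prototxt
void register_hed_crop_layer()
{
    CV_DNN_REGISTER_LAYER_CLASS(Crop, CropLayer);
}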

Apply DexiNed by opencv and c++



    You can find the explanation of DexiNed on this page. In order to perform edge detection with DexiNed, we need to convert the model to onnx; I prefer pytorch for this purpose. Why do I not prefer tensorflow? Because converting a tensorflow model to a format opencv can read is much more complicated in my experience. tensorflow is feature rich, but I always feel like they are trying very hard to make things unnecessarily complicated; their notoriously bad api design explains this very well.

1. Convert pytorch model of DexiNed to onnx

  1. Clone the project blogCodes2
  2. Navigate into edges_detection_with_deep_learning
  3. Clone the project DexiNed
  4. Copy the file edges_detection_with_deep_learning/model.py into DexiNed/DexiNed-Pytorch
  5. Run the script to_onnx.py
    If you do not want to go through the trouble, just download the converted model from here.
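    As a quick sanity check, you can make sure opencv is able to load the exported model at all (a minimal sketch; 24_model.onnx is the file name used later in this post):


#include <opencv2/dnn.hpp>

#include <iostream>

int main()
{
    try{
        cv::dnn::Net net = cv::dnn::readNetFromONNX("24_model.onnx");
        std::cout<<"onnx model loaded, empty = "<<net.empty()<<std::endl;
    }catch(cv::Exception const &ex){
        std::cerr<<"cannot load the onnx model: "<<ex.what()<<std::endl;
    }
}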

2. Load and forward image by DexiNed and HED




void switch_to_cuda(cv::dnn::Net &net)
{
    try {
        for(auto const &vpair : cv::dnn::getAvailableBackends()){
            std::cout<<vpair.first<<", "<<vpair.second<<std::endl;
            if(vpair.first == cv::dnn::DNN_BACKEND_CUDA && vpair.second == cv::dnn::DNN_TARGET_CUDA){
                std::cout<<"can switch to cuda"<<std::endl;
                net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
                net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
                break;
            }
        }
    }catch(std::exception const &ex){
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_DEFAULT);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
        throw std::runtime_error(ex.what());
    }
}

std::tuple<cv::Mat, long long> forward_utils(cv::dnn::Net &net, cv::Mat const &input, cv::Size const &blob_size)
{
    using namespace std::chrono;

    //measure duration
    auto const start = high_resolution_clock::now();
    cv::Mat blob = cv::dnn::blobFromImage(input, 1.0, blob_size,
                                          cv::Scalar(104.00698793, 116.66876762, 122.67891434), false, false);
    net.setInput(blob);
    cv::Mat out = net.forward();
    cv::resize(out.reshape(1, blob_size.height), out, input.size());
    //the data type of out is CV_32F(single channel, floating point) so we need to upscale the value and convert
    //it to CV_8U(single channel, uchar)
    out *= 255;
    out.convertTo(out, CV_8U);
    auto const finish = high_resolution_clock::now();
    auto const elapsed = duration_cast<milliseconds>(finish - start).count();
    //convert gray to bgr because we need to create a montage(1 row, 3 columns of images in our case)
    cv::cvtColor(out, out, cv::COLOR_GRAY2BGR);

    return {out, elapsed};
}

class hed_edges_detector
{
public:
    hed_edges_detector(std::string const &weights, std::string const &config) :
        net_(cv::dnn::readNet(config, weights))
    {
        switch_to_cuda(net_);
    }

    long long elapsed() const
    {
        return elapsed_;
    }

    cv::Mat forward(cv::Mat const &input)
    {
        auto result = forward_utils(net_, input, {500, 500});
        elapsed_ += std::get<1>(result);
        return std::get<0>(result);
    }

private:
    long long elapsed_ = 0;
    cv::dnn::Net net_;
};

class dexi_edges_detector
{
public:
    explicit dexi_edges_detector(std::string const &model) :
        net_(cv::dnn::readNet(model))
    {
        switch_to_cuda(net_);
    }

    long long elapsed() const
    {
        return elapsed_;
    }

    cv::Mat forward(cv::Mat const &input)
    {
        auto result = forward_utils(net_, input, {400, 400});
        elapsed_ += std::get<1>(result);
        return std::get<0>(result);
    }

private:
    long long elapsed_ = 0;
    cv::dnn::Net net_;
};


3. Detect edges of image

   


void test_image(std::string const &mpath)
{
    cv::Mat img = cv::imread("2007_000129.jpg");
    hed_edges_detector hed(mpath + "hed_pretrained_bsds.caffemodel", mpath + "deploy.prototxt");
    auto hed_out = hed.forward(img);

    dexi_edges_detector dexi(mpath + "24_model.onnx");
    auto dexi_out = dexi.forward(img);

    cv::Size const frame_size(img.cols, img.rows);
    int constexpr grid_x = 3;
    int constexpr grid_y = 1;
    ocv::montage mt(frame_size, grid_x, grid_y);
    mt.add_image(img);
    mt.add_image(hed_out);
    mt.add_image(dexi_out);

    cv::imshow("results", mt.get_montage());
    cv::imwrite("results2.jpg", mt.get_montage());
    cv::waitKey();
}

4. Detect edges of video




void test_video(std::string const &mpath)
{
    cv::VideoCapture cap("pedestrian.mp4");
    if(cap.isOpened()){
        hed_edges_detector hed(mpath + "hed_pretrained_bsds.caffemodel", mpath + "deploy.prototxt");
        dexi_edges_detector dexi(mpath + "24_model.onnx");

        //unique_ptr is a resource manager class(smart pointer) of c++.
        //We allocate memory by the reset(or make_unique) api; after leaving
        //the scope(the region surrounded by {}), the memory will be released.
        //In c++, the best way to manage a resource is to avoid explicit memory
        //allocation; if you really need to do it, guard your memory with a
        //smart pointer. I use unique_ptr here because I cannot
        //initialize the objects before I know the frame size of the video.
        std::unique_ptr<ocv::montage> mt;
        std::unique_ptr<cv::VideoWriter> vwriter;

        cv::Mat frame;
        float frame_count = 0;
        while(1){
            cap>>frame;
            if(frame.empty()){
                break;
            }

            ++frame_count;
            cv::resize(frame, frame, {}, 0.5, 0.5);
            auto const hed_out = hed.forward(frame);
            auto const dexi_out = dexi.forward(frame);
            if(!mt){
                //initialize the montage builder
                //the first argument tells the class the size of each frame
                cv::Size const frame_size(frame.cols, frame.rows);
                int constexpr grid_x = 3;
                int constexpr grid_y = 1;
                mt.reset(new ocv::montage(frame_size, grid_x, grid_y));
            }
            if(!vwriter){
                auto const fourcc = cv::VideoWriter::fourcc('F', 'M', 'P', '4');
                int constexpr fps = 30;
                //the montage is 3 columns and 1 row, so the output width is 3 times the frame width
                vwriter.reset(new cv::VideoWriter("out.avi", fourcc, fps, {frame.cols * 3, frame.rows}));
            }
            mt->add_image(frame);
            mt->add_image(hed_out);
            mt->add_image(dexi_out);

            auto const montage = mt->get_montage();
            cv::imshow("out", mt->get_montage());
            vwriter->write(montage);
            cv::waitKey(10);
            mt->clear();
        }
        std::cout<<"hed elapsed time = "<<hed.elapsed()<<", frame count = "<<frame_count
                <<", fps = "<<1000.0f/(hed.elapsed()/frame_count)<<std::endl;
        std::cout<<"dexi elapsed time = "<<dexi.elapsed()<<", frame count = "<<frame_count
                <<", fps = "<<1000.0f/(dexi.elapsed()/frame_count)<<std::endl;
    }else{
        std::cerr<<"cannot open video pedestrian.mp4"<<std::endl;
    }
}

Results of image detection


(result images: montage of the original image, the HED output and the DexiNed output)

Results of video detection



Runtime performance on gpu(gtx 1060)

    The following results are based on the video I posted on youtube. The video has 733 frames. From left to right: the original frame, the frame processed by HED, the frame processed by DexiNed.

    HED elapsed time is 43870ms, fps is 16.7085 (733 frames / 43.87 s).
    DexiNed elapsed time is 45149ms, fps is 16.2351 (733 frames / 45.149 s).

    The crop layer of HED does not support cuda yet; HED should become faster once the cuda implementation of that layer is done.

Source codes


    Located at github.

Tuesday, 9 June 2020

Asynchronous video capture written by opencv and Qt

Before we start


  1. If you are not familiar with QThread, the Qt5 documentation already shows us how to use QThread properly; please check it out on google (the keyword is "QThread doc", or you can open the page via this link), and read this post, it may save you a lot of trouble.
  2. If you are not familiar with threads, I suggest you read the book c++ concurrency in action; chapters 1~4 plus the basic atomic knowledge from this site should be more than enough for most tasks.

 Why do we need asynchronous video capture


  1. Performance. VideoCapture can be slow when capturing frames over the rtsp protocol, especially when the frame size is big; the ideal solution is to capture the frame and process it in different threads.
  2. Do not freeze the ui. cv::waitKey is a blocking operation; it is a bad idea to use it directly in gui programming.

Dependencies


    opencv4(using 4.3.0 in this tutorial)
    Qt5(using 5.13.2 in this tutorial)
   
    If you do not want to register a new account in order to download Qt5 (they did a great job of pissing off open source communities), you can try this link.


Define interfaces for worker

    To make it easier to switch the frame capture implementation in the future, I create a worker_base class for this purpose.

frame_capture_config.hpp



#ifndef FRAME_CAPTURE_CONFIG_HPP
#define FRAME_CAPTURE_CONFIG_HPP

#include <QString>

namespace frame_capture{

struct frame_capture_config
{
    //True will deep copy the captured frame, useful if the functors
    //you add work in a different thread
    bool deep_copy_ = false;
    int fps_ = 30;
    QString url_;
};

}

#endif // FRAME_CAPTURE_CONFIG_HPP


frame_worker_base.hpp



#ifndef FRAME_WORKER_BASE_HPP
#define FRAME_WORKER_BASE_HPP

#include <opencv2/core.hpp>

#include <QObject>

#include <functional>

namespace frame_capture{

struct frame_capture_config;

class worker_base : public QObject
{
    Q_OBJECT
public:
    explicit worker_base(QObject *parent = nullptr);

    /**
     * Add listener to process the frame
     * @param functor Process the frame
     * @param key Key of the functor, we need it to remove the functor
     * @return True if able to add the listener and vice versa
     */
    virtual bool add_image_listener(std::function<void(cv::Mat)> functor, void *key) = 0;

    virtual frame_capture_config get_params() const = 0;
    virtual QString get_url() const = 0;
    virtual bool is_stop() const = 0;
    /**
     * This function will stop the frame capturer, release all functor etc
     */
    virtual void release() = 0;
    /**
     * Remove the listener
     * @param key The key same as add_image_listener when you add the functor
     * @return True if able to remove the listener and vice versa
     * @warning Remember to call remove_image_listener before the resources
     * captured by the registered functor are released, else the app may crash.
     */
    virtual bool remove_image_listener(void *key) = 0;
    virtual void set_max_fps(int input) = 0;
    virtual void set_params(frame_capture_config const &config) = 0;
    /**
     * Will start the frame capture with the url set by the start_url api
     */
    virtual void start() = 0;
    virtual void start_url(QString const &url) = 0;
    virtual void stop() = 0;

signals:
    void cannot_open(QString const &media_url);    
};

}

#endif // FRAME_WORKER_BASE_HPP


Implement frame capture by opencv


    Compared with other libraries like gstreamer, ffmpeg, libvlc, Qt etc., cv::VideoCapture has the simplest api to capture frames, although the c++ api of this class does not work on android yet (you have to use jni, which makes porting the app to android more troublesome).
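    For comparison, a complete blocking capture loop is only a few lines (a minimal sketch, reading from webcam 0):


#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0); //or a file path/rtsp url
    cv::Mat frame;
    while(cap.read(frame)){ //read blocks until the next frame arrives
        cv::imshow("frame", frame);
        if(cv::waitKey(30) == 'q'){
            break;
        }
    }
}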


frame_capture_opencv_worker.hpp



#ifndef FRAME_CAPTURED_OPENCV_WORKER_HPP
#define FRAME_CAPTURED_OPENCV_WORKER_HPP

#include "frame_worker_base.hpp"

#include <QObject>

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>

#include <atomic>
#include <functional>
#include <map>
#include <mutex>

class QTimer;

namespace frame_capture{

struct frame_capture_config;

class capture_opencv_worker : public worker_base
{
    Q_OBJECT
public:
    explicit capture_opencv_worker(frame_capture_config const &config);
    ~capture_opencv_worker() override;

    bool add_image_listener(std::function<void(cv::Mat)> functor, void *key) override;
    frame_capture_config get_params() const override;
    QString get_url() const override;
    void set_max_fps(int input) override;

    bool is_stop() const override;
    void release() override;
    bool remove_image_listener(void *key) override;
    void set_params(frame_capture_config const &config) override;
    void start() override;
    void start_url(QString const &url) override;
    void stop() override;

private:
    void open_media(QString const &media_url);
    void captured_frame();
    void set_max_fps_non_ts(int input);
    void time_out();

    cv::VideoCapture capture_;
    bool deep_copy_ = false;
    int frame_duration_ = 30;
    std::map<void*, std::function<void(cv::Mat)>> functors_;
    int max_fps_;
    QString media_url_;
    mutable std::mutex mutex_;
    bool stop_;
    QTimer *timer_ = nullptr;
    int webcam_index_ = 0;
};

}

#endif // FRAME_CAPTURED_OPENCV_WORKER_HPP


frame_capture_opencv_worker.cpp


    Please read the comments carefully, they may help you avoid subtle bugs.


#include "frame_capture_opencv_worker.hpp"

#include "frame_capture_config.hpp"

#include <QDebug>
#include <QElapsedTimer>
#include <QThread>

#include <QTimer>

#include <chrono>
#include <thread>

namespace frame_capture{

capture_opencv_worker::capture_opencv_worker(frame_capture_config const &config) :
    worker_base(),
    deep_copy_(config.deep_copy_),
    media_url_(config.url_),
    stop_(true)
{
    set_max_fps(config.fps_);
}

capture_opencv_worker::~capture_opencv_worker()
{
    release();
}

bool capture_opencv_worker::add_image_listener(std::function<void (cv::Mat)> functor, void *key)
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return functors_.insert(std::make_pair(key, std::move(functor))).second;
}

frame_capture_config capture_opencv_worker::get_params() const
{
    std::lock_guard<std::mutex> lock(mutex_);
    frame_capture_config config;
    config.deep_copy_ = deep_copy_;
    config.fps_ = max_fps_;
    config.url_ = media_url_;

    return config;
}

QString capture_opencv_worker::get_url() const
{
    std::lock_guard<std::mutex> lock(mutex_);
    return media_url_;
}

void capture_opencv_worker::set_max_fps(int input)
{
    std::lock_guard<std::mutex> lock(mutex_);
    set_max_fps_non_ts(input);
}

void capture_opencv_worker::open_media(const QString &media_url)
{   
    qDebug()<<__func__<<": "<<media_url;
    bool can_convert_to_int = false;
    if(timer_){
        timer_->stop();
    }
    media_url.toInt(&can_convert_to_int);
    stop_ = true;
    try{
        capture_.release();
        //If you pass in an int, opencv will open the webcam if it can
        if(can_convert_to_int){
            capture_.open(media_url.toInt());
        }else{
            capture_.open(media_url.toStdString());
        }
    }catch(std::exception const &ex){
        qDebug()<<__func__<<ex.what();
    }

    if(capture_.isOpened()){
        stop_ = false;        
    }else{
        stop_ = true;
        emit cannot_open(media_url);
    }
}

bool capture_opencv_worker::is_stop() const
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return stop_;
}

void capture_opencv_worker::release()
{
    qDebug()<<__func__<<": delete cam with url = "<<media_url_;
    std::lock_guard<std::mutex> lock(mutex_);
    qDebug()<<__func__<<": enter lock region";
    stop_ = true;
    qDebug()<<__func__<<": clear functor";
    functors_.clear();
    qDebug()<<__func__<<": release capture";
    capture_.release();
    qDebug()<<__func__<<": delete timer later";
}

bool capture_opencv_worker::remove_image_listener(void *key)
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return functors_.erase(key) > 0;
}

void capture_opencv_worker::set_params(const frame_capture_config &config)
{
    std::lock_guard<std::mutex> lock(mutex_);
    deep_copy_ = config.deep_copy_;
    media_url_ = config.url_;
    set_max_fps_non_ts(config.fps_);
}

void capture_opencv_worker::start()
{
    start_url(media_url_);
}

void capture_opencv_worker::start_url(QString const &url)
{    
    stop();
    qDebug()<<__func__<<": stop = "<<stop_<<", url = "<<url;
    std::lock_guard<std::mutex> lock(mutex_);
    open_media(url);
    if(capture_.isOpened()){
        media_url_ = url;
        captured_frame();
    }
}

void capture_opencv_worker::stop()
{        
    std::lock_guard<std::mutex> lock(mutex_);
    stop_ = true;
    qDebug()<<__func__<<": stop the worker = "<<stop_;
}

void capture_opencv_worker::captured_frame()
{        
    qDebug()<<__func__<<": capture_.isOpened()";

    //You must initialize and delete the timer in the same thread
    //the VideoCapture runs in, else you may trigger undefined
    //behavior
    if(!timer_){
        qDebug()<<__func__<<": init timer";
        timer_ = new QTimer;
        timer_->setSingleShot(true);
        connect(timer_, &QTimer::timeout, this, &capture_opencv_worker::time_out);
    }

    qDebug()<<__func__<<": start timer";
    timer_->start();
    qDebug()<<__func__<<": called start timer";
}

void capture_opencv_worker::set_max_fps_non_ts(int input)
{
    max_fps_ = std::max(input, 1);
    frame_duration_ = std::max(1000 / max_fps_, 1);
}

void capture_opencv_worker::time_out()
{        
    QElapsedTimer elapsed;
    elapsed.start();
    std::lock_guard<std::mutex> lock(mutex_);    
    if(!stop_ && timer_){
        capture_.grab();
        cv::Mat frame;
        capture_.retrieve(frame);
        if(!frame.empty()){
            for(auto &iter : functors_){
                iter.second(deep_copy_ ? frame.clone() : frame);
            }
            auto const interval = frame_duration_ - elapsed.elapsed();
            timer_->start(std::max(static_cast<int>(interval), 10));
        }else{
            open_media(media_url_);
            timer_->start();
        }
    }else{
        capture_.release();
        if(timer_){
            //you must delete the timer in the thread where you initialize it
            delete timer_;
            timer_ = nullptr;
        }
    }
}

}


Create controller associate with the worker


frame_capture_controller.hpp




#ifndef FRAME_CAPTURE_OPENCV_CONTROLLER_HPP
#define FRAME_CAPTURE_OPENCV_CONTROLLER_HPP

#include <QObject>
#include <QThread>
#include <QVariant>

#include <opencv2/core.hpp>

#include <functional>

namespace frame_capture{

struct frame_capture_config;

class worker_base;

class capture_controller : public QObject
{
    Q_OBJECT
public:
    explicit capture_controller(frame_capture_config const &config);
    ~capture_controller() override;

    bool add_image_listener(std::function<void(cv::Mat)> functor, void *key);
    frame_capture_config get_params() const;
    QString get_url() const;
    bool is_stop() const;
    bool remove_image_listener(void *key);
    void set_max_fps(int input);
    void set_params(frame_capture_config const &config);    
    void stop();

signals:
    void cannot_open(QString const &media_url);
    void reach_the_end();

    void start();
    void start_url(QString const &url);

private:
    void init_frame_capture();

    worker_base *frame_capture_;
    QThread thread_;
};

}

#endif // FRAME_CAPTURE_OPENCV_CONTROLLER_HPP


frame_capture_controller.cpp




#include "frame_capture_controller.hpp"

#include "frame_capture_config.hpp"

#include "frame_capture_opencv_worker.hpp"

#include <QDebug>

namespace frame_capture{

void capture_controller::init_frame_capture()
{
    frame_capture_->moveToThread(&thread_);
    connect(&thread_, &QThread::finished, frame_capture_, &QObject::deleteLater);

    connect(frame_capture_, &worker_base::cannot_open, this, &capture_controller::cannot_open);    

    connect(this, &capture_controller::start, frame_capture_, &worker_base::start);
    connect(this, &capture_controller::start_url, frame_capture_, &worker_base::start_url);

    thread_.start();
}

capture_controller::capture_controller(frame_capture_config const &config) :
    QObject(),
    frame_capture_(new capture_opencv_worker(config))
{
    init_frame_capture();
}

capture_controller::~capture_controller()
{        
    qDebug()<<__func__<<": quit";
    //must call release before quit and wait, else the
    //frame capture will fall into an infinite loop
    frame_capture_->release();
    thread_.quit();
    qDebug()<<__func__<<": wait";
    thread_.wait();
    qDebug()<<__func__<<": wait exit";
}

bool capture_controller::add_image_listener(std::function<void (cv::Mat)> functor, void *key)
{
    return frame_capture_->add_image_listener(std::move(functor), key);
}

frame_capture_config capture_controller::get_params() const
{
    return frame_capture_->get_params();
}

QString capture_controller::get_url() const
{
    return frame_capture_->get_url();
}

bool capture_controller::is_stop() const
{
    return frame_capture_->is_stop();
}

bool capture_controller::remove_image_listener(void *key)
{
    return frame_capture_->remove_image_listener(key);
}

void capture_controller::set_max_fps(int input)
{
    frame_capture_->set_max_fps(input);
}

void capture_controller::set_params(const frame_capture_config &config)
{
    frame_capture_->set_params(config);
}

void capture_controller::stop()
{
    frame_capture_->stop();
}

}


Do we need mutex?

    With the current solution, yes, we need it. But we could avoid the mutex if we declared the other apis as signals and connected them to the worker, just like the start and start_url signals of frame_capture_controller do.
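    A sketch of that alternative (hypothetical, not code from the repository): the mutating apis become signals, and the connections turn every call into a queued call executed in the worker thread, so the worker no longer needs the mutex.


//hypothetical reworked controller
class capture_controller_nosync : public QObject
{
    Q_OBJECT
public:
    explicit capture_controller_nosync(worker_base *worker)
    {
        worker->moveToThread(&thread_);
        //the worker lives in thread_, so these become queued connections;
        //frame_capture_config must be registered with qRegisterMetaType
        //before the first emit
        connect(this, &capture_controller_nosync::set_max_fps,
                worker, &worker_base::set_max_fps);
        connect(this, &capture_controller_nosync::set_params,
                worker, &worker_base::set_params);
        thread_.start();
    }

signals:
    //emitting these from the gui thread runs the worker functions
    //in the worker thread, one call at a time
    void set_max_fps(int input);
    void set_params(frame_capture_config const &config);

private:
    QThread thread_;
};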

How to use it?

    You can find the answer in this file, mainwindow.cpp. For simplicity, I did not put the functor into another thread.
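    A condensed usage sketch (the names follow the headers above; the listener key can be any stable pointer):


frame_capture::frame_capture_config config;
config.url_ = "0";         //"0" opens webcam 0; a file path or rtsp url works too
config.fps_ = 30;
config.deep_copy_ = false; //the functor below runs in the capture thread

frame_capture::capture_controller controller(config);
controller.add_image_listener([](cv::Mat frame)
{
    //process the frame; keep it light, it blocks the capture loop
}, &controller);
emit controller.start(); //queued into the worker thread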

Example






Warning

    Remember to remove the listener from the frame capture before the resources associated with it are deleted.





Wednesday, 27 March 2019

Asynchronous computer vision algorithm

    In the last post I introduced how to create an asynchronous class to capture frames with cv::VideoCapture; today I will show you how to create an asynchronous algorithm which can be called many times without re-spawning a new thread.

  Main flow of async_to_gray_algo 

    async_to_gray_algo is a small class which converts an image from bgr channels to a gray image in another thread. If you have used any thread pool library before, they all use similar logic under the hood, just with a more generic, flexible api.

async_to_gray_algo::async_to_gray_algo(cv::Mat &result, std::mutex &result_mutex) :
    result_(result),
    result_mutex_(result_mutex),
    stop_(false)
{
    auto func = [&]()
    {
        //1. In order to reuse the thread, we need to keep it alive,
        //that is why we put it in an infinite for loop
        for(;;){
            unique_lock<mutex> lock(mutex_);
            //2. using a condition_variable instead of sleep(x milliseconds) is more efficient
            wait_.wait(lock, [&]() //wait_ will acquire the lock if condition satisfied
            {
                return stop_ || !input_.empty();
            });
            //3. stop the thread in destructor
            if(stop_){
                return;
            }

            //4. convert and write the results into result_
            //we need result_mutex_ to synchronize result_, else it may incur
            //a race condition with the main thread. Note that we lock through
            //the member result_mutex_: the lambda outlives this constructor,
            //so capturing the constructor parameter by reference would dangle
            {
                lock_guard<mutex> glock(result_mutex_);
                cv::cvtColor(input_, result_, COLOR_BGR2GRAY);
            }
            //5. clear input_, else the wait may pass its predicate again
            //after a spurious wake up and redo the task
            input_.resize(0);
        }
    };
    thread_ = std::thread(func);
}

    After we initialize the thread, all we need to do is call the process api whenever we need to convert an image from bgr channels to gray.


void async_to_gray_algo::process(Mat input)
{
    {
        lock_guard<mutex> lock(mutex_);
        input_ = input;
    }
    //the wait condition will acquire the mutex after it receives the notification
    wait_.notify_one();
}

     If we do not need this class anymore, we can and should stop it in the destructor. Always following the rule of RAII when you can is a best practice that keeps your code clean, robust and (much) easier to maintain (let the machine do the bookkeeping for humans).



async_to_gray_algo::~async_to_gray_algo()
{
    {
        lock_guard<mutex> lock(mutex_);
        stop_ = true;
    }
    wait_.notify_one();
    thread_.join();
}

What is spurious wake up?

    That means the condition_variable may wake up even though no notification (notify_one or notify_all) happened. This is one of the reasons why we should not wait without a predicate (another reason is the lost wake-up problem).
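    The predicate overload used in async_to_gray_algo is shorthand for a loop that keeps sleeping through spurious wake ups (same member names as above):


//equivalent to: wait_.wait(lock, [&]{ return stop_ || !input_.empty(); });
while(!(stop_ || !input_.empty())){
    wait_.wait(lock); //may return spuriously, so the condition is re-checked
}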

Do we have a better way to reuse the thread?

    Yes, we have. The easiest solution is to create a generic thread pool; you can check the code of a simple thread pool here. I will show you how to use it in the future.
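    A rough sketch of how the same job could look when submitted to a generic pool (the pool api here is assumed, not the one of the linked repository):


//hypothetical pool api: enqueue runs the functor on one of the pool threads
//pool.enqueue([&result, &result_mutex, input]
//{
//    cv::Mat gray;
//    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
//    std::lock_guard<std::mutex> lock(result_mutex);
//    result = gray;
//});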

A better way to pass variables between different threads?

    As you can see, the way I communicate between the main thread and the other thread is awkward; maintaining source code like that becomes hell as your program grows bigger and bigger. Fortunately, we have a better way to pass variables between threads with the help of Qt5 and its signal and slot mechanism. Not to mention, Qt5 makes the code much easier to maintain.
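    One wrinkle when you go that route (a sketch): Qt only delivers custom types like cv::Mat through queued, cross thread connections after the type has been declared and registered as a metatype.


#include <opencv2/core.hpp>

#include <QObject>

Q_DECLARE_METATYPE(cv::Mat)

//hypothetical worker which emits the converted image
class to_gray_worker : public QObject
{
    Q_OBJECT
signals:
    //clone the Mat before emitting if the sender keeps reusing the buffer
    void image_ready(cv::Mat result);
};

//call this once(e.g. at the top of main) before the first cross thread emit
//qRegisterMetaType<cv::Mat>();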

Summary

      The source code of async_opencv_video_capture can be found on github.

Contact me

    If you need someone to help you develop computer vision/Qt apps, please contact me on upwork.

Saturday, 23 March 2019

Asynchronous videoCapture of opencv

    Today I would like to introduce how to create an asynchronous videoCapture with opencv and the standard library of c++. Capturing frames from HD video, especially HD video from the internet, can be a time consuming task; it is not a good idea to waste cpu cycles waiting for the frame to arrive. In order to speed up our app, or keep the gui responsive, we had better put the video capture part into another thread.

    With the help of the thread facilities added since c++11, making the videoCapture of opencv support cross platform asynchronous reads becomes a simple task. Let us look at a simple example.


#include <ocv_libs/camera/async_opencv_video_capture.hpp>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

#include <cctype>
#include <iostream>
#include <mutex>

int main(int argc, char *argv[])
{    
    if(argc != 2){
        std::cerr<<"must enter url of media\n";
        return -1;
    }

    std::mutex emutex;
    //create the functor to handle the exception thrown when cv::VideoCapture
    //fails to capture the frame, and wait 30 msec between each frame
    long long constexpr wait_msec = 30;
    ocv::camera::async_opencv_video_capture<> cl([&](std::exception const &ex)
    {
        //writes to std::cerr from different threads may interleave, so we lock the mutex
        std::lock_guard<std::mutex> lock(emutex);
        std::cerr<<"camera exception:"<<ex.what()<<std::endl;

        return true;
    }, wait_msec);
    cl.open_url(argv[1]);

    //add listener to process captured frame
    //the listener could process the task in another thread too;
    //to make things easier to explain, I process it in
    //the same thread as the videoCapture
    cv::Mat img;
    cl.add_listener([&](cv::Mat input)
    {
        std::lock_guard<std::mutex> lock(emutex);
        img = input;
    }, &emutex);

    //execute the task(s)
    cl.run();

    //We must display the captured image in the main thread, not
    //in the listener, because every manipulation related to the gui
    //must be performed in the main thread(also called the gui thread)
    for(int finished = 0; finished != 'q';){
        finished = std::tolower(cv::waitKey(30));
        std::lock_guard<std::mutex> lock(emutex);
        if(!img.empty()){
            cv::imshow("frame", img);
        }
    }
}


Important details of async_opencv_video_capture

1. Create an infinite for loop to read the frame in another thread



void run()
{
    if(thread_){
        //if a thread is already running,
        //we need to stop it first
        set_stop(true);
        //join blocks until the task(s)
        //of the thread are done
        thread_->join();
        set_stop(false);
    }

    //create a new thread
    create_thread();
}



    void create_thread()
    {
        thread_ = std::make_unique<std::thread>([this]()
        {
            //read the frames in infinite for loop
            for(cv::Mat frame;;){
                std::lock_guard<Mutex> lock(mutex_);
                if(!stop_ && !listeners_.empty()){
                    try{
                        cap_>>frame;
                    }catch(std::exception const &ex){
                        //reopen the camera if an exception is thrown; this may happen
                        //frequently when you receive frames from a network
                        cap_.open(url_);
                        cam_exception_listener_(ex);
                    }

                    if(!frame.empty()){
                        for(auto &val : listeners_){
                            val.second(frame);
                        }
                    }else{
                        if(replay_){
                            cap_.open(url_);
                        }else{
                            break;
                        }
                    }
                    std::this_thread::sleep_for(wait_for_);
                }else{
                    break;
                }
            }
        });
    }

    listeners_ is a vector which stores the std::function<void(cv::Mat)> callbacks to call in the infinite loop when the frame read by the videoCapture is not empty. Users must handle exceptions thrown by those functors themselves, else the app will crash.
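    An assumed sketch of the shape of add_listener (the real implementation lives in the github repository linked in the summary):


template<typename Mutex>
void async_opencv_video_capture<Mutex>::add_listener(
        std::function<void(cv::Mat)> listener, void *key)
{
    //the key is stored beside the functor so the listener can be removed later
    std::lock_guard<Mutex> lock(mutex_);
    listeners_.emplace_back(key, std::move(listener));
}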

2. Stop the thread in the destructor



void set_stop(bool val)
{
    std::lock_guard<Mutex> lock(mutex_);
    stop_ = val;
}

void stop()
{
    set_stop(true);
}

template<typename Mutex>
async_opencv_video_capture<Mutex>::~async_opencv_video_capture()
{
    stop();
    thread_->join();
}


    We must stop and join the thread in the destructor, else the thread may never end and cause the app to freeze.

3. Select mutex type by template

    By default, async_opencv_video_capture uses std::mutex; it is more efficient but may cause a dead lock if you call the api of async_opencv_video_capture from inside the listeners. If you want to avoid this dead lock issue, use std::recursive_mutex in place of std::mutex.
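    The assumed shape of the class template (this is what lets the earlier example write async_opencv_video_capture<>):


template<typename Mutex = std::mutex>
class async_opencv_video_capture
{
    //every public api locks mutex_, so a listener which calls back into the
    //api deadlocks with std::mutex but not with std::recursive_mutex
    Mutex mutex_;
    //...
};

//when listeners call back into the capture api, instantiate it like this:
//ocv::camera::async_opencv_video_capture<std::recursive_mutex> cl{...};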

Summary

      The source code of async_opencv_video_capture can be found on github.

Contact me

    If you need someone to help you develop computer vision/Qt apps, please contact me on upwork.

Saturday, 9 March 2019

Build mxnet 1.3.1 on windows

    If you were like me and tried to build mxnet 1.3.1 on windows, you may have suffered a lot of pain, since mxnet does not have decent support on windows; apparently the developers of mxnet do not perform enough tests (maybe none) on windows before they release the stable version. Despite all of the trouble mxnet brings, it is still a nice deep learning tool, which is why I still prefer to work with it.

    I believe one of the best ways to make an open source project better is to contribute something back to it; that is why I would like to write down how to build mxnet 1.3.1 on windows, step by step.

1. Do not build mxnet on windows with intel mkl

    Do not do this unless you are asking for trouble; please check the details on stackoverflow and issue 14343.

2. Build openBLAS with native msvc ABI


    The openBLAS posted here does not work with vc2015 anymore (if you have updated your vc2015); the abi is not compatible with msvc. The easiest way to solve this issue is to build openBLAS by yourself. The steps are:

a. Clone openBLAS of xianyi from github
b. Compile openBLAS as the instructions here show. Do not install Anaconda and miniconda together; just pick one of them. If you do not know where vcvars64.bat is on your pc, I suggest you use Everything to find the path.
c. Copy the files (cblas.h, f77blas.h) from the generated folder into the build folder.

3. Clone mxnet fork by me


git clone --recursive https://github.com/stereomatchingkiss/incubator-mxnet
cd mxnet 
git checkout 1.3.1_win_compile_fix
 
    This branch fixes some type mismatch errors.
 

4. Comment out codes in shuffle_op.cu

     This file is under the folder "mxnet\src\operator\random"; there is a function ShuffleForwardGPU. Comment out its implementation, else there will be a lot of compile time errors (no suitable user-defined conversion from "mshadow::Tensor<mxnet::gpu, 1, mxnet::index_t>" to "const mshadow::Tensor<mshadow::gpu, 1, unsigned int>" exists).
 
     I guess this function would not be called when doing inference tasks; after all, who would like to make their inference results unpredictable? If you are like me and only want to use cpp_package for inference, you should be safe to comment out the code.

5. Open cmake


  Open cmake and select the 64-bit msvc generator.

6. Configuration




   The most important notes are

1. Do not use anything related to intel MKL.

2. Do not build cpp_package the first time

    Without mkl, mxnet cannot exploit the full power of the cpu, but with it your app cannot run at all; depending on how you build it, your app may throw the error "Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll." or "Check failed: MXNDArrayWaitToRead(blob_ptr_->handle_) == 0 (-1 vs. 0)".

    If you do not need cuda, uncheck the USE_CUDA and USE_CUDNN options. After that, click Configure->uncheck BUILD_TESTING->click Configure->click Generate.

7. Build mxnet without cpp_package


    a. Open your ALL_BUILD.vcxproj
    b. Navigate to the project "mxnet"
    c. Right click your mouse, select "Properties"
    d. Select Linker->Input
    e. Link to flangmain.lib, flangrti.lib, flang.lib, ompstub.lib. For example, my paths are

    C:\Users\yyyy\Anaconda3\pkgs\flang-5.0.0-he025d50_20180525\Library\lib\flangmain.lib  
    C:\Users\yyyy\Anaconda3\pkgs\flang-5.0.0-he025d50_20180525\Library\lib\flangrti.lib
    C:\Users\yyyy\Anaconda3\pkgs\flang-5.0.0-he025d50_20180525\Library\lib\flang.lib
    C:\Users\yyyy\Anaconda3\pkgs\flang-5.0.0-he025d50_20180525\Library\lib\ompstub.lib

    If you do not know where they are, use Everything to find the path.

    f. Navigate to the project "ALL_Build"
    g. Right click your mouse, click build.

8. Configure cmake to build with cpp_package


    Now we can build mxnet with cpp_package; let us go to cmake again and change some settings.

a. (Optional) Change your install path, else you may not be able to install (ex: change to C:/Users/yyyy/programs/Qt/3rdLibs/mxnet/build_gpu_1_3_1_temp/install).

b. Make sure you have set the PATH of python. If you are building the 32/64 bit version of mxnet, you need a 32/64 bit python, else you will not be able to generate op.h. I suggest you use Rapid Environment Editor to manage your path on windows.


If your vc complains that it cannot find the python exe, reopen your vc.

c. Check USE_CPP_PACKAGE->uncheck BUILD_TESTING->configure->generate

d. Remove the example projects, since they will hinder the build process; those projects are alexnet, charRNN, googleNet, inception_bn, lenet, lenet_with_mxdataiter, mlp, mlp_cpu, mlp_gpu, resnet.

e. Go to your build/Release folder and copy libmxnet.dll into any folder windows can find (a path listed in Path); let us call that path global_path.

f. Open your Developer Command Prompt (mine is the developer command prompt for vs2015); let us call it DCP.

g. Navigate your DCP to global_path.

h. Enter "dumpbin /dependents libmxnet.dll", this command will show you the dependencies of this
dll. In my case, it show

flangrti.dll
flang.dll
ompstub.dll
cudnn64_7.dll
cublas64_92.dll
cufft64_92.dll
cusolver64_92.dll
curand64_92.dll
nvrtc64_92.dll
nvcuda.dll
KERNEL32.dll
VCOMP140.DLL

We only need to copy flangrti.dll, flang.dll and ompstub.dll into global_path in order to generate op.h, because the other dlls already exist in the PATH. Again, please use Everything to find the path.

i. Your mxnet project needs to link to flangmain.lib, flangrti.lib, flang.lib and ompstub.lib again, since regenerating the project cleared them.

j. Navigate to the project "ALL_Build"   

k. Right click your mouse, click build.

l. Navigate to the project "INSTALL"

m. Right click your mouse, click build.

n. Copy cpp-package\include\mxnet-cpp into build/install/include

o. Copy mxnet\3rdparty\tvm\nnvm\include\nnvm into build/install/include

9. Add mx_float for scale, int for num_filter

    In the op.h generated by this solution, two parameters lack a type declaration; you need to add the types yourself: mx_float for scale, int for num_filter.
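    A hypothetical illustration of the kind of edit needed (the real function names and signatures in op.h differ):


//before: the generated op.h leaves the two parameters without a type
//Symbol SomeOp(..., scale = 1, ..., num_filter, ...);
//after: add the types by hand
//Symbol SomeOp(..., mx_float scale = 1, ..., int num_filter, ...);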

Conclusion

    Congratulations, you have now built mxnet successfully. To tell you the truth, this was not a pleasant journey; there are too many bugs/issues when you try to build mxnet 1.3.1 on windows (1.4.0 has even more bugs on windows when you try to build it). Many of these bugs would have been found before the major release if they had tried to build mxnet on windows. I believe windows and cpp_package are not their main concern yet, so let us hope that

a. Someday they can put more love into windows and cpp_package. Windows still dominates the desktop/laptop market, and cpp_package is a much better choice than python if you want to do edge deployment.

b. They adopt a commit system like opencv (whenever you commit your code, opencv builds it on every single platform they support); this could prevent a lot of bugs. The later you adopt it, the more cost you pay for cross-platform support.

c. Let us cross our fingers and hope they can fix all of these bugs before the next version release.