Thursday 25 June 2020

Dense extreme inception edge detection with opencv and deep learning

    This tutorial introduces how to perform edge detection with opencv and deep learning. You will learn how to apply the Dense Extreme Inception Network (DexiNed) and Holistically-Nested Edge Detection (HED) to images and videos. If you are interested in HED, I recommend you study a brilliant post from pyimagesearch; here I will explain the main points of how to perform edge detection with these networks using c++ and opencv.


Apply HED with opencv and c++


    This page explains how to do it; you can find the links to the model and prototxt there too. The author registers a new layer, without which opencv cannot generate proper results.
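
    Without repeating the linked post, below is a minimal c++ sketch of such a crop layer. It is ported from the python CropLayer of the official opencv edge detection sample, so treat it as an illustration of the custom layer api rather than the exact code used by the post; register it before calling cv::dnn::readNet.

#include <opencv2/dnn.hpp>
#include <opencv2/dnn/layer.details.hpp> //for CV_DNN_REGISTER_LAYER_CLASS

class crop_layer : public cv::dnn::Layer
{
public:
    explicit crop_layer(cv::dnn::LayerParams const &params) : cv::dnn::Layer(params) {}

    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams &params)
    {
        return cv::Ptr<cv::dnn::Layer>(new crop_layer(params));
    }

    //the output keeps the batch and channels of the first input but takes
    //the spatial size(height, width) of the second input
    bool getMemoryShapes(std::vector<std::vector<int>> const &inputs, int,
                         std::vector<std::vector<int>> &outputs,
                         std::vector<std::vector<int>> &) const CV_OVERRIDE
    {
        auto shape = inputs[0];
        shape[2] = inputs[1][2];
        shape[3] = inputs[1][3];
        outputs.assign(1, shape);
        return false;
    }

    //copy the center crop of the first input into the output
    void forward(cv::InputArrayOfArrays inputs_arr, cv::OutputArrayOfArrays outputs_arr,
                 cv::OutputArrayOfArrays) CV_OVERRIDE
    {
        std::vector<cv::Mat> inputs, outputs;
        inputs_arr.getMatVector(inputs);
        outputs_arr.getMatVector(outputs);
        int const start_y = (inputs[0].size[2] - outputs[0].size[2]) / 2;
        int const start_x = (inputs[0].size[3] - outputs[0].size[3]) / 2;
        std::vector<cv::Range> const ranges{
            cv::Range::all(), cv::Range::all(),
            cv::Range(start_y, start_y + outputs[0].size[2]),
            cv::Range(start_x, start_x + outputs[0].size[3])};
        inputs[0](ranges).copyTo(outputs[0]);
    }
};

//register the layer once before cv::dnn::readNet is called
CV_DNN_REGISTER_LAYER_CLASS(Crop, crop_layer);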

Apply DexiNed with opencv and c++



    You can find an explanation of DexiNed on this page. In order to perform edge detection with DexiNed, we need to convert the model to onnx; I prefer pytorch for this purpose. Why do I not prefer tensorflow? Because, from my experience, converting a tensorflow model into a format opencv can read is much more complicated. tensorflow is feature rich, but I always feel they try very hard to make things unnecessarily complicated; their notoriously bad api design explains this very well.

1. Convert the pytorch model of DexiNed to onnx

  1. Clone the project blogCodes2
  2. Navigate into edges_detection_with_deep_learning
  3. Clone the project DexiNed
  4. Copy edges_detection_with_deep_learning/model.py into DexiNed/DexiNed-Pytorch
  5. Run the script to_onnx.py
    If you do not want to go through the trouble, just download the converted model from here.
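
    Either way, a quick sanity check (my own habit, not part of the DexiNed project) is to confirm that the dnn module can actually read the exported file before writing more code:

#include <opencv2/dnn.hpp>

#include <iostream>

int main()
{
    try{
        cv::dnn::Net net = cv::dnn::readNet("24_model.onnx");
        std::cout<<"model loaded, total layers = "<<net.getLayerNames().size()<<std::endl;
    }catch(cv::Exception const &ex){
        std::cerr<<"cannot load the model: "<<ex.what()<<std::endl;
    }
}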

2. Load and forward images with DexiNed and HED




void switch_to_cuda(cv::dnn::Net &net)
{
    try {
        for(auto const &vpair : cv::dnn::getAvailableBackends()){
            std::cout<<vpair.first<<", "<<vpair.second<<std::endl;
            if(vpair.first == cv::dnn::DNN_BACKEND_CUDA && vpair.second == cv::dnn::DNN_TARGET_CUDA){
                std::cout<<"can switch to cuda"<<std::endl;
                net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
                net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
                break;
            }
        }
    }catch(std::exception const &ex){
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_DEFAULT);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
        throw std::runtime_error(ex.what());
    }
}

std::tuple<cv::Mat, long long> forward_utils(cv::dnn::Net &net, cv::Mat const &input, cv::Size const &blob_size)
{
    using namespace std::chrono;

    //measure duration
    auto const start = high_resolution_clock::now();
    cv::Mat blob = cv::dnn::blobFromImage(input, 1.0, blob_size,
                                          cv::Scalar(104.00698793, 116.66876762, 122.67891434), false, false);
    net.setInput(blob);
    cv::Mat out = net.forward();
    cv::resize(out.reshape(1, blob_size.height), out, input.size());
    //the data type of out is CV_32F(single channel, floating point), so we need to upscale the values
    //and convert it to CV_8U(single channel, uchar)
    out *= 255;
    out.convertTo(out, CV_8U);
    auto const finish = high_resolution_clock::now();
    auto const elapsed = duration_cast<milliseconds>(finish - start).count();
    //convert gray to bgr because we need to create a montage(1 row, 3 columns of images in our case)
    cv::cvtColor(out, out, cv::COLOR_GRAY2BGR);

    return {out, elapsed};
}

class hed_edges_detector
{
public:
    hed_edges_detector(std::string const &weights, std::string const &config) :
        net_(cv::dnn::readNet(config, weights))
    {
        switch_to_cuda(net_);
    }

    long long elapsed() const
    {
        return elapsed_;
    }

    cv::Mat forward(cv::Mat const &input)
    {
        auto result = forward_utils(net_, input, {500, 500});
        elapsed_ += std::get<1>(result);
        return std::get<0>(result);
    }

private:
    long long elapsed_ = 0;
    cv::dnn::Net net_;
};

class dexi_edges_detector
{
public:
    explicit dexi_edges_detector(std::string const &model) :
        net_(cv::dnn::readNet(model))
    {
        switch_to_cuda(net_);
    }

    long long elapsed() const
    {
        return elapsed_;
    }

    cv::Mat forward(cv::Mat const &input)
    {
        auto result = forward_utils(net_, input, {400, 400});
        elapsed_ += std::get<1>(result);
        return std::get<0>(result);
    }

private:
    long long elapsed_ = 0;
    cv::dnn::Net net_;
};


3. Detect edges of an image

   


void test_image(std::string const &mpath)
{
    cv::Mat img = cv::imread("2007_000129.jpg");
    hed_edges_detector hed(mpath + "hed_pretrained_bsds.caffemodel", mpath + "deploy.prototxt");
    auto hed_out = hed.forward(img);

    dexi_edges_detector dexi(mpath + "24_model.onnx");
    auto dexi_out = dexi.forward(img);

    cv::Size const frame_size(img.cols, img.rows);
    int constexpr grid_x = 3;
    int constexpr grid_y = 1;
    ocv::montage mt(frame_size, grid_x, grid_y);
    mt.add_image(img);
    mt.add_image(hed_out);
    mt.add_image(dexi_out);

    cv::imshow("results", mt.get_montage());
    cv::imwrite("results2.jpg", mt.get_montage());
    cv::waitKey();
}

4. Detect edges of a video




void test_video(std::string const &mpath)
{
    cv::VideoCapture cap("pedestrian.mp4");
    if(cap.isOpened()){
        hed_edges_detector hed(mpath + "hed_pretrained_bsds.caffemodel", mpath + "deploy.prototxt");
        dexi_edges_detector dexi(mpath + "24_model.onnx");

        //unique_ptr is a resource manager class(smart pointer) of c++.
        //We allocate memory with the reset(or make_unique) api; after
        //leaving the scope(a scope is surrounded by {}), the memory is released.
        //In c++, the best way to manage a resource is to avoid explicit memory
        //allocation; if you really need it, guard your memory with a smart pointer.
        //I use unique_ptr here because I cannot initialize the objects
        //before I know the frame size of the video.
        std::unique_ptr<ocv::montage> mt;
        std::unique_ptr<cv::VideoWriter> vwriter;

        cv::Mat frame;
        float frame_count = 0;
        while(1){
            cap>>frame;
            if(frame.empty()){
                break;
            }

            ++frame_count;
            cv::resize(frame, frame, {}, 0.5, 0.5);
            auto const hed_out = hed.forward(frame);
            auto const dexi_out = dexi.forward(frame);
            if(!mt){
                //initialize the class which creates the montage.
                //The first argument tells the class the size of each frame
                cv::Size const frame_size(frame.cols, frame.rows);
                int constexpr grid_x = 3;
                int constexpr grid_y = 1;
                mt.reset(new ocv::montage(frame_size, grid_x, grid_y));
            }
            if(!vwriter){
                auto const fourcc = cv::VideoWriter::fourcc('F', 'M', 'P', '4');
                int constexpr fps = 30;
                //the montage has 3 columns and 1 row, so the width needs to be multiplied by 3
                vwriter.reset(new cv::VideoWriter("out.avi", fourcc, fps, {frame.cols * 3, frame.rows}));
            }
            mt->add_image(frame);
            mt->add_image(hed_out);
            mt->add_image(dexi_out);

            auto const montage = mt->get_montage();
            cv::imshow("out", mt->get_montage());
            vwriter->write(montage);
            cv::waitKey(10);
            mt->clear();
        }
        std::cout<<"hed elapsed time = "<<hed.elapsed()<<", frame count = "<<frame_count
                <<", fps = "<<1000.0f/(hed.elapsed()/frame_count)<<std::endl;
        std::cout<<"dexi elapsed time = "<<dexi.elapsed()<<", frame count = "<<frame_count
                <<", fps = "<<1000.0f/(dexi.elapsed()/frame_count)<<std::endl;
    }else{
        std::cerr<<"cannot open video pedestrian.mp4"<<std::endl;
    }
}

Results of image detection


(result montages: original image, HED output, DexiNed output)

Results of video detection



Runtime performance on gpu(gtx 1060)

    The following results are based on the video I posted on youtube. The video has 733 frames. From left to right: the original frame, the frame processed by HED, and the frame processed by DexiNed.

    HED elapsed time is 43870ms, fps is 16.7085 (733 frames / 43.87 seconds).
    DexiNed elapsed time is 45149ms, fps is 16.2351 (733 frames / 45.149 seconds).

    The crop layer of HED does not support cuda yet; HED should become faster once the cuda implementation of that layer is done.

Source code


    Located at github.

Tuesday 9 June 2020

Asynchronous video capture written by opencv and Qt

Before we start


  1. If you are not familiar with QThread, the Qt5 documentation already shows how to use QThread properly; please check it out via google (the keyword is "QThread doc", or you could open the page by this link), and read this post, it may save you a lot of trouble.
  2. If you are not familiar with threads, I suggest you read the book c++ concurrency in action; chapters 1~4 plus the basic atomic knowledge from this site should be more than enough for most tasks.

Why do we need asynchronous video capture


  1. Performance. VideoCapture can be slow when capturing frames over the rtsp protocol, especially when the frame size is big; the ideal solution is to capture the frame and process it in different threads.
  2. Do not freeze the ui. cv::waitKey is a blocking operation; it is a bad idea to use it directly in gui programming.

Dependencies


    opencv4(using 4.3.0 in this tutorial)
    Qt5(using 5.13.2 in this tutorial)
   
    If you do not want to register a new account in order to download Qt5 (they did a great job of pissing off the open source communities), you can try this link.


Define interfaces for worker

    To make it easier to switch to another frame capturer in the future, I create a worker_base class for this purpose.

frame_capture_config.hpp



#ifndef FRAME_CAPTURE_CONFIG_HPP
#define FRAME_CAPTURE_CONFIG_HPP

#include <QString>

namespace frame_capture{

struct frame_capture_config
{
    //True will deep copy the captured frame, useful if the functors
    //you add work in different threads
    bool deep_copy_ = false;
    int fps_ = 30;
    QString url_;
};

}

#endif // FRAME_CAPTURE_CONFIG_HPP
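
    A hypothetical usage of this struct (the values are only an example): capture from webcam 0 at 60 fps, and deep copy the frames because the functors run on other threads.

frame_capture::frame_capture_config config;
config.deep_copy_ = true;
config.fps_ = 60;
//a string convertible to int opens the webcam with that index, see open_media below
config.url_ = "0";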


frame_worker_base.hpp



#ifndef FRAME_WORKER_BASE_HPP
#define FRAME_WORKER_BASE_HPP

#include <opencv2/core.hpp>

#include <QObject>

#include <functional>

namespace frame_capture{

struct frame_capture_config;

class worker_base : public QObject
{
    Q_OBJECT
public:
    explicit worker_base(QObject *parent = nullptr);

    /**
     * Add listener to process the frame
     * @param functor Process the frame
     * @param key Key of the functor, we need it to remove the functor
     * @return True if able to add the listener and vice versa
     */
    virtual bool add_image_listener(std::function<void(cv::Mat)> functor, void *key) = 0;

    virtual frame_capture_config get_params() const = 0;
    virtual QString get_url() const = 0;
    virtual bool is_stop() const = 0;
    /**
     * This function will stop the frame capturer, release all functor etc
     */
    virtual void release() = 0;
    /**
     * Remove the listener
     * @param key The key same as add_image_listener when you add the functor
     * @return True if able to remove the listener and vice versa
     * @warning Remember to call remove_image_listener before the resources of the registered functor
     * are released, else the app may crash.
     */
    virtual bool remove_image_listener(void *key) = 0;
    virtual void set_max_fps(int input) = 0;
    virtual void set_params(frame_capture_config const &config) = 0;
    /**
     * Will start the frame capture with the url set by the start_url api
     */
    virtual void start() = 0;
    virtual void start_url(QString const &url) = 0;
    virtual void stop() = 0;

signals:
    void cannot_open(QString const &media_url);    
};

}

#endif // FRAME_WORKER_BASE_HPP


Implement frame capture by opencv


    Compared with other libraries like gstreamer, ffmpeg, libvlc, Qt etc., cv::VideoCapture has the simplest api for capturing frames, although the c++ api of this class does not work on android yet (you have to use jni, which makes porting the app to android more troublesome).
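
    For reference, this is the naive, blocking way of grabbing frames (a minimal sketch; webcam index 0 and the 'q' hotkey are my own choices). The rest of this post is about getting the same frames without freezing the gui thread.

#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0); //open webcam 0
    cv::Mat frame;
    while(cap.read(frame)){ //read blocks until the next frame arrives
        cv::imshow("frame", frame);
        if(cv::waitKey(30) == 'q'){ //waitKey blocks the thread it runs on
            break;
        }
    }
}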


frame_capture_opencv_worker.hpp



#ifndef FRAME_CAPTURED_OPENCV_WORKER_HPP
#define FRAME_CAPTURED_OPENCV_WORKER_HPP

#include "frame_worker_base.hpp"

#include <QObject>

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>

#include <atomic>
#include <functional>
#include <map>
#include <mutex>

class QTimer;

namespace frame_capture{

struct frame_capture_config;

class capture_opencv_worker : public worker_base
{
    Q_OBJECT
public:
    explicit capture_opencv_worker(frame_capture_config const &config);
    ~capture_opencv_worker() override;

    bool add_image_listener(std::function<void(cv::Mat)> functor, void *key) override;
    frame_capture_config get_params() const override;
    QString get_url() const override;
    void set_max_fps(int input) override;

    bool is_stop() const override;
    void release() override;
    bool remove_image_listener(void *key) override;
    void set_params(frame_capture_config const &config) override;
    void start() override;
    void start_url(QString const &url) override;
    void stop() override;

private:
    void open_media(QString const &media_url);
    void captured_frame();
    void set_max_fps_non_ts(int input);
    void time_out();

    cv::VideoCapture capture_;
    bool deep_copy_ = false;
    int frame_duration_ = 30;
    std::map<void*, std::function<void(cv::Mat)>> functors_;
    int max_fps_;
    QString media_url_;
    mutable std::mutex mutex_;
    bool stop_;
    QTimer *timer_ = nullptr;
    int webcam_index_ = 0;
};

}

#endif // FRAME_CAPTURED_OPENCV_WORKER_HPP


frame_capture_opencv_worker.cpp


    Please read the comments carefully, they may help you avoid subtle bugs.


#include "frame_capture_opencv_worker.hpp"

#include "frame_capture_config.hpp"

#include <QDebug>
#include <QElapsedTimer>
#include <QThread>

#include <QTimer>

#include <chrono>
#include <thread>

namespace frame_capture{

capture_opencv_worker::capture_opencv_worker(frame_capture_config const &config) :
    worker_base(),
    deep_copy_(config.deep_copy_),
    media_url_(config.url_),
    stop_(true)
{
    set_max_fps(config.fps_);
}

capture_opencv_worker::~capture_opencv_worker()
{
    release();
}

bool capture_opencv_worker::add_image_listener(std::function<void (cv::Mat)> functor, void *key)
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return functors_.insert(std::make_pair(key, std::move(functor))).second;
}

frame_capture_config capture_opencv_worker::get_params() const
{
    std::lock_guard<std::mutex> lock(mutex_);
    frame_capture_config config;
    config.deep_copy_ = deep_copy_;
    config.fps_ = max_fps_;
    config.url_ = media_url_;

    return config;
}

QString capture_opencv_worker::get_url() const
{
    std::lock_guard<std::mutex> lock(mutex_);
    return media_url_;
}

void capture_opencv_worker::set_max_fps(int input)
{
    std::lock_guard<std::mutex> lock(mutex_);
    set_max_fps_non_ts(input);
}

void capture_opencv_worker::open_media(const QString &media_url)
{   
    qDebug()<<__func__<<": "<<media_url;
    bool can_convert_to_int = false;
    if(timer_){
        timer_->stop();
    }
    media_url.toInt(&can_convert_to_int);
    stop_ = true;
    try{
        capture_.release();
        //If you pass an int, opencv will open the webcam if it can
        if(can_convert_to_int){
            capture_.open(media_url.toInt());
        }else{
            capture_.open(media_url.toStdString());
        }
    }catch(std::exception const &ex){
        qDebug()<<__func__<<ex.what();
    }

    if(capture_.isOpened()){
        stop_ = false;        
    }else{
        stop_ = true;
        emit cannot_open(media_url);
    }
}

bool capture_opencv_worker::is_stop() const
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return stop_;
}

void capture_opencv_worker::release()
{
    qDebug()<<__func__<<": delete cam with url = "<<media_url_;
    std::lock_guard<std::mutex> lock(mutex_);
    qDebug()<<__func__<<": enter lock region";
    stop_ = true;
    qDebug()<<__func__<<": clear functor";
    functors_.clear();
    qDebug()<<__func__<<": release capture";
    capture_.release();
    qDebug()<<__func__<<": delete timer later";
}

bool capture_opencv_worker::remove_image_listener(void *key)
{    
    std::lock_guard<std::mutex> lock(mutex_);
    return functors_.erase(key) > 0;
}

void capture_opencv_worker::set_params(const frame_capture_config &config)
{
    std::lock_guard<std::mutex> lock(mutex_);
    deep_copy_ = config.deep_copy_;
    media_url_ = config.url_;
    set_max_fps_non_ts(config.fps_);
}

void capture_opencv_worker::start()
{
    start_url(media_url_);
}

void capture_opencv_worker::start_url(QString const &url)
{    
    stop();
    qDebug()<<__func__<<": stop = "<<stop_<<", url = "<<url;
    std::lock_guard<std::mutex> lock(mutex_);
    open_media(url);
    if(capture_.isOpened()){
        media_url_ = url;
        captured_frame();
    }
}

void capture_opencv_worker::stop()
{        
    std::lock_guard<std::mutex> lock(mutex_);
    stop_ = true;
    qDebug()<<__func__<<": stop the worker = "<<stop_;
}

void capture_opencv_worker::captured_frame()
{        
    qDebug()<<__func__<<": capture_.isOpened()";

    //You must initialize and delete the timer in the same thread
    //the VideoCapture is running in, else you may trigger undefined
    //behavior
    if(!timer_){
        qDebug()<<__func__<<": init timer";
        timer_ = new QTimer;
        timer_->setSingleShot(true);
        connect(timer_, &QTimer::timeout, this, &capture_opencv_worker::time_out);
    }

    qDebug()<<__func__<<": start timer";
    timer_->start();
    qDebug()<<__func__<<": called start timer";
}

void capture_opencv_worker::set_max_fps_non_ts(int input)
{
    max_fps_ = std::max(input, 1);
    frame_duration_ = std::max(1000 / max_fps_, 1);
}

void capture_opencv_worker::time_out()
{        
    QElapsedTimer elapsed;
    elapsed.start();
    std::lock_guard<std::mutex> lock(mutex_);    
    if(!stop_ && timer_){
        capture_.grab();
        cv::Mat frame;
        capture_.retrieve(frame);
        if(!frame.empty()){
            for(auto &iter : functors_){
                iter.second(deep_copy_ ? frame.clone() : frame);
            }
            auto const interval = frame_duration_ - elapsed.elapsed();
            timer_->start(std::max(static_cast<int>(interval), 10));
        }else{
            open_media(media_url_);
            timer_->start();
        }
    }else{
        capture_.release();
        if(timer_){
            //you must delete the timer in the thread where you initialized it
            delete timer_;
            timer_ = nullptr;
        }
    }
}

}


Create controller associate with the worker


frame_capture_controller.hpp




#ifndef FRAME_CAPTURE_OPENCV_CONTROLLER_HPP
#define FRAME_CAPTURE_OPENCV_CONTROLLER_HPP

#include <QObject>
#include <QThread>
#include <QVariant>

#include <opencv2/core.hpp>

#include <functional>

namespace frame_capture{

struct frame_capture_config;

class worker_base;

class capture_controller : public QObject
{
    Q_OBJECT
public:
    explicit capture_controller(frame_capture_config const &config);
    ~capture_controller() override;

    bool add_image_listener(std::function<void(cv::Mat)> functor, void *key);
    frame_capture_config get_params() const;
    QString get_url() const;
    bool is_stop() const;
    bool remove_image_listener(void *key);
    void set_max_fps(int input);
    void set_params(frame_capture_config const &config);    
    void stop();

signals:
    void cannot_open(QString const &media_url);
    void reach_the_end();

    void start();
    void start_url(QString const &url);

private:
    void init_frame_capture();

    worker_base *frame_capture_;
    QThread thread_;
};

}

#endif // FRAME_CAPTURE_OPENCV_CONTROLLER_HPP


frame_capture_controller.cpp




#include "frame_capture_controller.hpp"

#include "frame_capture_config.hpp"

#include "frame_capture_opencv_worker.hpp"

#include <QDebug>

namespace frame_capture{

void capture_controller::init_frame_capture()
{
    frame_capture_->moveToThread(&thread_);
    connect(&thread_, &QThread::finished, frame_capture_, &QObject::deleteLater);

    connect(frame_capture_, &worker_base::cannot_open, this, &capture_controller::cannot_open);    

    connect(this, &capture_controller::start, frame_capture_, &worker_base::start);
    connect(this, &capture_controller::start_url, frame_capture_, &worker_base::start_url);

    thread_.start();
}

capture_controller::capture_controller(frame_capture_config const &config) :
    QObject(),
    frame_capture_(new capture_opencv_worker(config))
{
    init_frame_capture();
}

capture_controller::~capture_controller()
{        
    qDebug()<<__func__<<": quit";
    //must call release before quit and wait, else the
    //frame capture will fall into an infinite loop
    frame_capture_->release();
    thread_.quit();
    qDebug()<<__func__<<": wait";
    thread_.wait();
    qDebug()<<__func__<<": wait exit";
}

bool capture_controller::add_image_listener(std::function<void (cv::Mat)> functor, void *key)
{
    return frame_capture_->add_image_listener(std::move(functor), key);
}

frame_capture_config capture_controller::get_params() const
{
    return frame_capture_->get_params();
}

QString capture_controller::get_url() const
{
    return frame_capture_->get_url();
}

bool capture_controller::is_stop() const
{
    return frame_capture_->is_stop();
}

bool capture_controller::remove_image_listener(void *key)
{
    return frame_capture_->remove_image_listener(key);
}

void capture_controller::set_max_fps(int input)
{
    frame_capture_->set_max_fps(input);
}

void capture_controller::set_params(const frame_capture_config &config)
{
    frame_capture_->set_params(config);
}

void capture_controller::stop()
{
    frame_capture_->stop();
}

}


Do we need mutex?

    With the current solution, yes, we need it. But we could avoid the mutex if we declared the other apis as signals and connected them to the worker, just like the start and start_url signals of the capture_controller already do. A sketch of this idea follows.
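
    A sketch for set_max_fps only; the signal name set_max_fps_request is made up for this example, it is not part of the repository.

//1. declare the request as a signal of capture_controller
signals:
    void set_max_fps_request(int input);

//2. connect it to the worker in init_frame_capture(); because the worker
//lives in thread_, the connection is queued and set_max_fps runs on the
//worker's thread, so the worker no longer needs a mutex to guard max_fps_
connect(this, &capture_controller::set_max_fps_request,
        frame_capture_, &worker_base::set_max_fps);

//3. callers emit the signal instead of calling the member function directly
emit controller.set_max_fps_request(60);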

How to use it?

    You can find the answer in this file: mainwindow.cpp. For simplicity, I did not put the functor into another thread. A condensed version is shown below.
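
    A condensed version of what mainwindow.cpp does (the variable names are mine):

frame_capture::frame_capture_config config;
config.url_ = "pedestrian.mp4";
frame_capture::capture_controller controller(config);

int key = 0; //any stable address can serve as the listener key
controller.add_image_listener([](cv::Mat frame)
{
    //this functor runs on the capture thread; keep it short or
    //dispatch the frame to another thread
}, &key);

emit controller.start(); //signals are public in Qt5, so this compiles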

Example






Warning

    Remember to remove the listener from the frame capture before the resources associated with it are deleted, else the app may crash.