OpenCV2 provides us with two convenient algorithms to search for the highest probability region: meanShift and CamShift. According to wikipedia, mean-shift starts from an initial window location, finds the centroid of the data inside the window, re-centers the window on that centroid, and repeats the procedure until the window center converges to a stable point (I haven't implemented it by myself yet, so I can't say I fully understand the algorithm, but this should be the concept of mean-shift). CamShift is an improved version of mean-shift; it can change the size and the orientation of the search window, while mean-shift cannot.
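To make the centroid iteration a bit more concrete, here is a minimal sketch of the idea on a single channel probability map. This is only an illustration of the concept, not the OpenCV implementation; the function name mean_shift_sketch is made up.

#include <opencv2/imgproc/imgproc.hpp>

#include <algorithm>

//a minimal sketch of the mean-shift idea on a single channel probability map.
//the window size stays fixed, which is exactly what CamShift improves on.
cv::Rect mean_shift_sketch(cv::Mat const &prob, cv::Rect window, int max_iter = 10)
{
    for(int i = 0; i != max_iter; ++i){
        //weighted centroid of the probabilities inside the current window
        cv::Moments const m = cv::moments(prob(window));
        if(m.m00 <= 0){
            break;
        }
        int const dx = static_cast<int>(m.m10 / m.m00) - window.width / 2;
        int const dy = static_cast<int>(m.m01 / m.m00) - window.height / 2;
        //move the window center onto the centroid, keep the window inside the map
        window.x = std::min(std::max(window.x + dx, 0), prob.cols - window.width);
        window.y = std::min(std::max(window.y + dy, 0), prob.rows - window.height);
        if(dx == 0 && dy == 0){
            break; //the center converged to a stable point
        }
    }
    return window;
}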
Graph_00 (original image)
Graph_01 (target)
Assume we want to find the region selected in Graph_02; we can use back projection and mean-shift to achieve our purpose.
Graph_02
#include <iostream>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

#include <imageAlgorithm.hpp> //for OCV::histProject()

int main()
{
    cv::Mat input = cv::imread("/Users/Qt/program/blogsCodes/pic/"
                               "monkey00.png");
    cv::Mat target = cv::imread("/Users/Qt/program/blogsCodes/pic/"
                                "monkey02.png");
    if(!input.empty() && !target.empty()){
        //initial search window(same as the region of interest)
        cv::Rect const roi_initial_location(240, 56, 86, 72);
        cv::Mat roi = input(roi_initial_location).clone();

        //back projection histogram based on roi and target
        //(the probability of roi occurring on target)
        cv::Mat probability_map =
            OCV::histProject().get_projection_map_hue(target, roi);

        cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                                  10, 0.01);

        //use mean shift to find the roi
        cv::Rect mean_shift_window = roi_initial_location;
        cv::meanShift(probability_map, mean_shift_window, criteria);

        //use cam shift to find the roi
        cv::Rect cam_shift_window = roi_initial_location;
        cv::CamShift(probability_map, cam_shift_window, criteria);

        //draw the region of interest on input
        cv::rectangle(input, roi_initial_location, cv::Scalar(0, 0, 255));

        //draw the initial window location(red) and the locations
        //obtained by meanshift(green) and camshift(blue)
        cv::rectangle(target, roi_initial_location, cv::Scalar(0, 0, 255));
        cv::rectangle(target, mean_shift_window, cv::Scalar(0, 255, 0));
        cv::rectangle(target, cam_shift_window, cv::Scalar(255, 0, 0));

        cv::imshow("roi", input);
        cv::imshow("target", target);
        cv::imwrite("roi.png", input);
        cv::imwrite("target.png", target);
        cv::waitKey();
    }else{
        std::cerr<<"input or target is empty\n";
    }
}

The result is shown in graph_03.
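One detail the listing glosses over: cv::CamShift returns a cv::RotatedRect carrying the adapted size and orientation of the found region, while the cv::Rect passed in is only updated to the new search window. A small variation (a sketch reusing the variables from the listing above) that draws the rotated result instead of the plain rectangle:

        //CamShift also reports the adapted size and orientation
        cv::RotatedRect const rotated =
            cv::CamShift(probability_map, cam_shift_window, criteria);
        cv::ellipse(target, rotated, cv::Scalar(255, 0, 0), 2);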
graph_03
The implementation of OCV::histProject().get_projection_map_hue is pretty
straightforward. First we convert the input and the roi to the HSV color space, then
filter out the low saturation pixels from the roi histogram and the probability map
(if a minimum saturation is given), since pixels with low saturation have almost equal
r, g, b components, which makes it difficult to determine their exact color. As you can
guess, the quality of the result depends heavily on the probability map and the initial
position.
void histProject::get_projection_map_hue(cv::Mat const &input,
                                         cv::Mat const &model,
                                         cv::Mat &output,
                                         int min_saturation)
{
    convert_to_hsv(input, input_hsv_);
    convert_to_hsv(model, model_hsv_);

    if(min_saturation > 0){
        //mask out the low saturation pixels of the model before
        //building its hue histogram
        model_saturation_mask_.create(model_hsv_.size(), model_hsv_.depth());
        mix_channels(model_hsv_, model_saturation_mask_, {1, 0});
        cv::threshold(model_saturation_mask_, model_saturation_mask_,
                      min_saturation, 255, cv::THRESH_BINARY);
        calc_histogram<1>({model_hsv_}, model_hist_, {0}, {180},
                          {{ {0, 180} }}, model_saturation_mask_);

        //mask out the low saturation pixels of the probability map as well
        map_saturation_mask_.create(input_hsv_.size(), input_hsv_.depth());
        mix_channels(input_hsv_, map_saturation_mask_, {1, 0});
        cv::threshold(map_saturation_mask_, map_saturation_mask_,
                      min_saturation, 255, cv::THRESH_BINARY);

        OCV::calc_back_project<1>({input_hsv_}, {0}, model_hist_,
                                  output, {{ {0, 180} }});
        output &= map_saturation_mask_;
    }else{
        calc_histogram<1>({model_hsv_}, model_hist_, {0}, {180},
                          {{ {0, 180} }});
        OCV::calc_back_project<1>({input_hsv_}, {0}, model_hist_,
                                  output, {{ {0, 180} }});
    }
}
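The wrappers above (convert_to_hsv, mix_channels, calc_histogram, calc_back_project) come from my imageAlgorithm.hpp. For readers who don't have it, the hue-only case (without the saturation mask) roughly corresponds to the plain OpenCV calls below; this is only a sketch, the function name raw_projection_map_hue is made up, and it assumes the same includes as the first listing.

cv::Mat raw_projection_map_hue(cv::Mat const &input, cv::Mat const &model)
{
    cv::Mat input_hsv, model_hsv;
    cv::cvtColor(input, input_hsv, CV_BGR2HSV);
    cv::cvtColor(model, model_hsv, CV_BGR2HSV);

    int const channels[] = {0};        //hue channel only
    int const hist_size[] = {180};
    float const hue_range[] = {0, 180};
    float const *ranges[] = {hue_range};

    //histogram of the model(roi)
    cv::Mat model_hist;
    cv::calcHist(&model_hsv, 1, channels, cv::Mat(),
                 model_hist, 1, hist_size, ranges);
    cv::normalize(model_hist, model_hist, 0, 255, cv::NORM_MINMAX);

    //back project the model histogram onto the input
    cv::Mat output;
    cv::calcBackProject(&input_hsv, 1, channels, model_hist,
                        output, ranges);
    return output;
}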
If you want to track an object from a camera or in a video, it is pretty
simple: just read each "frame" from the camera or video, then apply the
mean-shift or CamShift algorithm to those frames (CamShift usually gives a better
result; do remember to feed the window updated by CamShift back in as the search
window of the next frame), as in the sketch below.
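A rough sketch of such a loop, assuming the roi, roi_initial_location and OCV::histProject() from the listings above ("monkey.avi" is just a placeholder file name):

cv::VideoCapture capture("monkey.avi"); //placeholder file name
cv::Rect window = roi_initial_location;
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                          10, 0.01);
cv::Mat frame;
while(capture.read(frame)){
    //probability map of the current frame against the roi histogram
    cv::Mat prob = OCV::histProject().get_projection_map_hue(frame, roi);

    //the window updated by CamShift becomes the search window of the next frame
    cv::RotatedRect const found = cv::CamShift(prob, window, criteria);
    cv::ellipse(frame, found, cv::Scalar(255, 0, 0), 2);

    cv::imshow("tracking", frame);
    if(cv::waitKey(30) >= 0){
        break;
    }
}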
In the next tutorial on mean-shift I will try to track an object in a video with CamShift and explain
how to implement mean-shift.
As usual, the code can be downloaded from github.