I'm using OpenCV in C++ to process a cv::Mat before publishing it to a ROS topic. For some reason cv::drawKeypoints messes up my result by stretching it horizontally beyond the image frame:

[screenshot of the two published topics]

The blob in the right topic corresponds to the one in the top left of the left topic. Here's my code:
image_transport::Publisher pubthresh;
image_transport::Publisher pubkps;
cv::SimpleBlobDetector detector;

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // Convert the incoming ROS image to an OpenCV Mat, then binarize it.
  cv::Mat mat = cv_bridge::toCvShare(msg, "bgr8")->image;
  cv::cvtColor(mat, mat, CV_BGR2GRAY);
  cv::threshold(mat, mat, 35, 255, 0);

  // Detect blobs and draw them into a separate image.
  std::vector<cv::KeyPoint> keypoints;
  detector.detect(mat, keypoints);
  cv::Mat kps;
  cv::drawKeypoints(mat, keypoints, kps, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);

  // Publish both the thresholded image and the keypoint image.
  sensor_msgs::ImageConstPtr ithresh, ikps;
  ithresh = cv_bridge::CvImage(std_msgs::Header(), "mono8", mat).toImageMsg();
  ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();
  pubthresh.publish(ithresh);
  pubkps.publish(ikps);
}
int main(int argc, char **argv)
{
  ...
  image_transport::Subscriber sub = it.subscribe("/saliency_map", 1, imageCallback);
  ...
}
After the cv::drawKeypoints operation, both cv::Mat objects are treated the same way. According to the documentation, the image shouldn't get resized either. What am I missing here?
Looks like your result image isn't grayscale but a 3-channel color image. "Stretching" here means that each pixel implicitly becomes 3x as wide: the image has 3 channels, and the three channel bytes of each pixel are being interpreted as three separate grayscale values.
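You can confirm this by checking the channel count of kps; cv::drawKeypoints converts a single-channel input to BGR before drawing, so its output has three channels. A minimal check you could drop in right after the drawKeypoints call:

// Reports 3 channels, not 1: drawKeypoints produced a BGR image.
ROS_INFO("kps has %d channels", kps.channels());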
So try converting kps to grayscale before publishing it:

cv::cvtColor(kps, kps, CV_BGR2GRAY);
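In your callback that means converting right before building the message; a sketch, assuming the rest of the callback stays as posted:

// Collapse the 3-channel drawKeypoints result to a single channel so
// it actually matches the "mono8" encoding string.
cv::cvtColor(kps, kps, CV_BGR2GRAY);
ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();
pubkps.publish(ikps);

Note that this throws away the color of the drawn keypoints; the circles will come out gray instead of red.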
Or adjust the line

ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();

to publish a BGR color image instead of "mono8". I don't have much experience with that image transport side myself, though.
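For completeness, a sketch of that alternative: "bgr8" is a standard cv_bridge encoding string, so the 3-channel Mat can be passed through unchanged:

// Publish the keypoint image as-is; "bgr8" tells subscribers that each
// pixel is three bytes (B, G, R), so nothing gets stretched.
ikps = cv_bridge::CvImage(std_msgs::Header(), "bgr8", kps).toImageMsg();
pubkps.publish(ikps);

This variant also keeps the keypoint circles red.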