
16UC1 Format Support? #99

Open
5730289021-NN opened this issue May 14, 2020 · 9 comments
@5730289021-NN

Greetings everyone,

When I run an Intel RealSense D435i on Ubuntu 18.04, I get the following messages when I try to access the depth sensor data, i.e. /camera/depth/image_rect_raw:

[ INFO] [1589436848.180278750]: Handling Request: /stream_viewer?topic=/camera/depth/image_rect_raw
[ INFO] [1589436848.199399190]: Handling Request: /stream?topic=/camera/depth/image_rect_raw
[ERROR] [1589436848.323809148]: cv_bridge exception: [16UC1] is not a color format. but [bgr8] is. The conversion does not make sense
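
The exception comes from cv_bridge refusing to convert a single-channel depth encoding (16UC1, which carries no colour information) into bgr8. A minimal sketch that reproduces the same exception with cv_bridge alone (assuming a sourced ROS 1 environment; the image size is arbitrary):

import numpy as np
from cv_bridge import CvBridge, CvBridgeError

bridge = CvBridge()

# Fake 16-bit depth frame (uint16, single channel), like the D435i publishes
depth = np.zeros((480, 640), dtype=np.uint16)
msg = bridge.cv2_to_imgmsg(depth, encoding="16UC1")

try:
    # web_video_server asks for bgr8, which cv_bridge rejects for 16UC1
    bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
except CvBridgeError as e:
    print(e)  # "[16UC1] is not a color format. but [bgr8] is. ..."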

I'm not sure whether 16UC1 support is implemented already.

Has anyone else run into this problem?
Thanks

jveitchmichaelis commented Jun 25, 2020

Have you tried mono16? That ought to work. See the list of names here:

http://wiki.ros.org/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages#cv_bridge.2FTutorials.2FUsingCvBridgeCppHydro.Converting_ROS_image_messages_to_OpenCV_images

The video server won't do min/max scaling of the intensities in the video though, so you may end up with a very dark stream.
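
For a sense of scale: the RealSense depth stream is in millimetres, so a point 3 m away is stored as 3000 out of a 16-bit full scale of 65535, i.e. under 5% of the range, which renders nearly black unless you rescale it yourself.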

@liamabcxyz

Have you solved this yet? I ran into the same problem. @5730289021-NN

@jveitchmichaelis

@liamabcxyz did you try mono16 as well? ROS can definitely handle 16-bit images. Or you can do your own scaling and then feed the video server an 8-bit image. If you need to actually use the data though, you should listen on the topic.
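
For instance, a bare-bones listener that reads the raw depth values directly (untested sketch; topic name taken from the RealSense wrapper defaults):

import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_depth(msg):
    # passthrough keeps the native 16-bit data; RealSense reports depth in mm
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    h, w = depth.shape
    rospy.loginfo("centre pixel depth: %d mm", depth[h // 2, w // 2])

rospy.init_node("depth_listener")
rospy.Subscriber("/camera/depth/image_rect_raw", Image, on_depth)
rospy.spin()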

@liamabcxyz

Hi @jveitchmichaelis, thanks for your reply! My goal is to display the depth image ("/camera/depth/image_rect_raw") on a web page, but I don't know how to change 16UC1 to mono16 directly. So my current plan is to send the sensor_msgs/Image message to the web page and use JavaScript to display the image. Thanks! I'm looking forward to your reply!

jveitchmichaelis commented Feb 18, 2021

@liamabcxyz This is set in cv_bridge, so all you need to do is pass in your 16-bit image (i.e. a cv::Mat) and declare it as mono16. It depends on whether you're using Python or C++, but essentially you set the encoding flag when you convert your OpenCV Mat. You need to do this at the level of your camera publisher.

Possibly modify it here:

img = cv_bridge::toCvCopy(msg, "bgr8")->image;

e.g. add an additional condition to check for 16-bit images, or just rescale your 16-bit depth image to an 8-bit range.

else if (msg->encoding.find("16") != std::string::npos) {
  img = cv_bridge::toCvCopy(msg, "mono16")->image;
} else {
  // Convert to OpenCV native BGR color
  img = cv_bridge::toCvCopy(msg, "bgr8")->image;
}

see http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython
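
On the Python side, the equivalent at the publisher is simply choosing the encoding when you convert the array (rough sketch; the node and topic names here are made up):

import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("depth_publisher_example")
pub = rospy.Publisher("depth/mono16", Image, queue_size=1)
bridge = CvBridge()

depth = np.zeros((480, 640), dtype=np.uint16)  # your 16-bit depth frame

# Declaring the encoding as mono16 here is what lets downstream consumers
# (cv_bridge, web_video_server) treat it as 16-bit grayscale.
# In a real node you would publish from a camera callback or a loop.
pub.publish(bridge.cv2_to_imgmsg(depth, encoding="mono16"))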

liamabcxyz commented Feb 18, 2021

Hi @jveitchmichaelis, thanks again! With your help, I wrote a node in Python to convert 16UC1 to bgr8. It works!

import sys

import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError


class image_converter:

  def __init__(self):
    self.image_pub = rospy.Publisher("image_topic_3", Image, queue_size=1)
    self.bridge = CvBridge()
    self.image_sub = rospy.Subscriber("camera/depth/image_rect_raw", Image, self.callback)

  def callback(self, data):
    try:
      # The RealSense wrapper publishes 16UC1; relabel it as mono16 so cv_bridge accepts it
      data.encoding = 'mono16'
      cv_image = self.bridge.imgmsg_to_cv2(data, "mono16")
    except CvBridgeError as e:
      print(e)
      return

    # Min/max-normalize the 16-bit depth to 8 bits, then expand to 3 channels
    img_n = cv2.normalize(src=cv_image, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
    im_color = cv2.cvtColor(img_n, cv2.COLOR_GRAY2BGR)

    try:
      self.image_pub.publish(self.bridge.cv2_to_imgmsg(im_color, "bgr8"))
    except CvBridgeError as e:
      print(e)


def main(args):
  rospy.init_node('image_converter', anonymous=True)
  ic = image_converter()
  try:
    rospy.spin()
  except KeyboardInterrupt:
    print("Shutting down")


if __name__ == '__main__':
  main(sys.argv)

tlwong3 added a commit to dfautomation/web_video_server that referenced this issue Apr 28, 2021
@RalphCodesTheInternet

How was the latency with this approach?

Have you by any chance figured out how to decode 16UC1 messages sent from rosbridge?

zaksel commented Feb 16, 2022

Hi, I also use a RealSense camera with the provided ROS wrapper and stumbled on this error message as well.

I got the depth stream working by adjusting image_streamer.cpp as mentioned by @jveitchmichaelis.
In the RealSense ROS wrapper I changed the encoding to mono16 by editing the following lines in base_realsense_node.cpp:

Line 96, change
_encoding[RS2_STREAM_DEPTH] = sensor_msgs::image_encodings::TYPE_16UC1; // ROS message type
to
_encoding[RS2_STREAM_DEPTH] = sensor_msgs::image_encodings::MONO16; // ROS message type

and also Line 119, change
_depth_aligned_encoding[RS2_STREAM_COLOR] = sensor_msgs::image_encodings::TYPE_16UC1;
to
_depth_aligned_encoding[RS2_STREAM_COLOR] = sensor_msgs::image_encodings::MONO16;

Maybe this can be helpful for others :)

@StevenqrYang

I encountered an error like this when using your decoding function:

[ERROR] [1687136715.285666]: bad callback: <bound method detect.read_depimg of <__main__.detect object at 0x7f0c7142e8>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "rstest_ros.py", line 281, in read_depimg
    self.depth_image = bridge.imgmsg_to_cv2(data, "mono16")
  File "/home/jetson/catkin_cvbridge_ws/src/vision_opencv/cv_bridge/python/cv_bridge/core.py", line 168, in imgmsg_to_cv2
    dtype=dtype, buffer=img_msg.data)
TypeError: buffer is too small for requested array

It looks like the data field of this message is stored as a list.
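
From a quick look, that TypeError comes from numpy: the byte buffer in msg.data is shorter than what height x width x 2 bytes (a mono16 view) requires, which suggests the message being overridden isn't really the raw 16-bit depth image. A sanity check I'd run (sketch; check_depth_msg is just a throwaway helper):

def check_depth_msg(msg):
    # mono16 / 16UC1 needs 2 bytes per pixel
    expected = msg.height * msg.width * 2
    actual = len(msg.data)
    print("encoding=%s  expected>=%d bytes  got %d (step=%d)"
          % (msg.encoding, expected, actual, msg.step))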

tlwong3 added a commit to dfautomation/web_video_server that referenced this issue Sep 11, 2024