
Jetson TX2 / Xavier: Reading and Displaying Camera Video with GStreamer + OpenCV


Reposted from: https://blog.csdn.net/zong596568821xp/article/details/80306987

Reference: https://jkjung-avt.github.io/tx2-camera-with-python/

Reference: http://blog.iotwrt.com/media/2017/08/23/opencv-gstreamer/

Hardware decoding uses a dedicated hardware block to decode video; the TX2 has a separate decode engine, NVDEC. Software decoding is done entirely in software and is comparatively CPU-intensive. As of this writing, NVIDIA officially provides NVENC for hardware encoding, and FFmpeg already includes an NVENC-based encoder library. For hardware decoding, NVIDIA provides a CUDA-based decoding method, but FFmpeg does not yet ship a corresponding decoder library.
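As a quick sanity check (assuming FFmpeg is installed on your system), you can list which NVENC encoders, if any, your local FFmpeg build was compiled with:

# List hardware encoders compiled into the local FFmpeg build (if any)
ffmpeg -hide_banner -encoders | grep -i nvenc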

To check CPU, GPU, and codec engine utilization:

sudo jtop
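jtop is provided by the jetson-stats package. If the command is not found, it can usually be installed with pip (assuming pip3 is available on the device):

sudo -H pip3 install jetson-stats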

First, why do we need GStreamer with OpenCV at all? Couldn't we simply feed camera frames to OpenCV directly through V4L2?

GStreamer is the preferred framework for media handling on embedded platforms; vendors such as NVIDIA, TI, NXP, and Rockchip all integrate their media applications through GStreamer. On the Rockchip platform, for example, GStreamer plugins already exist for decode/encode, the ISP camera, the 2D accelerator, and a DRM display sink. So once OpenCV is linked against GStreamer, the input source is no longer limited to a camera: it can also be an RTSP stream or a local video file. Display code does not have to be written by hand, because GStreamer can render the output; format conversion can be handed to GStreamer and hardware-accelerated; and processed frames can be sent back into GStreamer for encoding.
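As a minimal sketch of the "input source beyond a camera" point, the following reads a local video file through a GStreamer pipeline instead of a camera (this assumes OpenCV was built with GStreamer support; 'sample.mp4' is a placeholder path):

import cv2

# Read a local video file through a GStreamer pipeline. decodebin picks a
# suitable decoder automatically; videoconvert hands BGR frames to appsink.
gst_str = ('filesrc location=sample.mp4 ! decodebin ! '
           'videoconvert ! video/x-raw, format=(string)BGR ! appsink')
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break                 # end of file or pipeline error
    cv2.imshow('file-demo', frame)
    if cv2.waitKey(1) == 27:  # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()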

This article is based on JK Jung's post, in which the author shows how to capture and display camera video on the Jetson TX2 using Python and OpenCV, covering IP cameras, USB webcams, and the Jetson onboard camera. The same simple code also runs on the Jetson TX1.

Prerequisites

  1. OpenCV with GStreamer support and Python bindings must be installed on the Jetson TX2 (see the opencv-3.4.0 build method).
  2. If you use an IP camera, it must already be set up and you need to know its RTSP URI, e.g. rtsp://admin:XXXXX@192.168.1.64:554.
  3. If you use a USB webcam (I used a Logitech C920), it usually shows up as /dev/video1, because the Jetson onboard camera already occupies /dev/video0.
  4. Install gstreamer1.0-plugins-bad, which contains the h264parse element needed to decode the H.264 RTSP stream from an IP camera (sudo apt-get install gstreamer1.0-plugins-bad); a quick way to verify the element is available is shown right after this list.
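If the plugin is installed correctly, gst-inspect-1.0 should be able to find the element (this is only a sanity check; which plugin package provides h264parse can vary by GStreamer version):

gst-inspect-1.0 h264parse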

Usage

Installing OpenCV

Build OpenCV manually by following the buildOpenCVTX2 shell script.

Note that -DWITH_GSTREAMER=ON must be enabled, and if you want to use Python 3, change -DBUILD_opencv_python3=OFF to ON.
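A minimal sketch of the relevant CMake flags is shown below; the install prefix and the remaining options are assumptions to be taken from whatever the buildOpenCVTX2 script sets, and only the two flags discussed above are the point here:

cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_GSTREAMER=ON \
      -D WITH_CUDA=ON \
      -D BUILD_opencv_python2=ON \
      -D BUILD_opencv_python3=ON \
      ..

After installing the build, python3 -c "import cv2; print(cv2.getBuildInformation())" should report GStreamer as YES under the Video I/O section.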

Downloading the Code

Download the tegra-cam.py source code from GitHub; the code is listed below.

import sys
import argparse
import cv2

WINDOW_NAME = 'CameraDemo'


def parse_args():
    # Parse input arguments
    desc = 'Capture and display live camera video on Jetson TX2/TX1'
    parser = argparse.ArgumentParser(description=desc)
    parser.add_argument('--rtsp', dest='use_rtsp',
                        help='use IP CAM (remember to also set --uri)',
                        action='store_true')
    parser.add_argument('--uri', dest='rtsp_uri',
                        help='RTSP URI, e.g. rtsp://192.168.1.64:554',
                        default=None, type=str)
    parser.add_argument('--latency', dest='rtsp_latency',
                        help='latency in ms for RTSP [200]',
                        default=200, type=int)
    parser.add_argument('--usb', dest='use_usb',
                        help='use USB webcam (remember to also set --vid)',
                        action='store_true')
    parser.add_argument('--vid', dest='video_dev',
                        help='device # of USB webcam (/dev/video?) [1]',
                        default=1, type=int)
    parser.add_argument('--width', dest='image_width',
                        help='image width [1920]',
                        default=1920, type=int)
    parser.add_argument('--height', dest='image_height',
                        help='image height [1080]',
                        default=1080, type=int)
    args = parser.parse_args()
    return args


def open_cam_rtsp(uri, width, height, latency):
    gst_str = ('rtspsrc location={} latency={} ! '
               'rtph264depay ! h264parse ! omxh264dec ! '
               'nvvidconv ! '
               'video/x-raw, width=(int){}, height=(int){}, '
               'format=(string)BGRx ! '
               'videoconvert ! appsink').format(uri, latency, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)


def open_cam_usb(dev, width, height):
    # We want to set width and height here, otherwise we could just do:
    #     return cv2.VideoCapture(dev)
    gst_str = ('v4l2src device=/dev/video{} ! '
               'video/x-raw, width=(int){}, height=(int){}, '
               'format=(string)RGB ! '
               'videoconvert ! appsink').format(dev, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)


def open_cam_onboard(width, height):
    # On versions of L4T prior to 28.1, add 'flip-method=2' into gst_str
    gst_str = ('nvcamerasrc ! '
               'video/x-raw(memory:NVMM), '
               'width=(int)2592, height=(int)1458, '
               'format=(string)I420, framerate=(fraction)30/1 ! '
               'nvvidconv ! '
               'video/x-raw, width=(int){}, height=(int){}, '
               'format=(string)BGRx ! '
               'videoconvert ! appsink').format(width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)


def open_window(width, height):
    cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
    cv2.resizeWindow(WINDOW_NAME, width, height)
    cv2.moveWindow(WINDOW_NAME, 0, 0)
    cv2.setWindowTitle(WINDOW_NAME, 'Camera Demo for Jetson TX2/TX1')


def read_cam(cap):
    show_help = True
    full_scrn = False
    help_text = '"Esc" to Quit, "H" for Help, "F" to Toggle Fullscreen'
    font = cv2.FONT_HERSHEY_PLAIN
    while True:
        if cv2.getWindowProperty(WINDOW_NAME, 0) < 0:
            # Check to see if the user has closed the window
            # If yes, terminate the program
            break
        _, img = cap.read()  # grab the next image frame from camera
        if show_help:
            cv2.putText(img, help_text, (11, 20), font,
                        1.0, (32, 32, 32), 4, cv2.LINE_AA)
            cv2.putText(img, help_text, (10, 20), font,
                        1.0, (240, 240, 240), 1, cv2.LINE_AA)
        cv2.imshow(WINDOW_NAME, img)
        key = cv2.waitKey(10)
        if key == 27:  # ESC key: quit program
            break
        elif key == ord('H') or key == ord('h'):  # toggle help message
            show_help = not show_help
        elif key == ord('F') or key == ord('f'):  # toggle fullscreen
            full_scrn = not full_scrn
            if full_scrn:
                cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN,
                                      cv2.WINDOW_FULLSCREEN)
            else:
                cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN,
                                      cv2.WINDOW_NORMAL)


def main():
    args = parse_args()
    print('Called with args:')
    print(args)
    print('OpenCV version: {}'.format(cv2.__version__))
    if args.use_rtsp:
        cap = open_cam_rtsp(args.rtsp_uri,
                            args.image_width,
                            args.image_height,
                            args.rtsp_latency)
    elif args.use_usb:
        cap = open_cam_usb(args.video_dev,
                           args.image_width,
                           args.image_height)
    else:  # by default, use the Jetson onboard camera
        cap = open_cam_onboard(args.image_width,
                               args.image_height)
    if not cap.isOpened():
        sys.exit('Failed to open camera!')
    open_window(args.image_width, args.image_height)
    read_cam(cap)
    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
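Since the title also mentions Xavier: on Jetson Xavier and newer L4T releases, nvcamerasrc and omxh264dec have been replaced by nvarguscamerasrc and nvv4l2decoder, so open_cam_onboard needs a different pipeline there. Below is a hedged sketch only; the capture resolution and frame rate are assumptions that must match a mode your camera module actually supports.

def open_cam_onboard_argus(width, height):
    # Sketch for Jetson Xavier / newer L4T: nvarguscamerasrc replaces nvcamerasrc.
    # The 1920x1080@30 capture mode below is an assumption; adjust it to your sensor.
    gst_str = ('nvarguscamerasrc ! '
               'video/x-raw(memory:NVMM), '
               'width=(int)1920, height=(int)1080, '
               'format=(string)NV12, framerate=(fraction)30/1 ! '
               'nvvidconv ! '
               'video/x-raw, width=(int){}, height=(int){}, '
               'format=(string)BGRx ! '
               'videoconvert ! appsink').format(width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)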

Onboard Camera

Use the following command to capture and display video from the Jetson onboard camera. The default resolution is 1920x1080 @ 30 fps.

python tegra-cam.py
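If the script fails to open the onboard camera, it can help to test the camera with GStreamer alone first, independently of OpenCV (this assumes an L4T release that still ships nvcamerasrc; on newer releases and on Xavier, substitute nvarguscamerasrc as noted above):

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! nvvidconv ! nvoverlaysink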

USB Webcam

Use the following command for a USB webcam, with the resolution set to 1280x720. Note that '--vid 1' means /dev/video1.

python tegra-cam.py --usb --vid 1 --width 1280 --height 720
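If the USB camera does not open, you can first confirm which device node it uses and which capture formats it exposes with v4l2-ctl from the v4l-utils package (install it with apt if missing):

v4l2-ctl --device=/dev/video1 --list-formats-ext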

IP Camera

Use the following command for an IP camera, replacing the final RTSP URI argument with the one for your own camera.

python tegra-cam.py --rtsp --uri "rtsp://192.168.171.199:554/user=admin&password=&channel=1&stream=0.sdp?"

If the command above fails, run the following pipeline first to confirm that the camera itself works, and only then look for other causes.

gst-launch-1.0 rtspsrc location="rtsp://192.168.171.199:554/user=admin&password=&channel=1&stream=0.sdp?" ! rtph264depay ! h264parse ! omxh264dec ! nveglglessink
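If omxh264dec turns out to be the problem (for example, on a desktop PC, or on a newer L4T release where the OMX plugins are deprecated), the same stream can be tested with a software decoder as a fallback; avdec_h264 comes from the gstreamer1.0-libav package, so this is only a debugging sketch, not the recommended Jetson pipeline:

gst-launch-1.0 rtspsrc location="rtsp://192.168.171.199:554/user=admin&password=&channel=1&stream=0.sdp?" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink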

Discussion

The key to the tegra-cam.py script is the GStreamer pipelines passed to cv2.VideoCapture(). In my experience, using nvvidconv to do the image scaling and the conversion to BGRx, and leaving only the final BGRx-to-BGR step to videoconvert (note: OpenCV requires BGR as the final output), gives better frame rates.

def open_cam_rtsp(uri, width, height, latency):
    gst_str = ("rtspsrc location={} latency={} ! rtph264depay ! h264parse ! omxh264dec ! "
               "nvvidconv ! video/x-raw, width=(int){}, height=(int){}, format=(string)BGRx ! "
               "videoconvert ! appsink").format(uri, latency, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)


def open_cam_usb(dev, width, height):
    # We want to set width and height here, otherwise we could just do:
    #     return cv2.VideoCapture(dev)
    gst_str = ("v4l2src device=/dev/video{} ! "
               "video/x-raw, width=(int){}, height=(int){}, format=(string)RGB ! "
               "videoconvert ! appsink").format(dev, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)


def open_cam_onboard(width, height):
    # On versions of L4T previous to L4T 28.1, add flip-method=2
    # Use Jetson onboard camera
    gst_str = ("nvcamerasrc ! "
               "video/x-raw(memory:NVMM), width=(int)2592, height=(int)1458, format=(string)I420, framerate=(fraction)30/1 ! "
               "nvvidconv ! video/x-raw, width=(int){}, height=(int){}, format=(string)BGRx ! "
               "videoconvert ! appsink").format(width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
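To compare pipelines quantitatively, a small throughput check like the one below can be used. This is only a sketch and not part of the original script; it simply counts how many frames the capture delivers over a fixed time window.

import time
import cv2

def measure_fps(cap, seconds=5):
    # Count frames delivered by the capture during a fixed time window.
    frames = 0
    start = time.time()
    while time.time() - start < seconds:
        ret, _ = cap.read()
        if not ret:
            break
        frames += 1
    return frames / (time.time() - start)

# Example: measure the onboard-camera pipeline defined above.
# cap = open_cam_onboard(1920, 1080)
# print('Measured FPS: {:.1f}'.format(measure_fps(cap)))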