
Create rtsp stream based on opencv images in python

Ask Time: 2018-06-27T17:17:07    Author: Max la Cour Christensen


My goal is to read frames from an RTSP server, do some OpenCV manipulation, and write the manipulated frames to a new RTSP server.

I tried the following, based on "Write in Gstreamer pipeline from opencv in python", but I was unable to figure out what the appropriate gst-launch-1.0 arguments should be to create the RTSP server. Can anyone assist with the proper arguments to gst-launch-1.0? The ones I tried got stuck at "Pipeline is PREROLLING".
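
As far as I know, gst-launch-1.0 alone cannot host an RTSP server (there is no RTSP server sink element); the usual route is the test-launch example that ships with gst-rtsp-server. A minimal serving sketch, assuming that example binary is built:

./test-launch "videotestsrc ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96"

By default this serves rtsp://127.0.0.1:8554/test.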

import cv2

cap = cv2.VideoCapture("rtsp://....")

framerate = 25.0

out = cv2.VideoWriter('appsrc ! videoconvert ! '
                      'x264enc noise-reduction=10000 speed-preset=ultrafast tune=zerolatency ! '
                      'rtph264pay config-interval=1 pt=96 ! '
                      'tcpserversink host=192.168.1.27 port=5000 sync=false',
                      0, framerate, (640, 480))


counter = 0
while cap.isOpened():
  ret, frame = cap.read()
  if ret:
    out.write(frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
      break
  else:
    break

cap.release()
out.release()
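
One way to sanity-check the writer pipeline is to swap tcpserversink for a plain udpsink and watch the stream with a matching receiver; a minimal sketch (the loopback host, port, and trimmed encoder options are placeholders, not from the original):

out = cv2.VideoWriter('appsrc ! videoconvert ! '
                      'x264enc speed-preset=ultrafast tune=zerolatency ! '
                      'rtph264pay config-interval=1 pt=96 ! '
                      'udpsink host=127.0.0.1 port=5000',
                      0, framerate, (640, 480))

# Matching receiver on the same host:
# gst-launch-1.0 udpsrc port=5000 \
#   caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" ! \
#   rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false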

I also tried another solution, based on "Write opencv frames into gstreamer rtsp server pipeline":

import cv2
import gi 

gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0') 
from gi.repository import Gst, GstRtspServer, GObject

class SensorFactory(GstRtspServer.RTSPMediaFactory):
  def __init__(self, **properties): 
    super(SensorFactory, self).__init__(**properties) 
    #self.cap = cv2.VideoCapture(0)
    self.cap = cv2.VideoCapture("rtsp://....")
    self.number_frames = 0 
    self.fps = 30
    self.duration = 1 / self.fps * Gst.SECOND  # duration of a frame in nanoseconds 
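    # Caution: on Python 2 the line above does integer division (1 / 30 == 0),
    # so self.duration and every buffer timestamp end up as 0;
    # 1.0 / self.fps * Gst.SECOND gives real per-frame durations.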
    self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' \
                         'caps=video/x-raw,format=BGR,width=640,height=480,framerate={}/1 ' \
                         '! videoconvert ! video/x-raw,format=I420 ' \
                         '! x264enc speed-preset=ultrafast tune=zerolatency ' \
                         '! rtph264pay config-interval=1 name=pay0 pt=96'.format(self.fps)
  
  def on_need_data(self, src, length):
    if self.cap.isOpened():
      ret, frame = self.cap.read()
      if ret:
        data = frame.tostring() 
        buf = Gst.Buffer.new_allocate(None, len(data), None)
        buf.fill(0, data)
        buf.duration = self.duration
        timestamp = self.number_frames * self.duration
        buf.pts = buf.dts = int(timestamp)
        buf.offset = timestamp
        self.number_frames += 1
        retval = src.emit('push-buffer', buf) 
        
        print('pushed buffer, frame {}, duration {} ns, duration {} s'.format(self.number_frames, self.duration, self.duration / Gst.SECOND)) 

        if retval != Gst.FlowReturn.OK: 
          print(retval) 

  def do_create_element(self, url): 
    return Gst.parse_launch(self.launch_string) 

  def do_configure(self, rtsp_media): 
    self.number_frames = 0 
    appsrc = rtsp_media.get_element().get_child_by_name('source') 
    appsrc.connect('need-data', self.on_need_data) 


class GstServer(GstRtspServer.RTSPServer): 
  def __init__(self, **properties): 
    super(GstServer, self).__init__(**properties) 
    self.factory = SensorFactory() 
    self.factory.set_shared(True) 
    self.get_mount_points().add_factory("/test", self.factory) 
    self.attach(None) 


GObject.threads_init() 
Gst.init(None) 

server = GstServer() 

loop = GObject.MainLoop() 
loop.run()
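
With the server running, the stream can be checked from the same machine; a usage sketch (assuming GstRtspServer's default port 8554 and the /test mount point set above):

vlc rtsp://127.0.0.1:8554/test

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test latency=100 ! \
  rtph264depay ! avdec_h264 ! videoconvert ! autovideosink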

This solution generates the RTSP server and streams to it. I can open the resulting RTSP stream in VLC, but it keeps displaying the first frame and does not update with new frames. Does anyone know why?

I'm looking for any solution that will enable me, with low latency, to read frames from an RTSP server into an OpenCV format, manipulate the frames, and output them to a new RTSP server (which I also need to create). The solution does not need to be based on GStreamer if something better exists.

I am using Ubuntu 16.04 with Python 2.7 and OpenCV 3.4.1.

Author: Max la Cour Christensen. Reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/51058911/create-rtsp-stream-based-on-opencv-images-in-python
Innuendo:

I did a similar thing once, reading frames from an RTSP server and processing them within OpenCV. For some reason I could not use cv2's VideoCapture; it did not work. So my solution was to use ffmpeg to convert the RTSP input into a stream of bitmaps; for my problem it was OK to read a grayscale image with 1 byte per pixel.

The basic implementation idea was:

- running an ffmpeg process, which is my start_reading() method;
- having a thread which reads bytes from ffmpeg's stdout frame by frame through a pipe;
- having a property of the class which returns the last frame from ffmpeg. Note that this is asynchronous reading, as you can see from the code, but it worked fine for me.

Here's my code (it's Python 3 but should be easily convertible to 2.7):

import subprocess
import shlex
import time
from threading import Thread
import os
import numpy as np
import logging


class FFMPEGVideoReader(object):
    MAX_FRAME_WAIT = 5  # seconds to wait for a fresh frame before restarting ffmpeg

    def __init__(self, rtsp_url: str, width: int = 320, height: int = 180) -> None:
        super().__init__()
        self.rtsp_url = rtsp_url
        self.width = width
        self.height = height
        self.process = None
        self._stdout_reader = Thread(target=self._receive_output, name='stdout_reader', daemon=True)
        self._stdout_reader.start()
        self.frame_number = -1
        self._last_frame_read = -1

    def start_reading(self):
        if self.process is not None:
            self.process.kill()
            self.process = None
        # Customize your input/output params here
        command = 'ffmpeg -i {rtsp} -f rawvideo -r 4 -pix_fmt gray -vf scale={width}:{height} -'.format(
            rtsp=self.rtsp_url, width=self.width, height=self.height)
        logging.debug('Opening ffmpeg process with command "%s"' % command)
        args = shlex.split(command)
        FNULL = open(os.devnull, 'w')
        self.process = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=FNULL)

    def _receive_output(self):
        chunksize = self.width * self.height

        while True:
            while self.process is None:
                time.sleep(1)
            self._last_chunk = self.process.stdout.read(chunksize)
            self.frame_number += 1

    @property
    def frame(self):
        started = time.time()
        while self._last_frame_read == self.frame_number:
            time.sleep(0.125)  # Put your FPS threshold here
            if time.time() - started > self.MAX_FRAME_WAIT:
                logging.warning('Reloading ffmpeg process...')
                self.start_reading()
                started = time.time()
        self._last_frame_read = self.frame_number

        dt = np.dtype('uint8')
        vec = np.frombuffer(self._last_chunk, dtype=dt)
        return np.reshape(vec, (self.height, self.width))


if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    vr = FFMPEGVideoReader('rtsp://192.168.1.10:554/onvif2', width=320, height=180)
    vr.start_reading()

    while True:
        print('update')
        fr = vr.frame
        np.save('frame.npy', fr)

If you need color images, you need to change the pix_fmt in the ffmpeg command, read (width * height * channels) bytes, and then reshape them correctly with one more axis.
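
As a sketch of that color variant (assumptions on my part: pix_fmt bgr24 with 3 channels; the names mirror the class above):

# In start_reading(): ask ffmpeg for packed BGR instead of gray
command = ('ffmpeg -i {rtsp} -f rawvideo -r 4 -pix_fmt bgr24 '
           '-vf scale={width}:{height} -').format(
    rtsp=self.rtsp_url, width=self.width, height=self.height)

# In _receive_output(): three bytes per pixel now
chunksize = self.width * self.height * 3

# In frame: one more axis, already in the BGR order OpenCV expects
vec = np.frombuffer(self._last_chunk, dtype=np.uint8)
return np.reshape(vec, (self.height, self.width, 3))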
2022-01-24T18:02:44
SeB:

Another option would be to have an OpenCV VideoWriter encode H264 frames and send them to shmsink:

h264_shmsink = cv2.VideoWriter("appsrc is-live=true ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! "
                               "nvv4l2h264enc insert-sps-pps=1 ! video/x-h264, stream-format=byte-stream ! h264parse ! shmsink socket-path=/tmp/my_h264_sock ",
                               cv2.CAP_GSTREAMER, 0, float(fps), (int(width), int(height)))

where width and height are the sizes of the pushed frames, and then use shmsrc doing the timestamping as the source for a test-launch RTSP server, such as:

./test-launch "shmsrc socket-path=/tmp/my_h264_sock do-timestamp=1 ! video/x-h264, stream-format=byte-stream, width=640, height=480, framerate=30/1 ! h264parse ! video/x-h264, stream-format=byte-stream ! rtph264pay pt=96 name=pay0 "

This may have some system overhead, but it may work for low bitrates, or it may require some optimization for higher bitrates.
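
Note that nvvidconv and nvv4l2h264enc are NVIDIA (Jetson) elements; on a machine without them, a software-encoder variant of the same idea might look like this (a sketch on my part, not from the answer):

h264_shmsink = cv2.VideoWriter(
    "appsrc is-live=true ! queue ! videoconvert ! video/x-raw, format=I420 ! "
    "x264enc speed-preset=ultrafast tune=zerolatency ! "
    "video/x-h264, stream-format=byte-stream ! h264parse ! "
    "shmsink socket-path=/tmp/my_h264_sock",
    cv2.CAP_GSTREAMER, 0, float(fps), (int(width), int(height)))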
2022-01-30T21:07:52