google chrome – FFMPEG stream recording results in video that browser cannot play

I am recording an rtsp stream with the following command and flag:

ffmpeg -rtsp_transport tcp -nostdin -hide_banner -loglevel error -t 3600 -i rtsp://IP/channel -c copy -f segment -segment_time_delta 0.20 -write_empty_segments 1 -segment_time 1 -reset_timestamps 1 -map 0 ./%d/cam1.mp4

I added the segment_time_delta flag because I noticed the keyframes in the first second were not arriving regularly, and my goal was to synchronize two distinct cameras filming the same scene.

This worked; however, the resulting video cannot be played in the Chrome browser. Similarly, video.js complains about an unsupported format.

This wasn’t the case before the segment_time_delta flag, but that flag is the only thing solving my issue right now.

Am I missing some flag or option to get a properly encoded mp4 that can be played without re-encoding?

Happy to provide any more information!
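Not part of the original question, but a quick first step would be to inspect what codec the segments actually contain, since Chrome generally cannot play HEVC in an MP4 container and some cameras stream HEVC. A diagnostic sketch, assuming a segment path of ./0/cam1.mp4:

```shell
# Show the video codec stored in a segment; "hevc" would explain why
# Chrome and video.js refuse to play a stream-copied (-c copy) file.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,codec_tag_string \
  -of default=noprint_wrappers=1 ./0/cam1.mp4
```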

windows – How to keep the first minute of a batch of videos without reencoding with FFMPEG?
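The question body is missing here, but the usual stream-copy approach would be a sketch along these lines (assuming the videos sit in the current directory; the same ffmpeg flags work from a Windows batch for loop):

```shell
# Keep only the first 60 seconds of each video without re-encoding.
# -c copy cuts on keyframes, so the actual cut point may drift slightly.
mkdir -p trimmed
for f in *.mp4; do
  ffmpeg -i "$f" -t 60 -c copy "trimmed/$f"
done
```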






Convert multiple .mp3 files (or single .m4a) into .m4b with ffmpeg and afconvert on macOS

After a lot of research, I found a solution and wanted to share my findings here.

The general idea of the process is to:

  • Combine the separate mp3s into a single mp3
  • Convert the combined mp3 into an m4a using afconvert (AFAIK Mac-only), applying the iTunes Plus settings (stackexchange topic; apple docs) to limit the file-size increase from converting the mp3 to an m4a (my attempts at using ffmpeg for this step resulted in huge file-size bloat or long processing times)
    • Note that there is still some file-size bloat, because the command I used for the caf > m4a conversion drops the -u pgcm 2 parameter, which resulted in errors (Couldn't set audio converter property ('prop'))
  • Generate an FFMETADATA file (more info) using a python script (inspired by this) to facilitate the conversion of an m4a to m4b and preserve chapters.
  • Combine the m4a and FFMETADATA file into an m4b
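For reference, the FFMETADATA file generated below ends up looking roughly like this (illustrative timestamps and titles; each chapter's END is the next chapter's START minus 1 ms):

```
;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=1264999
title=Chapter 1
[CHAPTER]
TIMEBASE=1/1000
START=1265000
END=2619999
title=Chapter 2
```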

Prereqs:

  • Order your mp3 files in a directory for just that audiobook (e.g. 00 – Chapter 1.mp3, 01 – Chapter 2.mp3, etc.)
  • Python 3 (brew install python if you don't have it)
  • FFMPEG (brew install ffmpeg if you don't have it)
  • AFConvert (pre-installed on macOS)

Steps:

  1. Save the Python script below to a file, then execute it. Supply the audiobook directory that your mp3s are in. If you followed the ordering example above, the “enumeration separator” would be “ – ” (enter it into the prompt without quotes)
import re
import glob
from mutagen.mp3 import MP3  # third-party: pip install mutagen
import os
import datetime

chapterFileName = "chapters.txt"
metadataFileName = "FFMETADATAFILE"

def main():
   global chapterFileName
   global metadataFileName

   print("This script will help generate an FFMETADATA file to facilitate\nconverting an .m4a to a .m4b file")

   # scan given directory for file type
   directory=input('Directory (default pwd): ') or os.getcwd()
   print('   using: "' + directory + '"')
   chapterFileName = directory + "/chapters.txt";
   metadataFileName = directory + "/FFMETADATAFILE"

   skip = input('Skip chapter.txt creation? (default n): ') or 'n'
   if skip == 'y':
      createMetadataFile()
      return

   fileType=input('Input audio file type (default mp3): ') or 'mp3'
   print('   using: "' + fileType + '"')
   numberSeperator=input('Enumeration separator (symbol/phrase between enumeration and title): ') or ''
   print('   using: "' + (numberSeperator or '(blank)') + '"')
   if not directory or not fileType:
      print('Input missing - exiting')
      return

   fileNames = list()
   for file in glob.glob(directory + '/*.' + fileType):
      fileNames.append(file)
   fileNames.sort()

   rawChapters = list()
   currentTimestamp = 0 # in seconds
   for file in fileNames:
      audio = MP3(file)
      
      time = str(datetime.timedelta(seconds=currentTimestamp))

      title = os.path.splitext(file)[0].split("/")[-1]
      if numberSeperator != '':
         title = title.split(numberSeperator)[-1]

      rawChapters.append(time + ' ' + title)
      currentTimestamp = int(currentTimestamp + audio.info.length)

   with open(chapterFileName, "w") as chaptersFile:
      for chapter in rawChapters:
         chaptersFile.write(chapter + "\n")

   input('File created at "' + chapterFileName + '". Review to make sure it looks right\n("<timestamp> <title>"), then hit Enter to continue... ')
   createMetadataFile()

def createMetadataFile():
   global chapterFileName
   global metadataFileName

   # import chapters and create ffmetadatafile
   chapters = list()
   with open(chapterFileName, 'r') as f:
      for line in f:
         x = re.match(r"(\d*):(\d{2}):(\d{2}) (.*)", line)
         hrs = int(x.group(1))
         mins = int(x.group(2))
         secs = int(x.group(3))
         title = x.group(4)

         minutes = (hrs * 60) + mins
         seconds = secs + (minutes * 60)
         timestamp = (seconds * 1000)
         chap = {
            "title": title,
            "startTime": timestamp
         }
         chapters.append(chap)

   text = ";FFMETADATA1\n"
   for i in range(len(chapters)-1):
      chap = chapters[i]
      title = chap['title']
      start = chap['startTime']
      end = chapters[i+1]['startTime']-1
      text += f"[CHAPTER]\nTIMEBASE=1/1000\nSTART={start}\nEND={end}\ntitle={title}\n"

   with open(metadataFileName, "w") as myfile:
       myfile.write(text)
   
   print('Created metadata file at "' + metadataFileName + '"')
   removeChapters = input('Remove chapter.txt? (default y): ') or 'y'
   if removeChapters == 'y':
      os.remove(chapterFileName)

main()
  2. Open Terminal in the directory of your mp3s and FFMETADATA file and execute the following command:
ffmpeg -f concat -safe 0 -i <(for f in ./*.mp3; do echo "file '$PWD/$f'"; done) -c copy output.mp3 &&
afconvert output.mp3 intermediate.caf -d 0 -f caff --soundcheck-generate -v &&
afconvert intermediate.caf -d aac -f m4af --soundcheck-read -b 256000 -q 127 -s 2 output.m4a -v &&
ffmpeg -i output.m4a -i FFMETADATAFILE -map_metadata 1 -codec copy output.m4b &&
rm output.mp3 && rm output.m4a && rm FFMETADATAFILE && rm intermediate.caf

This is a joined command (you can split it up at the “&&”s and run the parts separately if you want) that will:
(1) combine the mp3’s into a single mp3 called “output.mp3”;
(2) convert the combined mp3 into an intermediate caff file;
(3) convert the caff file into an m4a;
(4) combine the m4a and FFMETADATAFILE into an m4b;
(5) clean up the files used and generated by this command

If the file bloat from afconvert is too much, you can instead move the combined mp3 (output.mp3) into Music/iTunes and convert it to AAC there (in that case you don’t need steps 2 and 3 here), but it’s a lot slower and may not yield significant improvements.

  3. You now have an m4b file (output.m4b) to use! I like to open the file in the freeware Kid3 tag editor and add the following fields:
    • Title: title of audiobook
    • Author
    • Album: title of audiobook
    • Comment: audiobook description
    • Genre: “Audiobook”
    • Date: Year of recording or book published
    • Cover

From here, you can add the m4b to your audiobooks app of choice or store it in your calibre library.

windows – FFMPEG is throwing Not enough memory resources on Azure VM

I have a 500 MB video file which I am converting to MP4 format.

My laptop has an i7 and 16 GB of memory, and ffmpeg is able to convert the file.
The command used is as follows:

ffmpeg.exe -i "a.mp4" -c:v libx264 -crf 35 -preset ultrafast "b.mp4"

On Azure, I have a D-series machine with 8 cores and 32 GB. Memory usage is only 14%. When I run this command, after a certain time I get the following error.

Is there any other fine tuning required on the Azure VM?

out:


err: ffmpeg version N-102630-g51f1194eda Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10-win32 (GCC) 20210408
  configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl --enable-libvmaf --enable-vulkan --enable-amf --enable-libaom --enable-avisynth --enable-libdav1d --enable-libdavs2 --enable-ffnvcodec --enable-cuda-llvm --enable-libglslang --enable-libgme --enable-libass --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libmfx --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --enable-libvidstab --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp
  libavutil      57.  0.100 / 57.  0.100
  libavcodec     59.  1.100 / 59.  1.100
  libavformat    59.  2.101 / 59.  2.101
  libavdevice    59.  0.100 / 59.  0.100
  libavfilter     8.  0.101 /  8.  0.101
  libswscale      6.  0.100 /  6.  0.100
  libswresample   4.  0.100 /  4.  0.100
  libpostproc    56.  0.100 / 56.  0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'a.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2mp41
    creation_time   : 2013-11-27T01:38:48.000000Z
  Duration: 00:17:38.00, start: 0.040000, bitrate: 1972 kb/s
  Stream #0:0(und): Video: hevc (Main) (hev1 / 0x31766568), yuvj420p(pc, bt709), 1920x1080, 1971 kb/s, 25 fps, 25 tbr, 1250 tbn (default)
    Metadata:
      creation_time   : 2013-11-27T01:38:48.000000Z
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (hevc (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 00000253f18aa780] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
[libx264 @ 00000253f18aa780] profile Constrained Baseline, level 4.0, 4:2:0, 8-bit
[libx264 @ 00000253f18aa780] 264 - core 161 - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=35.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, mp4, to 'b.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2mp41
    encoder         : Lavf59.2.101
  Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuvj420p(pc, bt709, progressive), 1920x1080, q=2-31, 25 fps, 12800 tbn (default)
    Metadata:
      creation_time   : 2013-11-27T01:38:48.000000Z
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc59.1.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame=26450 fps=256 q=-1.0 Lsize=  331714kB time=00:17:37.96 bitrate=2568.5kbits/s speed=10.2x
video:331604kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.032993%
[libx264 @ 00000253f18aa780] frame I:106   Avg QP:31.17  size:143921
[libx264 @ 00000253f18aa780] frame P:26344 Avg QP:34.09  size: 12310
[libx264 @ 00000253f18aa780] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 00000253f18aa780] mb P  I16..4:  1.3%  0.0%  0.0%  P16..4: 17.5%  0.0%  0.0%  0.0%  0.0%    skip:81.2%
[libx264 @ 00000253f18aa780] coded y,uvDC,uvAC intra: 48.5% 8.5% 1.4% inter: 8.1% 0.3% 0.0%
[libx264 @ 00000253f18aa780] i16 v,h,dc,p: 16% 30% 41% 13%
[libx264 @ 00000253f18aa780] i8c dc,h,v,p: 66% 17% 15%  2%
[libx264 @ 00000253f18aa780] kb/s:2567.58
Not enough memory resources are available to process this command.

FFMpeg Batch file – Super User

I have around 500 mp4 files. I would like to add Title, Artist, and Album tags to each mp4 file. I am trying to use FFmpeg with the following commands. How can I create a batch file to update all of the mp4 files?

ffmpeg -i Video1.mp4 -metadata title="Title1" -c copy D:\New_Songs\Video1.mp4
ffmpeg -i Video2.mp4 -metadata title="Title2" -c copy D:\New_Songs\Video2.mp4
ffmpeg -i Video3.mp4 -metadata title="Title3" -c copy D:\New_Songs\Video3.mp4
ffmpeg -i Video4.mp4 -metadata title="Title4" -c copy D:\New_Songs\Video4.mp4
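One way to batch this is a loop that derives the title from the file name (an assumption; swap in your own title source as needed). Sketched as a shell loop; a Windows batch for loop with %%~nf would be the equivalent:

```shell
# Copy each mp4 with a title tag taken from its file name; no re-encoding.
mkdir -p New_Songs
for f in *.mp4; do
  ffmpeg -i "$f" -metadata title="${f%.mp4}" -c copy "New_Songs/$f"
done
```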

image processing – How to create a cardboard .vr.jpg file in Windows Shell (DOS batch script) using ffmpeg and exiftool?

Starting from this incredibly complex script I am trying to write a stripped-down version which just creates a .vr.jpg file from a single equirectangular full 360×180 panorama, but the resulting image:

It should be “just” a matter of writing six EXIF/XMP tags in the file, plus one EXIF/XMP tag containing the image for the right eye; my script does all of this… but it actually does not work.

Any idea?

If I understand correctly this image…

photosphere

…values of tags for a full sphere should be:

  1. CroppedAreaLeftPixels = 0
  2. CroppedAreaTopPixels = 0
  3. CroppedAreaImageWidthPixels = Image Width
  4. CroppedAreaImageHeightPixels = Image Height
  5. FullPanoWidthPixels = Image Width
  6. FullPanoHeightPixels = Image Height

But neither this nor other combinations work.

Ideas?

The script:

set R=%~n1.vr.jpg
set O=%~n1-cardboard.vr.jpg

:mpo
call :set                                                     ImageWidthR ImageHeightR
exiftool -s2 -ImageWidth -ImageHeight "%R%"

for /f "tokens=1,2 usebackq delims=: " %%i in (`exiftool -s2 -ImageWidth -ImageHeight "%R%"`) do set %%i=%%j




set /a FullPanoWidthPixels=ImageWidth
set /a FullPanoHeightPixels=ImageHeight

set /a CroppedAreaImageWidthPixels=ImageWidth
set /a CroppedAreaImageHeightPixels=ImageHeight

set /a CroppedAreaLeftPixels=0
set /a CroppedAreaTopPixels=ImageHeight/2



exiftool -XMP-GPano:all^
 -XMP-GPano:UsePanoramaViewer="True"^
 -XMP-GPano:CroppedAreaLeftPixels="%CroppedAreaLeftPixels%"^
 -XMP-GPano:CroppedAreaTopPixels="%CroppedAreaTopPixels%"^
 -XMP-GPano:CroppedAreaImageWidthPixels="%CroppedAreaImageWidthPixels%"^
 -XMP-GPano:CroppedAreaImageHeightPixels="%CroppedAreaImageHeightPixels%"^
 -XMP-GPano:FullPanoWidthPixels="%FullPanoWidthPixels%"^
 -XMP-GPano:FullPanoHeightPixels="%FullPanoHeightPixels%"^
 -XMP-GPano:ProjectionType="equirectangular"^
 -XMP-GPano:LargestValidInteriorRectLeft="0"^
 -XMP-GPano:LargestValidInteriorRectTop="0"^
 -XMP-GPano:LargestValidInteriorRectWidth="%FullPanoWidthPixels%"^
 -XMP-GPano:LargestValidInteriorRectHeight="%FullPanoHeightPixels%"^
 -XMP-GPano:InitialHorizontalFOVDegrees="50"^
                -XMP-GImage:"ImageMimeType=image/jpeg"^
                -XMP-GImage:"ImageData<=%~n1.vr.jpg" ^
 %~n1.vr.jpg


goto :eof

:set
 @if "%1"=="" goto :EOF
 @set %1=
 @shift
@goto :set

image processing – How to fill with transparent color the empty space created by V360 filter of FFMPEG while converting from flat to equirectangular?

I am using this command to convert a small region of an equirectangular image to flat (BTW: how can I specify how large the extracted section is?):

ffmpeg -hide_banner    -i input.png -vf v360=e:flat -y output1.png

full image equirect

Extracted section:

extracted

Then I modify the cropping as I need:

edited section

Then I convert it back to equirectangular:

ffmpeg  -hide_banner -i output1.png -vf v360=flat:e -y output2.png

I get a big equirectangular white image, with my section in the center:

equi section

Now I want to overlay it to the original image…

ffmpeg  -hide_banner   -i input.png  -i  output2.png -filter_complex "overlay" -y output3.png

… but of course the original image is completely hidden by the white part of output2.png, unless I manually open it in IrfanView, set the white as transparent, and save the image before overlaying it on the original image.

Can I use FFMpeg to make the white part of the image transparent without using IrfanView? I experimented randomly with fillcolor and color, but I can’t get it to work.
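Two approaches that may help here, sketched without guarantees for this exact pipeline: the v360 filter has an alpha_mask option that marks all unmapped pixels as transparent (avoiding the white fill entirely), and colorkey can key out the white after the fact:

```shell
# Option 1: let v360 itself write transparency for unmapped pixels.
ffmpeg -hide_banner -i output1.png -vf "v360=flat:e:alpha_mask=1" -y output2.png

# Option 2: key out the white fill afterwards (tune similarity/blend).
ffmpeg -hide_banner -i output2.png -vf "format=rgba,colorkey=white:0.05:0.02" -y output2t.png
```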

fisheye – How to calibrate a specific 360 camera for FFMpeg to properly reproject to equirectangular?

I have a generic 360 camera, and when I convert its 235° fisheye output into equirectangular using FFMpeg, I don’t get good results; the image appears squeezed at the bottom.

How could I determine the exact transformation needed to properly convert my images into equirectangular? FFMpeg has a “remap” filter, which uses 2 .pgm files for remapping… but how do I create such files for my camera?

There is also lensfun and lenscorrection, but the question is the same: how do I tune them for my camera?

I am using the generic filter v360 with these parameters:

ffmpeg -i input.jpg -vf v360=fisheye:e:ih_fov=235:iv_fov=235 -y output.jpg

These are 2 croppings on a detail of the equirectangular output:

cropped frames

Look at how much the car is squeezed!

This also causes 360 video stabilization to fail due to too much distortion.

ffmpeg – How to synchronize audio and video when using x11grab and pulse?

I am using FFMpeg to grab the desktop using x11grab. I am also using pulse for audio. However, in my FFMpeg output, the audio is ahead of the video. My arguments are:

ffmpeg \
      -hide_banner \
      -loglevel error \
      -f x11grab \
      -framerate 30 \
      -draw_mouse 0 \
      -video_size <width>x<height> \
      -thread_queue_size 1024 \
      -i :0.0+0,0 \
      -f pulse \
      -ac 2 \
      -thread_queue_size 1024 \
      -i <pulse device name> \
      -map 0:v:0 \
      -c:v libx264 \
      -preset ultrafast \
      -minrate:v 500K \
      -maxrate:v 4M \
      -bufsize:v 4M \
      -x264-params keyint=90 \
      -pix_fmt yuv420p \
      -map 1:a:0 \
      -c:a aac \
      -ab 128k \
      -ac 2 \
      -ar 44100 \
      -f flv \
      <rtmp url>
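Not from the original post, but a common workaround for a fixed audio lead is to delay the audio input with -itsoffset, which applies to the input that follows it. A sketch with a hypothetical 0.3 s offset (the right value has to be found by trial):

```shell
# Shift the pulse input 0.3 s later (assumed value) so the audio
# lines up with the x11grab video; tune the offset empirically.
ffmpeg -f x11grab -framerate 30 -i :0.0+0,0 \
       -itsoffset 0.3 -f pulse -ac 2 -i default \
       -map 0:v:0 -map 1:a:0 \
       -c:v libx264 -preset ultrafast -c:a aac out.flv
```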

How do I lighten-only N frames not a power of two in ffmpeg?

I was trying to apply a lighten-only filter to combine multiple frames like so:

ffmpeg -i 20210730-223539.mp4 -vf "tblend=lighten,framestep=100,setpts=0.01*PTS" -r 60 -crf 22 -an -c:v libx265 20210730-223539-OUT.mp4

This footage has a timestamp and I was expecting the timestamp to become unreadable in the lower digits as the lighten-only filter combines frames. However, I can clearly see each frame advancing the timestamp by a crisp 7 seconds. I went back and re-read the documentation and it seems that tblend always only ever combines two frames, no matter what framestep I supply!

I’ve already read the question and associated answer for How do I blend/average N amount of frames as opposed to only 2?. However, in my case I was trying to use tblend with the lighten only filter to create a smoothed time lapse that captures features like lightning which are only present in 1 or 2 frames while I am trying to speed up the video by a factor of 100 or more.

I have previously used ffmpeg to create a single still image from an entire video file by daisy-chaining tblend commands, but this required a lot of work to fit the length of the video into a power-of-two number of frames, and in the end only produced a single image.

How can I apply a lighten only filter to successive groups of N frames to produce a new sped up video, where N will not be a power of two?