\input texinfo @c -*- texinfo -*-

@settitle FFmpeg FAQ
@titlepage
@center @titlefont{FFmpeg FAQ}
@end titlepage

@top

@contents

@chapter General Questions

@section Why doesn't FFmpeg support feature [xyz]?

Because no one has taken on that task yet. FFmpeg development is
driven by the tasks that are important to the individual developers.
If there is a feature that is important to you, the best way to get
it implemented is to undertake the task yourself or sponsor a developer.

@section FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?

No. Windows DLLs are not portable, and they are bloated and often slow.
Moreover, FFmpeg strives to support all codecs natively.
A DLL loader is not conducive to that goal.

@section I cannot read this file although this format seems to be supported by ffmpeg.

Even if ffmpeg can read the container format, it may not support all its
codecs. Please consult the supported codec list in the ffmpeg
documentation.

@section Which codecs are supported by Windows?

Windows does not support standard formats like MPEG very well, unless you
install some additional codecs.

The following list of video codecs should work on most Windows systems:
@table @option
@item msmpeg4v2
.avi/.asf
@item msmpeg4
.asf only
@item wmv1
.asf only
@item wmv2
.asf only
@item mpeg4
Only if you have some MPEG-4 codec like ffdshow or Xvid installed.
@item mpeg1video
.mpg only
@end table
Note that ASF files often have .wmv or .wma extensions in Windows. It should also
be mentioned that Microsoft claims a patent on the ASF format, and may sue
or threaten users who create ASF files with non-Microsoft software. It is
strongly advised to avoid ASF where possible.

The following list of audio codecs should work on most Windows systems:
@table @option
@item adpcm_ima_wav
@item adpcm_ms
@item pcm_s16le
always
@item libmp3lame
If some MP3 codec like LAME is installed.
@end table


@chapter Compilation

@section @code{error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'}

This is a bug in gcc. Do not report it to us. Instead, please report it to
the gcc developers. Note that we will not add workarounds for gcc bugs.

Also note that (some of) the gcc developers believe this is not a bug or
not a bug they should fix:
@url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
Then again, some of them do not know the difference between an undecidable
problem and an NP-hard problem...

@section I have installed this library with my distro's package manager. Why does @command{configure} not see it?

Distributions usually split libraries into several packages. The main package
contains the files necessary to run programs using the library. The
development package contains the files necessary to build programs using the
library. Sometimes, docs and/or data are in a separate package too.

To build FFmpeg, you need to install the development package.
It is usually
called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
build is finished, but be sure to keep the main package.

@chapter Usage

@section ffmpeg does not work; what is wrong?

Try a @code{make distclean} in the ffmpeg source directory before the build.
If this does not help, see
@url{http://ffmpeg.org/bugreports.html}.

@section How do I encode single pictures into movies?

First, rename your pictures to follow a numerical sequence.
For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:

@example
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
@end example

Notice that @samp{%d} is replaced by the image number.

@file{img%03d.jpg} means the sequence @file{img001.jpg}, @file{img002.jpg}, etc.

Use the @option{-start_number} option to declare a starting number for
the sequence. This is useful if your sequence does not start with
@file{img001.jpg} but is still in numerical order. The following
example will start with @file{img100.jpg}:

@example
ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
@end example

If you have a large number of pictures to rename, you can use the
following command to ease the burden. The command, using Bourne
shell syntax, symbolically links all files in the current directory
that match @code{*jpg} to the @file{/tmp} directory in the sequence of
@file{img001.jpg}, @file{img002.jpg} and so on.
@example
x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example

If you want to sequence them by oldest modified first, substitute
@code{$(ls -r -t *jpg)} in place of @code{*jpg}.

Then run:

@example
ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
@end example

The same logic is used for any image format that ffmpeg reads.

You can also use @command{cat} to pipe images to ffmpeg:

@example
cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
@end example

@section How do I encode a movie to single pictures?

Use:

@example
ffmpeg -i movie.mpg movie%d.jpg
@end example

The @file{movie.mpg} used as input will be converted to
@file{movie1.jpg}, @file{movie2.jpg}, etc...

Instead of relying on file format self-recognition, you may also use
@table @option
@item -c:v ppm
@item -c:v png
@item -c:v mjpeg
@end table
to force the encoding.

Applying that to the previous example:
@example
ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
@end example

Beware that there is no "jpeg" codec. Use "mjpeg" instead.

@section Why do I see a slight quality degradation with multithreaded MPEG* encoding?

For multithreaded MPEG* encoding, the encoded slices must be independent,
otherwise thread @var{n} would practically have to wait for thread @var{n}-1
to finish, so it is quite logical that there is a small reduction in quality.
This is not a bug.
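The zero-padded numbering that the image-sequence renaming loop above relies on can be sanity-checked with pure shell, with no ffmpeg and no real image files needed (the three input names below are hypothetical):

```shell
# Print the destination names the renaming loop would generate,
# using printf to zero-pad the counter to three digits.
x=1
for name in clip_a.jpg clip_b.jpg clip_c.jpg; do  # hypothetical inputs
    counter=$(printf %03d "$x")
    echo "img$counter.jpg"
    x=$((x + 1))
done
```

With real files you would replace the @code{echo} with the @code{ln -s} call shown in the loop above.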
@section How can I read from the standard input or write to the standard output?

Use @file{-} as the file name.

@section -f jpeg doesn't work.

Try '-f image2 test%d.jpg'.

@section Why can I not change the frame rate?

Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates.
Choose a different codec with the -c:v command line option.

@section How do I encode Xvid or DivX video with ffmpeg?

Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
standard (note that there are many other coding formats that use this
same standard). Thus, use '-c:v mpeg4' to encode in these formats. The
default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want
a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will
force the fourcc 'xvid' to be stored as the video fourcc rather than the
default.

@section What are good parameters for encoding high quality MPEG-4?

'-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2',
things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.

@section What are good parameters for encoding high quality MPEG-1/MPEG-2?

'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2'
but beware that '-g 100' might cause problems with some decoders.
Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.

@section Interlaced video looks very bad when encoded with ffmpeg, what is wrong?

You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced
material, and try '-top 0/1' if the result looks really messed-up.
@section How can I read DirectShow files?

If you have built FFmpeg with @code{./configure --enable-avisynth}
(only possible on MinGW/Cygwin platforms),
then you may use any file that DirectShow can read as input.

Just create an "input.avs" text file with this single line ...
@example
DirectShowSource("C:\path to your file\yourfile.asf")
@end example
... and then feed that text file to ffmpeg:
@example
ffmpeg -i input.avs
@end example

For any further help on Avisynth, please visit the
@uref{http://www.avisynth.org/, Avisynth homepage}.

@section How can I join video files?

"Joining" video files is quite ambiguous. The following list explains the
different kinds of "joining" and points out how those are addressed in
FFmpeg. To join video files may mean:

@itemize

@item
To put them one after the other: this is called to @emph{concatenate} them
(in short: concat) and is addressed
@ref{How can I concatenate video files, in this very faq}.

@item
To put them together in the same file, to let the user choose between the
different versions (example: different audio languages): this is called to
@emph{multiplex} them together (in short: mux), and is done by simply
invoking ffmpeg with several @option{-i} options.

@item
For audio, to put all channels together in a single stream (example: two
mono streams into one stereo stream): this is sometimes called to
@emph{merge} them, and can be done using the
@url{http://ffmpeg.org/ffmpeg-filters.html#amerge, @code{amerge}} filter.
@item
For audio, to play one on top of the other: this is called to @emph{mix}
them, and can be done by first merging them into a single stream and then
using the @url{http://ffmpeg.org/ffmpeg-filters.html#pan, @code{pan}} filter to mix
the channels at will.

@item
For video, to display both together, side by side or one on top of a part of
the other; this can be done using the
@url{http://ffmpeg.org/ffmpeg-filters.html#overlay, @code{overlay}} video filter.

@end itemize

@anchor{How can I concatenate video files}
@section How can I concatenate video files?

There are several solutions, depending on the exact circumstances.

@subsection Concatenating using the concat @emph{filter}

FFmpeg has a @url{http://ffmpeg.org/ffmpeg-filters.html#concat,
@code{concat}} filter designed specifically for that, with examples in the
documentation. This operation is recommended if you need to re-encode.

@subsection Concatenating using the concat @emph{demuxer}

FFmpeg has a @url{http://www.ffmpeg.org/ffmpeg-formats.html#concat,
@code{concat}} demuxer which you can use when you want to avoid a re-encode and
your format does not support file-level concatenation.

@subsection Concatenating using the concat @emph{protocol} (file level)

FFmpeg has a @url{http://ffmpeg.org/ffmpeg-protocols.html#concat,
@code{concat}} protocol designed specifically for that, with examples in the
documentation.

A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow concatenating
video by merely concatenating the files containing them.
Hence you may concatenate your multimedia files by first transcoding them to
these privileged formats, then using the humble @code{cat} command (or the
equally humble @code{copy} under Windows), and finally transcoding back to your
format of choice.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Additionally, you can use the @code{concat} protocol instead of @code{cat} or
@code{copy}, which avoids creating a potentially huge intermediate file.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Note that you may need to escape the character "|", which is special for many
shells.
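That quoting issue can be illustrated without ffmpeg; in this sketch (file names hypothetical) the quotes keep the "|" literal instead of letting the shell start a pipeline:

```shell
# Quoted, the | is passed through verbatim; unquoted, the shell
# would interpret it as a pipe between two commands.
spec='concat:intermediate1.mpg|intermediate2.mpg'
echo "$spec"
```

The same effect can be had by escaping the character as @code{\|}.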
Another option is to use named pipes, if your platform supports them:

@example
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -c:v mpeg4 -acodec libmp3lame output.avi
@end example

@subsection Concatenating using raw audio and video

Similarly, the yuv4mpegpipe format and the raw video and raw audio codecs also
allow concatenation, and the transcoding step is almost lossless.
When using multiple yuv4mpegpipes, the first line needs to be discarded
from all but the first stream. This can be accomplished by piping through
@code{tail} as seen below. Note that when piping through @code{tail} you
must use command grouping, @code{@{ ;@}}, to background properly.
For example, let's say we want to concatenate two FLV files into an
output.flv file:

@example
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
       -f yuv4mpegpipe -i all.v \
       -y output.flv
rm temp[12].[av] all.[av]
@end example

@section -profile option fails when encoding H.264 video with AAC audio

@command{ffmpeg} prints an error like

@example
Undefined constant or missing '(' in 'baseline'
Unable to parse option value "baseline"
Error setting option profile to value baseline.
@end example

Short answer: write @option{-profile:v} instead of @option{-profile}.

Long answer: this happens because the @option{-profile} option can apply to both
video and audio. Specifically, the AAC encoder also defines some profiles, none
of which are named @var{baseline}.

The solution is to apply the @option{-profile} option to the video stream only
by using @url{http://ffmpeg.org/ffmpeg.html#Stream-specifiers-1, Stream specifiers}.
Appending @code{:v} to it will do exactly that.
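A minimal before/after sketch (input and output names are hypothetical); the corrected command is only printed here, so the block runs without any media files:

```shell
# Broken:  -profile alone is also parsed by the AAC encoder and fails.
# Fixed:   the :v stream specifier restricts the option to the video stream.
fixed='ffmpeg -i in.mov -c:v libx264 -profile:v baseline -c:a aac out.mp4'
echo "$fixed"
```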
@section Using @option{-f lavfi}, audio becomes mono for no apparent reason.

Use @option{-dumpgraph -} to find out exactly where the channel layout is
lost.

Most likely, it is through an auto-inserted @code{aresample} filter. Try to
understand why the converting filter was needed at that place.

Just before the output is a likely place, as @option{-f lavfi} currently
only supports packed S16.

Then insert the correct @code{aformat} explicitly in the filtergraph,
specifying the exact format:

@example
aformat=sample_fmts=s16:channel_layouts=stereo
@end example

@section Why does FFmpeg not see the subtitles in my VOB file?

VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles start only later in the file,
they will not be initially detected.

Some applications, including the @code{ffmpeg} command-line tool, can only
work with streams that were detected during the initial scan; streams that
are detected later are ignored.

The size of the initial scan is controlled by two options: @code{probesize}
(default ~5 MB) and @code{analyzeduration} (default 5,000,000 µs = 5 s). For
the subtitle stream to be detected, both values must be large enough.

@section Why was the @command{ffmpeg} @option{-sameq} option removed? What should I use instead?

The @option{-sameq} option meant "same quantizer", and made sense only in a
very limited set of cases.
Unfortunately, a lot of people mistook it for yading@10: "same quality" and used it in places where it did not make sense: it had yading@10: roughly the expected visible effect, but achieved it in a very inefficient yading@10: way. yading@10: yading@10: Each encoder has its own set of options to set the quality-vs-size balance, yading@10: use the options for the encoder you are using to set the quality level to a yading@10: point acceptable for your tastes. The most common options to do that are yading@10: @option{-qscale} and @option{-qmax}, but you should peruse the documentation yading@10: of the encoder you chose. yading@10: yading@10: @chapter Development yading@10: yading@10: @section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat? yading@10: yading@10: Yes. Check the @file{doc/examples} directory in the source yading@10: repository, also available online at: yading@10: @url{https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples}. yading@10: yading@10: Examples are also installed by default, usually in yading@10: @code{$PREFIX/share/ffmpeg/examples}. yading@10: yading@10: Also you may read the Developers Guide of the FFmpeg documentation. Alternatively, yading@10: examine the source code for one of the many open source projects that yading@10: already incorporate FFmpeg at (@url{projects.html}). yading@10: yading@10: @section Can you support my C compiler XXX? yading@10: yading@10: It depends. If your compiler is C99-compliant, then patches to support yading@10: it are likely to be welcome if they do not pollute the source code yading@10: with @code{#ifdef}s related to the compiler. yading@10: yading@10: @section Is Microsoft Visual C++ supported? yading@10: yading@10: Yes. Please see the @uref{platform.html, Microsoft Visual C++} yading@10: section in the FFmpeg documentation. yading@10: yading@10: @section Can you add automake, libtool or autoconf support? yading@10: yading@10: No. 
These tools are too bloated and they complicate the build.

@section Why not rewrite FFmpeg in object-oriented C++?

FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.

@section Why are the ffmpeg programs devoid of debugging symbols?

The build process creates ffmpeg_g, ffplay_g, etc. which contain full debug
information. Those binaries are stripped to create ffmpeg, ffplay, etc. If
you need the debug information, use the *_g versions.

@section I do not like the LGPL, can I contribute code under the GPL instead?

Yes, as long as the code is optional and can easily and cleanly be placed
under @code{#if CONFIG_GPL} without breaking anything. So, for example, a new
codec or filter would be OK under GPL while a bug fix to LGPL code would not.

@section I'm using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.

FFmpeg builds static libraries by default. In static libraries, dependencies
are not handled. That has two consequences. First, you must specify the
libraries in dependency order: @code{-lavdevice} must come before
@code{-lavformat}, @code{-lavutil} must come after everything else, etc.
Second, external libraries that are used in FFmpeg have to be specified too.

An easy way to get the full list of required libraries in dependency order
is to use @code{pkg-config}.
@example
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example

See @file{doc/examples/Makefile} and @file{doc/examples/pc-uninstalled} for
more details.

@section I'm using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.

FFmpeg is a pure C project, so to use the libraries within your C++ application
you need to explicitly state that you are using a C library. You can do this by
encompassing your FFmpeg includes with @code{extern "C"}.

See @url{http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3}.

@section I'm using libavutil from within my C++ application but the compiler complains that 'UINT64_C' was not declared in this scope

FFmpeg is a pure C project using C99 math features; in order to enable C++
to use them, you have to append -D__STDC_CONSTANT_MACROS to your CXXFLAGS.

@section I have a file in memory / an API different from *open/*read/ libc, how do I use it with libavformat?

You have to create a custom AVIOContext using @code{avio_alloc_context};
see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c}
in the MPlayer or MPlayer2 sources.

@section Where can I find libav* headers for Pascal/Delphi?

See @url{http://www.iversenit.dk/dev/ffmpeg-headers/}.

@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?

See @url{http://www.ffmpeg.org/~michael/}.

@section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?

Even if peculiar, since it is network oriented, RTP is a container like any
other.
You have to @emph{demux} RTP before feeding the payload to libavcodec.
In this specific case, please look at RFC 4629 to see how it should be done.

@section AVStream.r_frame_rate is wrong, it is much larger than the frame rate.

r_frame_rate is NOT the average frame rate; it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
wrong if it is larger than the average!
For example, if you have mixed 25 and 30 fps content, then r_frame_rate
will be 150.

@section Why is @code{make fate} not running all tests?

Make sure you have the fate-suite samples and that the @code{SAMPLES} Make
variable, the @code{FATE_SAMPLES} environment variable, or the
@code{--samples} @command{configure} option is set to the right path.

@section Why is @code{make fate} not finding the samples?

Do you happen to have a @code{~} character in the samples path to indicate a
home directory? The value is used in ways where the shell cannot expand it,
causing FATE to not find files. Just replace @code{~} with the full path.

@bye