\input texinfo @c -*- texinfo -*-

@settitle FFmpeg FAQ
@titlepage
@center @titlefont{FFmpeg FAQ}
@end titlepage

@top

@contents

@chapter General Questions

@section Why doesn't FFmpeg support feature [xyz]?

Because no one has taken on that task yet. FFmpeg development is
driven by the tasks that are important to the individual developers.
If there is a feature that is important to you, the best way to get
it implemented is to undertake the task yourself or sponsor a developer.

@section FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?

No. Windows DLLs are not portable, and they are often bloated and slow.
Moreover, FFmpeg strives to support all codecs natively.
A DLL loader is not conducive to that goal.

@section I cannot read this file although this format seems to be supported by ffmpeg.

Even if ffmpeg can read the container format, it may not support all its
codecs. Please consult the supported codec list in the ffmpeg
documentation.

@section Which codecs are supported by Windows?

Windows does not support standard formats like MPEG very well, unless you
install some additional codecs.

The following list of video codecs should work on most Windows systems:
@table @option
@item msmpeg4v2
.avi/.asf
@item msmpeg4
.asf only
@item wmv1
.asf only
@item wmv2
.asf only
@item mpeg4
Only if you have some MPEG-4 codec like ffdshow or Xvid installed.
@item mpeg1video
.mpg only
@end table
Note that ASF files often have .wmv or .wma extensions in Windows. It should also
be mentioned that Microsoft claims a patent on the ASF format, and may sue
or threaten users who create ASF files with non-Microsoft software. It is
strongly advised to avoid ASF where possible.

The following list of audio codecs should work on most Windows systems:
@table @option
@item adpcm_ima_wav
@item adpcm_ms
@item pcm_s16le
always
@item libmp3lame
If some MP3 codec like LAME is installed.
@end table


@chapter Compilation

@section @code{error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'}

This is a bug in gcc. Do not report it to us. Instead, please report it to
the gcc developers. Note that we will not add workarounds for gcc bugs.

Also note that (some of) the gcc developers believe this is not a bug or
not a bug they should fix:
@url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
Then again, some of them do not know the difference between an undecidable
problem and an NP-hard problem...

@section I have installed this library with my distro's package manager. Why does @command{configure} not see it?

Distributions usually split libraries in several packages. The main package
contains the files necessary to run programs using the library. The
development package contains the files necessary to build programs using the
library. Sometimes, docs and/or data are in a separate package too.

To build FFmpeg, you need to install the development package. It is usually
called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
build is finished, but be sure to keep the main package.

@chapter Usage

@section ffmpeg does not work; what is wrong?

Try a @code{make distclean} in the ffmpeg source directory before the build.
If this does not help, see
@url{http://ffmpeg.org/bugreports.html}.

@section How do I encode single pictures into movies?

First, rename your pictures to follow a numerical sequence.
For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:

@example
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
@end example

Notice that @samp{%d} is replaced by the image number.

@file{img%03d.jpg} means the sequence @file{img001.jpg}, @file{img002.jpg}, etc.

Use the @option{-start_number} option to declare a starting number for
the sequence. This is useful if your sequence does not start with
@file{img001.jpg} but is still in numerical order. The following
example will start with @file{img100.jpg}:

@example
ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
@end example

If you have a large number of pictures to rename, you can use the
following command to ease the burden. The command, using Bourne
shell syntax, symbolically links all files in the current directory
that match @code{*jpg} to the @file{/tmp} directory in the sequence of
@file{img001.jpg}, @file{img002.jpg} and so on.

@example
x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example

If you want to sequence them by oldest modified first, substitute
@code{$(ls -r -t *jpg)} in place of @code{*jpg}.
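
You can preview the numbering such a loop produces without touching any
files; this is just a sketch using a fixed list of names in place of the
@code{*jpg} glob:

@example
x=1; for i in a.jpg b.jpg c.jpg; do printf 'img%03d.jpg\n' "$x"; x=$(($x+1)); done
@end example

This prints @file{img001.jpg}, @file{img002.jpg} and @file{img003.jpg}, one
per line.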

Then run:

@example
ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
@end example

The same logic is used for any image format that ffmpeg reads.

You can also use @command{cat} to pipe images to ffmpeg:

@example
cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
@end example

@section How do I encode a movie into single pictures?

Use:

@example
ffmpeg -i movie.mpg movie%d.jpg
@end example

The @file{movie.mpg} used as input will be converted to
@file{movie1.jpg}, @file{movie2.jpg}, etc...

Instead of relying on file format self-recognition, you may also use
@table @option
@item -c:v ppm
@item -c:v png
@item -c:v mjpeg
@end table
to force the encoding.

Applying that to the previous example:
@example
ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
@end example

Beware that there is no "jpeg" codec. Use "mjpeg" instead.

@section Why do I see a slight quality degradation with multithreaded MPEG* encoding?

For multithreaded MPEG* encoding, the encoded slices must be independent,
otherwise thread n would practically have to wait for n-1 to finish, so it's
quite logical that there is a small reduction of quality. This is not a bug.

@section How can I read from the standard input or write to the standard output?

Use @file{-} as the file name.
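
For example, to read a movie from standard input and write Matroska to
standard output (a sketch; the output muxer must be given explicitly with
@option{-f}, since there is no output file name to guess it from):

@example
cat input.avi | ffmpeg -i - -f matroska - | ffplay -
@end example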

@section -f jpeg doesn't work.

Try '-f image2 test%d.jpg'.

@section Why can I not change the frame rate?

Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates.
Choose a different codec with the -c:v command line option.
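
For example, to re-encode at 26 fps with the native MPEG-4 encoder, which
accepts arbitrary frame rates (a sketch; the file names are illustrative):

@example
ffmpeg -i input.mpg -r 26 -c:v mpeg4 output.avi
@end example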

@section How do I encode Xvid or DivX video with ffmpeg?

Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
standard (note that there are many other coding formats that use this
same standard). Thus, use '-c:v mpeg4' to encode in these formats. The
default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want
a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will
force the fourcc 'xvid' to be stored as the video fourcc rather than the
default.
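
Putting that together (the file names are illustrative):

@example
ffmpeg -i input.avi -c:v mpeg4 -vtag xvid output.avi
@end example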

@section Which are good parameters for encoding high quality MPEG-4?

'-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2',
things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.

@section Which are good parameters for encoding high quality MPEG-1/MPEG-2?

'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2'
but beware that '-g 100' might cause problems with some decoders.
Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.

@section Interlaced video looks very bad when encoded with ffmpeg, what is wrong?

You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced
material, and try '-top 0/1' if the result looks really messed-up.

@section How can I read DirectShow files?

If you have built FFmpeg with @code{./configure --enable-avisynth}
(only possible on MinGW/Cygwin platforms),
then you may use any file that DirectShow can read as input.

Just create an "input.avs" text file with this single line ...
@example
DirectShowSource("C:\path to your file\yourfile.asf")
@end example
... and then feed that text file to ffmpeg:
@example
ffmpeg -i input.avs
@end example

For ANY other help on Avisynth, please visit the
@uref{http://www.avisynth.org/, Avisynth homepage}.

@section How can I join video files?

"Joining" video files is quite ambiguous. The following list explains the
different kinds of "joining" and points out how those are addressed in
FFmpeg. To join video files may mean:

@itemize

@item
To put them one after the other: this is called @emph{concatenating} them
(in short: concat) and is addressed
@ref{How can I concatenate video files, in this very faq}.

@item
To put them together in the same file, to let the user choose between the
different versions (example: different audio languages): this is called
@emph{multiplexing} them together (in short: muxing), and is done by simply
invoking ffmpeg with several @option{-i} options.

@item
For audio, to put all channels together in a single stream (example: two
mono streams into one stereo stream): this is sometimes called
@emph{merging} them, and can be done using the
@url{http://ffmpeg.org/ffmpeg-filters.html#amerge, @code{amerge}} filter.

@item
For audio, to play one on top of the other: this is called @emph{mixing}
them, and can be done by first merging them into a single stream and then
using the @url{http://ffmpeg.org/ffmpeg-filters.html#pan, @code{pan}} filter to mix
the channels at will.

@item
For video, to display both together, side by side or one on top of a part of
the other; it can be done using the
@url{http://ffmpeg.org/ffmpeg-filters.html#overlay, @code{overlay}} video filter.

@end itemize
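
For instance, merging two mono files into a single stereo stream with
@code{amerge} could look like this (a sketch; the file names are
illustrative):

@example
ffmpeg -i left.wav -i right.wav -filter_complex "[0:a][1:a]amerge" stereo.wav
@end example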

@anchor{How can I concatenate video files}
@section How can I concatenate video files?

There are several solutions, depending on the exact circumstances.

@subsection Concatenating using the concat @emph{filter}

FFmpeg has a @url{http://ffmpeg.org/ffmpeg-filters.html#concat,
@code{concat}} filter designed specifically for that, with examples in the
documentation. This operation is recommended if you need to re-encode.

@subsection Concatenating using the concat @emph{demuxer}

FFmpeg has a @url{http://www.ffmpeg.org/ffmpeg-formats.html#concat,
@code{concat}} demuxer which you can use when you want to avoid a re-encode and
your format doesn't support file-level concatenation.

@subsection Concatenating using the concat @emph{protocol} (file level)

FFmpeg has a @url{http://ffmpeg.org/ffmpeg-protocols.html#concat,
@code{concat}} protocol designed specifically for that, with examples in the
documentation.

A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow video to be
concatenated by merely concatenating the files containing it.

Hence you may concatenate your multimedia files by first transcoding them to
these privileged formats, then using the humble @code{cat} command (or the
equally humble @code{copy} under Windows), and finally transcoding back to your
format of choice.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Additionally, you can use the @code{concat} protocol instead of @code{cat} or
@code{copy}, which will avoid creation of a potentially huge intermediate file.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Note that you may need to escape the character "|", which is special for many
shells.

Another option is usage of named pipes, should your platform support it:

@example
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -c:v mpeg4 -acodec libmp3lame output.avi
@end example

@subsection Concatenating using raw audio and video

Similarly, the yuv4mpegpipe format, and the raw video and raw audio codecs also
allow concatenation, and the transcoding step is almost lossless.
When using multiple yuv4mpegpipe(s), the first line needs to be discarded
from all but the first stream. This can be accomplished by piping through
@code{tail} as seen below. Note that when piping through @code{tail} you
must use command grouping, @code{@{ ;@}}, to background properly.

For example, let's say we want to concatenate two FLV files into an
output.flv file:

@example
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
       -f yuv4mpegpipe -i all.v \
       -y output.flv
rm temp[12].[av] all.[av]
@end example

@section -profile option fails when encoding H.264 video with AAC audio

@command{ffmpeg} prints an error like

@example
Undefined constant or missing '(' in 'baseline'
Unable to parse option value "baseline"
Error setting option profile to value baseline.
@end example

Short answer: write @option{-profile:v} instead of @option{-profile}.

Long answer: this happens because the @option{-profile} option can apply to both
video and audio. Specifically, the AAC encoder also defines some profiles, none
of which are named @var{baseline}.

The solution is to apply the @option{-profile} option to the video stream only
by using @url{http://ffmpeg.org/ffmpeg.html#Stream-specifiers-1, Stream specifiers}.
Appending @code{:v} to it will do exactly that.

@section Using @option{-f lavfi}, audio becomes mono for no apparent reason.

Use @option{-dumpgraph -} to find out exactly where the channel layout is
lost.

Most likely, it is through @code{auto-inserted aresample}. Try to understand
why the converting filter was needed at that place.

Just before the output is a likely place, as @option{-f lavfi} currently
only supports packed S16.

Then insert the correct @code{aformat} explicitly in the filtergraph,
specifying the exact format.

@example
aformat=sample_fmts=s16:channel_layouts=stereo
@end example

@section Why does FFmpeg not see the subtitles in my VOB file?

VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles happen only later in the file,
they will not be initially detected.

Some applications, including the @code{ffmpeg} command-line tool, can only
work with streams that were detected during the initial scan; streams that
are detected later are ignored.

The size of the initial scan is controlled by two options: @code{probesize}
(default ~5 MB) and @code{analyzeduration} (default 5,000,000 µs = 5 s). For
the subtitle stream to be detected, both values must be large enough.
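
For example, to scan roughly the first 100 MB and 20 seconds of a VOB file
before deciding which streams it contains (a sketch; the values are
illustrative, @code{analyzeduration} is in microseconds, and both options
must appear before @option{-i}):

@example
ffmpeg -probesize 100000000 -analyzeduration 20000000 -i movie.vob -c copy output.mkv
@end example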

@section Why was the @command{ffmpeg} @option{-sameq} option removed? What to use instead?

The @option{-sameq} option meant "same quantizer", and made sense only in a
very limited set of cases. Unfortunately, a lot of people mistook it for
"same quality" and used it in places where it did not make sense: it had
roughly the expected visible effect, but achieved it in a very inefficient
way.

Each encoder has its own set of options to set the quality-vs-size balance;
use the options for the encoder you are using to set the quality level to a
point acceptable for your tastes. The most common options to do that are
@option{-qscale} and @option{-qmax}, but you should peruse the documentation
of the encoder you chose.

@chapter Development

@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?

Yes. Check the @file{doc/examples} directory in the source
repository, also available online at:
@url{https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples}.

Examples are also installed by default, usually in
@code{$PREFIX/share/ffmpeg/examples}.

You may also read the Developers Guide of the FFmpeg documentation, or
examine the source code for one of the many open source projects that
already incorporate FFmpeg at @url{projects.html}.

@section Can you support my C compiler XXX?

It depends. If your compiler is C99-compliant, then patches to support
it are likely to be welcome if they do not pollute the source code
with @code{#ifdef}s related to the compiler.

@section Is Microsoft Visual C++ supported?

Yes. Please see the @uref{platform.html, Microsoft Visual C++}
section in the FFmpeg documentation.

@section Can you add automake, libtool or autoconf support?

No. These tools are too bloated and they complicate the build.

@section Why not rewrite FFmpeg in object-oriented C++?

FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.

@section Why are the ffmpeg programs devoid of debugging symbols?

The build process creates ffmpeg_g, ffplay_g, etc. which contain full debug
information. Those binaries are stripped to create ffmpeg, ffplay, etc. If
you need the debug information, use the *_g versions.

@section I do not like the LGPL, can I contribute code under the GPL instead?

Yes, as long as the code is optional and can easily and cleanly be placed
under @code{#if CONFIG_GPL} without breaking anything. So, for example, a new codec
or filter would be OK under GPL while a bug fix to LGPL code would not.

@section I'm using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.

FFmpeg builds static libraries by default. In static libraries, dependencies
are not handled. That has two consequences. First, you must specify the
libraries in dependency order: @code{-lavdevice} must come before
@code{-lavformat}, @code{-lavutil} must come after everything else, etc.
Second, external libraries that are used in FFmpeg have to be specified too.

An easy way to get the full list of required libraries in dependency order
is to use @code{pkg-config}.

@example
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example

See @file{doc/examples/Makefile} and @file{doc/examples/pc-uninstalled} for
more details.

@section I'm using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.

FFmpeg is a pure C project, so to use the libraries within your C++ application
you need to explicitly state that you are using a C library. You can do this by
encompassing your FFmpeg includes using @code{extern "C"}.

See @url{http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3}.
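
For example, a typical include block could look like this (a sketch; the
exact set of headers depends on which libraries you actually use):

@example
extern "C" @{
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>
@}
@end example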

@section I'm using libavutil from within my C++ application but the compiler complains about 'UINT64_C' was not declared in this scope

FFmpeg is a pure C project using C99 math features; in order to enable C++
to use them, you have to append @code{-D__STDC_CONSTANT_MACROS} to your CXXFLAGS.
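
For example (a sketch; the file name is illustrative):

@example
g++ -D__STDC_CONSTANT_MACROS -c myapp.cpp
@end example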

@section I have a file in memory / an API different from *open/*read/libc; how do I use it with libavformat?

You have to create a custom AVIOContext using @code{avio_alloc_context};
see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c} in MPlayer or MPlayer2 sources.
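
In outline, reading from a memory buffer could look like this (a sketch only;
error handling is omitted, and the @code{mem_input} structure and
@code{mem_read} callback names are illustrative, not part of the API):

@example
struct mem_input @{ const uint8_t *data; size_t size, pos; @};

static int mem_read(void *opaque, uint8_t *buf, int buf_size)
@{
    struct mem_input *m = opaque;
    size_t left = m->size - m->pos;
    if (left == 0)
        return AVERROR_EOF;
    if ((size_t)buf_size > left)
        buf_size = left;
    memcpy(buf, m->data + m->pos, buf_size);
    m->pos += buf_size;
    return buf_size;
@}

/* ... then hand the callback to libavformat: */
unsigned char *iobuf = av_malloc(4096);
AVIOContext *avio = avio_alloc_context(iobuf, 4096, 0, &m,
                                       mem_read, NULL, NULL);
AVFormatContext *fmt = avformat_alloc_context();
fmt->pb = avio;
avformat_open_input(&fmt, NULL, NULL, NULL);
@end example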

@section Where can I find libav* headers for Pascal/Delphi?

See @url{http://www.iversenit.dk/dev/ffmpeg-headers/}.

@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?

See @url{http://www.ffmpeg.org/~michael/}.

@section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?

Even if it is peculiar, being network oriented, RTP is a container like any
other. You have to @emph{demux} RTP before feeding the payload to libavcodec.
In this specific case please look at RFC 4629 to see how it should be done.

@section AVStream.r_frame_rate is wrong, it is much larger than the frame rate.

r_frame_rate is NOT the average frame rate; it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
wrong if it is larger than the average!
For example, if you have mixed 25 and 30 fps content, then r_frame_rate
will be 150 (the least common multiple of 25 and 30).

@section Why is @code{make fate} not running all tests?

Make sure you have the fate-suite samples, and that the @code{SAMPLES} Make
variable, the @code{FATE_SAMPLES} environment variable, or the
@code{--samples} @command{configure} option is set to the right path.
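
For example, to download the samples and then run the full test suite
(a sketch; the sample path is illustrative):

@example
make fate-rsync SAMPLES=/path/to/fate-suite
make fate SAMPLES=/path/to/fate-suite
@end example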

@section Why is @code{make fate} not finding the samples?

Do you happen to have a @code{~} character in the samples path to indicate a
home directory? The value is used in ways where the shell cannot expand it,
causing FATE to not find files. Just replace @code{~} by the full path.

@bye