FFmpeg
libopencore-amr.c
Go to the documentation of this file.
98 static const uint8_t block_size[16] = { 12, 13, 15, 17, 19, 20, 26, 31, 5, 0, 0, 0, 0, 0, 0, 0 };
179 { "dtx", "Allow DTX (generate comfort noise)", offsetof(AMRContext, enc_dtx), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_ENCODING_PARAM },
332 static const uint8_t block_size[16] = {18, 24, 33, 37, 41, 47, 51, 59, 61, 6, 6, 0, 0, 0, 1, 1};
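The two block_size[] tables above map the 4-bit AMR frame type (the mode carried in each frame's table-of-contents byte) to a frame size in bytes, one table for AMR-NB (line 98) and one for AMR-WB (line 332). The sketch below shows how such a table is typically consumed when sizing a packet; the helper name is made up, and reading the NB table as payload-only sizes (so one byte is added for the TOC header, while the WB table already includes it) is an interpretation of the values, not something stated in the file.

#include <stdint.h>

/* Hypothetical helper, not taken from libopencore-amr.c: total byte size
 * of one AMR-NB frame derived from its table-of-contents byte.  Bits 3..6
 * of the TOC byte carry the frame type used to index the table; the NB
 * table stores payload bytes, so one byte is added for the TOC itself. */
static int amrnb_frame_size(uint8_t toc)
{
    static const uint8_t nb_block_size[16] = { 12, 13, 15, 17, 19, 20, 26, 31,
                                                5,  0,  0,  0,  0,  0,  0,  0 };
    return nb_block_size[(toc >> 3) & 0x0F] + 1;
}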
struct AMRWBContext AMRWBContext
AMRNBFrame frame
decoded AMR parameters (lsf coefficients, codebook indexes, etc)
Definition: amrnbdec.c:101
#define AV_OPT_FLAG_ENCODING_PARAM
a generic parameter which can be set by the user for muxing or encoding
Definition: opt.h:281
static int amr_decode_fix_avctx(AVCodecContext *avctx)
Definition: libopencore-amr.c:30
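amr_decode_fix_avctx() normalizes the codec context before decoding starts. Its exact body is not reproduced on this page; the following is a rough sketch (an assumption, not the file's code) of what such a fix-up typically enforces for AMR, i.e. mono 16-bit output at 8 kHz for AMR-NB or 16 kHz for AMR-WB when the caller did not set a rate. It assumes the usual libavcodec/libavutil internal headers are in scope.

/* Rough sketch, not the function's actual body. */
static int amr_decode_fix_avctx_sketch(AVCodecContext *avctx)
{
    const int is_wb = avctx->codec_id == AV_CODEC_ID_AMR_WB;

    if (!avctx->sample_rate)
        avctx->sample_rate = is_wb ? 16000 : 8000;      /* AMR defaults */

    if (avctx->channels > 1) {
        avpriv_report_missing_feature(avctx, "multi-channel AMR");
        return AVERROR_PATCHWELCOME;
    }

    avctx->channels       = 1;
    avctx->channel_layout = AV_CH_LAYOUT_MONO;
    avctx->sample_fmt     = AV_SAMPLE_FMT_S16;          /* 16-bit native samples */
    return 0;
}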
int ff_af_queue_add(AudioFrameQueue *afq, const AVFrame *f)
Definition: audio_frame_queue.c:43
int ff_alloc_packet2(AVCodecContext *avctx, AVPacket *avpkt, int size)
Definition: libavcodec/utils.c:1377
int ff_get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
Definition: libavcodec/utils.c:823
size_t av_strlcatf(char *dst, size_t size, const char *fmt,...)
Definition: avstring.c:100
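av_strlcatf() appends printf-formatted text to a size-bounded string, which is likely why it is cross-referenced here: the encoder builds its list of supported bitrates for a warning message that way. A small illustrative helper follows; the function name and the exact formatting are made up, only the eight AMR-NB bitrates themselves are standard.

#include <stddef.h>
#include <libavutil/avstring.h>

/* Sketch: build "4.75k 5.15k ... 12.20k " into buf without overflowing it. */
static void list_amrnb_bitrates(char *buf, size_t size)
{
    static const int rates[8] = { 4750, 5150, 5900, 6700, 7400, 7950, 10200, 12200 };
    int i;

    buf[0] = '\0';
    for (i = 0; i < 8; i++)
        av_strlcatf(buf, size, "%.2fk ", rates[i] / 1000.0);
}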
void avpriv_report_missing_feature(void *avc, const char *msg, ...) av_printf_format(2, 3)
void ff_af_queue_init(AVCodecContext *avctx, AudioFrameQueue *afq)
Definition: audio_frame_queue.c:27
void ff_af_queue_remove(AudioFrameQueue *afq, int nb_samples, int64_t *pts, int *duration)
Definition: audio_frame_queue.c:74
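ff_af_queue_init(), ff_af_queue_add() and ff_af_queue_remove(), together with ff_alloc_packet2(), form the usual pattern for fixed-frame-size audio encoders: queue the incoming frame's timing, encode, then hand the queued pts/duration to the packet that carries those samples. A hedged sketch of that pattern follows; the function name, the 32-byte packet size, and the assumption that the private context holds an AudioFrameQueue member named afq are illustrative, and flushing is omitted.

/* Sketch of the common lavc audio-encoder flow, not the file's exact code.
 * Assumes the usual internal headers (audio_frame_queue.h, internal.h). */
static int amr_encode_frame_sketch(AVCodecContext *avctx, AVPacket *avpkt,
                                   const AVFrame *frame, int *got_packet_ptr)
{
    AMRContext *s = avctx->priv_data;   /* assumed to contain AudioFrameQueue afq */
    int ret;

    if ((ret = ff_alloc_packet2(avctx, avpkt, 32)) < 0)  /* 32 B covers any NB frame */
        return ret;

    /* remember this frame's pts and sample count across the encoder delay */
    if ((ret = ff_af_queue_add(&s->afq, frame)) < 0)
        return ret;

    /* ... run the OpenCORE encoder into avpkt->data here ... */

    /* recover the pts/duration belonging to the samples just emitted */
    ff_af_queue_remove(&s->afq, avctx->frame_size, &avpkt->pts, &avpkt->duration);

    *got_packet_ptr = 1;
    return 0;
}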
static int decode(AVCodecContext *avctx, void *data, int *got_frame, AVPacket *avpkt)
Definition: crystalhd.c:868
struct AMRContext AMRContext
Generated on Tue Sep 2 2025 06:55:56 for FFmpeg by doxygen
